The abject failure of official global-warming predictions

Guest essay by Monckton of Brenchley

The IPCC published its First Assessment Report a quarter of a century ago, in 1990. The Second Assessment Report came out 20 years ago, the Third 15 years ago. Even 15 years is enough to test whether the models’ predictions have proven prophetic. In 2008, NOAA’s report on the State of the Global Climate, published as a supplement to the Bulletin of the American Meteorological Society, said: “The simulations rule out (at the 95% level) zero trends for intervals of 15 yr or more, suggesting that an observed absence of warming of this duration is needed to create a discrepancy with the expected present-day warming rate.”

To the continuing embarrassment of the profiteers of doom, the least-squares linear-regression trend on Dr Roy Spencer’s UAH satellite dataset shows no global warming at all for 18 years 6 months, despite a continuing (and gently accelerating) increase in atmospheric CO2 concentration, shown on the graph as a gray trace:

clip_image002
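For readers who want to reproduce the headline statistic, a minimal sketch of the least-squares trend calculation follows. The series here are synthetic placeholders, not the actual UAH data; real monthly anomalies can be substituted directly.

```python
# Least-squares linear-regression trend on a monthly anomaly series,
# expressed in C°/century -- the statistic quoted throughout this essay.
# The series below are synthetic placeholders, not the actual UAH data.

def trend_per_century(anomalies):
    """Ordinary-least-squares slope of monthly anomalies, in C°/century."""
    n = len(anomalies)
    xs = [i / 12.0 for i in range(n)]                      # time in years
    mean_x = sum(xs) / n
    mean_y = sum(anomalies) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, anomalies))
    var = sum((x - mean_x) ** 2 for x in xs)
    return 100.0 * cov / var                               # C°/yr -> C°/century

flat = [0.2] * 120                                         # ten years, no trend
rising = [0.01 * (i / 12.0) for i in range(120)]           # 0.01 C°/yr
print(trend_per_century(flat))      # 0.0
print(trend_per_century(rising))    # ~1.0 C°/century
```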

Dr Carl Mears’ RSS dataset shows no global warming for 18 years 8 months:

clip_image004

By contrast, the mean of the three much-altered terrestrial tamperature datasets since May 1997 shows a warming equivalent to a not very exciting 1.1 C°/century:

clip_image006

It is now time to display the graph that will bring the global warming scare to an end (or, at least, in a rational scientific debate it would raise serious questions):

clip_image008

The zones colored orange and red, bounded by the two red needles, are, respectively, the low-end and high-end medium-term predictions made by the IPCC in 1990 that global temperature would rise by 1.0 [0.7, 1.5] C° in the 36 years to 2025, equivalent to 2.78 [1.94, 4.17] C°/century (page xxiv). The boundary between the two zones is the IPCC’s then best prediction: warming equivalent to about 2.8 C°/century by now.
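The centennial-equivalent rates above follow by simple proportion; a quick check, using the figures quoted in the paragraph above, is:

```python
# Converting the IPCC's 1990 predictions -- warming over the 36 years
# 1990-2025 -- into the centennial-equivalent rates quoted above.

def centennial_equivalent(delta_t_c, years):
    """Warming of delta_t_c C° over `years` years, expressed as C°/century."""
    return 100.0 * delta_t_c / years

for label, dt in (("low", 0.7), ("central", 1.0), ("high", 1.5)):
    print(f"{label:7s} {dt} C° by 2025 -> {centennial_equivalent(dt, 36):.2f} C°/century")
# low     0.7 C° by 2025 -> 1.94 C°/century
# central 1.0 C° by 2025 -> 2.78 C°/century
# high    1.5 C° by 2025 -> 4.17 C°/century
```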

The green region shows the range of measured global temperatures over the quarter-century since 1990. GISS, as usual following the alterations that were made to all three terrestrial datasets in the two years preceding the Paris climate conference, gives the highest value, at 1.71 C°/century equivalent. The UAH and RSS datasets are at the lower bound of observation, at 1.00 and 1.11 C°/century respectively.

Two remarkable facts stand out. First, the entire interval of observational measurements lies below the IPCC’s least estimate in 1990, individual measurements falling between about one-third and three-fifths of the IPCC’s then central estimate.

Secondly, the interval between the UAH and GISS measurements is very large – 0.71 C°/century equivalent. The GISS warming rate is higher by 71% than the UAH warming rate – and these are measured rates. But the central IPCC predicted rate is not far short of thrice the UAH measured rate, and the highest predicted rate is more than four times the UAH measured rate.

The absolute minimum uncertainty in the observational global-temperature measurements is thus 0.71 C°/century, the difference between the UAH and GISS measured warming rates. Strictly speaking, therefore, it is not possible to be sure that any global warming has occurred unless the warming rate is at least 0.71 C°/century. On the mean of the RSS and UAH datasets, the farthest one can go back in the data and yet obtain a rate less than 0.71 C°/century is August 1993.

clip_image010

In short, the Pause may in reality be as long as 22 years 5 months – and the more the unduly politicized keepers of the terrestrial records tamper with them with the effect of boosting the rate of warming above the true rate the more they widen the observational uncertainty and hence increase the possible length of the Pause.
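The “farthest back” search described above can be sketched as follows. The series here is synthetic, standing in for the RSS/UAH mean; the method is simply to scan candidate start months for the earliest one whose trend to the present falls below the 0.71 C°/century uncertainty threshold.

```python
import numpy as np

def trend_c_per_century(anoms):
    """OLS slope of a monthly anomaly series, in C°/century."""
    years = np.arange(len(anoms)) / 12.0
    return 100.0 * np.polyfit(years, anoms, 1)[0]

def earliest_start_below(anoms, threshold=0.71):
    """Earliest start month whose trend to the end of the series is below
    `threshold` C°/century; None if no such month exists."""
    for start in range(len(anoms) - 24):          # require >= 2 years of data
        if trend_c_per_century(anoms[start:]) < threshold:
            return start
    return None

# Synthetic stand-in: ten years warming at 4 C°/century, then 15 flat years.
t = np.arange(300) / 12.0
anoms = np.where(t < 10.0, 0.04 * t, 0.4)
start = earliest_start_below(anoms)
print(start, trend_c_per_century(anoms[start:]))
```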

In 1995 the IPCC offered a prediction of the warming rates to be expected in response to various rates of increase in CO2 concentration:

clip_image012

clip_image014

The actual increase in CO2 concentration in the two decades since 1995 has been 0.5% per year. So there should have been 0.36 C° of global warming since then, equivalent to 1.8 C°/century, as shown by the single red needle above.
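The arithmetic behind the red needle is elementary; as a quick check (the 0.36 C° figure being the IPCC’s 1995 prediction for CO2 growth of 0.5%/yr):

```python
# The red needle's arithmetic: the IPCC's 1995 table gives 0.36 C° of
# predicted warming for two decades of CO2 growth at 0.5% per year.

predicted_warming = 0.36                    # C° over 1995-2015 (IPCC, 1995)
period_years = 20
rate = 100.0 * predicted_warming / period_years
print(f"{rate:.1f} C°/century")             # 1.8 C°/century

# For reference, 0.5%/yr compounds over 20 years to about a 10.5% rise:
total_co2_growth = 1.005 ** 20 - 1
print(f"{total_co2_growth:.1%}")
```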

Once again the graph comparing observation with prediction displays some remarkable features. First, the IPCC’s 1995 prediction of the warming rate to the present on the basis of what has turned out to be the actual change in CO2 concentration over the period since 1995 was below the entire interval of predictions of the warming rate in its 1990 report.

Secondly, all five of the principal global-temperature datasets show warming rates below even the IPCC’s new and very much lower predicted warming rate.

Thirdly, the spread of temperature measurements is wide: from 0.38 C°/century equivalent for UAH up to 1.51 C°/century equivalent for GISS, a staggeringly wide interval of 1.13 C°/century. The GISS warming rate over the past two decades is four times the UAH warming rate.

Fourthly, the measured warming rate has declined compared with that measured since 1990, even though CO2 concentration has continued to increase.

clip_image016

So to the 2001 Third Assessment Report. Here, the IPCC, at page 8 of the Summary for Policymakers, says: “For the periods 1990 to 2025 and 1990 to 2050, the projected increases are 0.4-1.1 C° and 0.8-2.6 C° respectively.” The centennial-equivalent upper and lower bounds are shown by the two red needles in the graph above.

Once again, there are some remarkable revelations in this graph.

First, both the upper and lower bounds of the interval of predicted medium-term warming, here indicated by the two red needles, have been greatly reduced compared with their values in 1990. The upper bound is now down from 4.17 to just 3.06 C°/century equivalent.

Secondly, the spread between the least and greatest measured warming rates remains wide: from –0.11 C°/century equivalent on the RSS dataset to +1.4 C°/century equivalent on the NCEI dataset, an interval of 1.51 C°/century equivalent. Here, as with the 1990 and 1995 graphs, the two satellite datasets are at the lower bound and the terrestrial datasets at or close to the upper bound.

Which datasets are more likely to be correct, the terrestrial or the satellite datasets?

The answer, based on the first-class research conducted by Anthony Watts and his colleagues in a poster presentation for the Fall 2015 meeting of the American Geophysical Union, is that the satellite datasets are closer to the truth than the terrestrial datasets, though even the satellite datasets may be suffering from urban heat-island contamination to some degree, so that even they may be overstating the true rate of global warming. The following graph shows the position:

clip_image018

NOAA’s much-altered dataset (T. Karl, prop., say no more) appears to have overstated the true warming rate by some 60%. Watts et al. determined the true warming rate over the continental United States by a sensible and straightforward method: they adopted as normative a standard for the ideal siting and maintenance of temperature monitoring stations that had been independently drawn up and peer-reviewed, applied that standard to every station in the contiguous United States, and excluded all stations that did not comply with it. The result, in blue, is that from 1979 to 2008 the true rate of warming over the continental U.S. was not the 3.2 C°/century equivalent found by NOAA, nor even the 2.3 C°/century equivalent found by UAH, which keeps a separate record for the 48 states of the contiguous U.S., but just 2.0 C°/century equivalent.
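The screening method lends itself to a very small sketch. The station records below are illustrative placeholders (the real Watts et al. analysis used the contiguous-U.S. network and a peer-reviewed siting standard); the point is only the mechanics of comparing the compliant subset with the full network.

```python
# Sketch of the station-screening comparison described above: average the
# per-station trends of the full network, then of only those stations that
# meet the siting standard. All numbers here are illustrative placeholders.

def mean_trend(stations):
    """Mean of per-station warming trends, in C°/century."""
    return sum(s["trend"] for s in stations) / len(stations)

stations = [
    {"id": "A", "compliant": True,  "trend": 2.0},  # well-sited
    {"id": "B", "compliant": True,  "trend": 1.9},  # well-sited
    {"id": "C", "compliant": False, "trend": 3.4},  # e.g. sited over asphalt
    {"id": "D", "compliant": False, "trend": 3.1},  # e.g. near AC exhaust
]

compliant = [s for s in stations if s["compliant"]]
print(f"all stations:     {mean_trend(stations):.2f} C°/century")
print(f"compliant subset: {mean_trend(compliant):.2f} C°/century")
```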

On this evidence, the satellites are far closer to the mark than the terrestrial datasets.

Thirdly, the measured rate of warming has again fallen, directly in opposition to the continuing (and gently accelerating) increase in atmospheric CO2 concentration and in anthropogenic forcings generally.

This inexorably widening divergence between prediction and reality is a real and unexplained challenge to the modelers and their over-excited, over-egged predictions. The warming rate should be increasing in response not only to past forcings but also to the growth in current anthropogenic forcings. Yet it has been declining since the mid-1980s, as the following interesting graph shows:

clip_image020

At no point has the rate of global warming reached the lower bound of the interval of global warming rates predicted by the IPCC in 1990:

clip_image022

Displaying the three prediction-vs.-reality graphs side by side shows just how badly off beam have been the official predictions on the basis of which governments continue to squander trillions.

clip_image023 clip_image024 clip_image025

The graphs show between them a failure of prediction that is nothing less than abject. The discrepancies between prediction and observation are far too great, and far too persistent, and far too contrary to the official notion of high climate sensitivity, to be explained away.

The West is purposelessly destroying its industries, its workers’ jobs, its prosperity, its countryside, and above all its scientific credibility, by continuing to allow an unholy mesalliance of politicians, profiteers, academics, environmental extremists, journalists and hard-left activists to proclaim, in defiance of the data now plainly shown for all to see for the first time, that the real rate of global warming is “worse than we thought”. It isn’t.

490 Comments
u.k(us)
January 13, 2016 4:03 pm

“The West is purposelessly destroying its industries, …”
=============
Ain’t nobody stupid enough to push that button again, is there ?

Terry
January 13, 2016 4:04 pm

It is undoubtedly the case that climate model predictions are not consistent with the observed (and adjusted) temperature data, and they appear not to have improved since 1990.
But this is not evidence that emissions of greenhouse gases have no impact on climate – only that scientific knowledge of atmosphere, ocean circulation, feedback mechanisms etc are inadequately modelled or understood. Confidence claims and assurances that the science is settled are inappropriate.
The “pause” seems in large part an artifact of el nino in 1998. All that has been proven is that however many times the same data set is analysed, you get the same answer – just as you do if you take a little number away from a big one!
The open questions which need to be answered are:
– can the science be improved so that observations and models are more confidently aligned
– if there is an issue what is the best solution – reduce emissions , mitigate or ignore.

Reply to  Terry
January 13, 2016 4:08 pm

The pause does not depend on the 1998 El Niño spike, for that was offset by the 2010 spike. The models have failed.

Editor
Reply to  Monckton of Brenchley
January 13, 2016 4:44 pm

Christopher, further to your comment about the spike due to the 1997/98 El Nino, that event was followed by the 1998/99/00/01 La Nina.
Cheers.

Reply to  Terry
January 13, 2016 4:38 pm

The “pause” seems in large part an artifact of el nino in 1998.

The slope for RSS is negative whether you start from May 1997 or November 2000. See:
http://www.woodfortrees.org/plot/rss/from:1997.3/plot/rss/from:1997.3/trend/plot/rss/from:2000.8/trend

brians356
Reply to  Terry
January 13, 2016 9:16 pm

“scientific knowledge of atmosphere, ocean circulation, feedback mechanisms etc are inadequately modelled or understood. Confidence claims and assurances that the science is settled are inappropriate.”
Very comforting to know how little uncertainty underpins A) dismantling the world’s incredibly inexpensive, efficient, and reliable fossil fuel-based energy production and distribution system, and B) condemning billions of human beings to a hopeless subsistence existence, and untold millions of the middle class to declining resources and security.

TYoke
Reply to  brians356
January 14, 2016 12:59 am

Brian, your response is the correct one. Terry admits that the predictions are inconsistent with observations, but seems perfectly ready nonetheless, to inflict Trillions of dollars of costs on the world economy. Just wave off the fact that the theory has failed the most important test of validity, and barge ahead regardless.

Gloateus Maximus
Reply to  Terry
January 16, 2016 3:00 pm

Ignore is the only policy based upon reality.
CO2 has risen monotonically since 1945. During that interval, the slope of temperature was strongly negative until c. 1977, then weakly positive until c. 1996 (or 1993-4), then slightly negative again since c. 1997.
And for this, we are to reduce CO2 emissions? Why, when the increase has benefited life on this planet?

Robert B
January 13, 2016 4:51 pm

“The simulations rule out (at the 95% level) zero trends for intervals of 15 yr or more, suggesting that an observed absence of warming of this duration is needed to create a discrepancy with the expected present-day warming rate.”
How did this get interpreted by alarmists once the pause was obvious? That the 95% confidence interval of a fit needed to be below 0°C/century, and that the CIs were 3 times as big as one would estimate assuming random noise. Basically, the world needed to be cooling at 2°C/century and heading for an ice age before they would concede the models were flawed.
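Robert B’s “3 times as big” point can be illustrated with the standard effective-sample-size correction for autocorrelated series (the r1 = 0.8 figure below is an assumed value, roughly typical of monthly global anomalies):

```python
# Why trend confidence intervals on monthly anomalies are so wide: monthly
# data are strongly autocorrelated, so the usual correction shrinks n to
# n_eff = n * (1 - r1) / (1 + r1), where r1 is the lag-1 autocorrelation.

def lag1_autocorr(y):
    """Lag-1 autocorrelation of a series."""
    n = len(y)
    m = sum(y) / n
    num = sum((y[i] - m) * (y[i + 1] - m) for i in range(n - 1))
    den = sum((v - m) ** 2 for v in y)
    return num / den

def n_effective(n, r1):
    """Effective number of independent samples."""
    return n * (1.0 - r1) / (1.0 + r1)

# With r1 = 0.8, fifteen years (180 months) of data act like about 20
# independent points, widening the trend CI by about sqrt(180/20) = 3.
n, r1 = 180, 0.8
print(round(n_effective(n, r1)))               # 20
print(round((n / n_effective(n, r1)) ** 0.5))  # 3
```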

John Whitman
January 13, 2016 4:57 pm

Christopher Monckton wrote in his lead post,
“To the continuing embarrassment of the profiteers of doom, the least-squares linear-regression trends on Dr Roy Spencer’s UAH satellite dataset shows no global warming at all for 18 years 6 months, despite a continuing (and gently accelerating) increase in atmospheric CO2 concentration . . .”

Feynman would expect the “profiteers of doom” to raise a white flag conceding falsification of their hypothesis. But, I think their hypothesis is clearly not even just wrong (falsified), it was only ever just an ages old mythological story signifying a deep yearning for guilt via an original sin meme.
Christopher Monckton, I thank you for your long-term, persistent efforts at keeping alive the story that there has been no significant change in GAT for more than 1.5 decades and possibly more than 2 decades. It is a pleasant rhythmic drum beat you have established with your periodic posts on the topic.
John

January 13, 2016 6:51 pm

Reblogged this on Public Secrets and commented:
The Intergovernmental Panel on Climate Change (IPCC) can’t get a single prediction about global warming right, and yet we’re supposed to take drastic, economically harmful action on their say so? Yeah, right.

Tom in Florida
January 13, 2016 7:09 pm

“The IPCC published its First Assessment Report a quarter of a century ago…”
When phrased like that it gives one a real feel for how long this has been going on. And it also makes me realize how old I am getting, not that I needed to be reminded again.

JohnWho
Reply to  Tom in Florida
January 13, 2016 7:42 pm

To a fellow Floridian,
does it help to know that not all of that quarter century was this century?

Phil's Dad
Reply to  JohnWho
January 13, 2016 8:27 pm

97% of it surely.

Tom in Florida
Reply to  JohnWho
January 14, 2016 4:31 am

JohnWho January 13, 2016 at 7:42 pm
“To a fellow Floridian, does it help to know that not all of that quarter century was this century?”
No.

RD
January 13, 2016 8:57 pm

Another excellent essay from this author, thanks.

Dr. S. Jeevananda Reddy
January 13, 2016 9:18 pm

Is Global Warming a Reality?
Part-I: The use of average of a wider area to study its impact on localized areas like agriculture is bad in science as the average present wide space and time variants in it. Global average temperature anomaly consist of wide variants in terms of climate system and global circulation patterns wherein wind speed and direction plays an important role. Such an average has no significance when we assess the impact on changes in weather, ice and in sea level and as well on water resources & agriculture. Though we are talking of anomaly, they present high variations in space and time: ocean to land; Southern Hemisphere to Northern Hemisphere; country to country and region to region within a country; location to location within a region.
Global warming is put on false foundations. In the literature, it is commonly seen that the word “climate change” is misused as de-facto “global warming”; and “the global [land & ocean] average temperature anomaly” is misused as de-facto global warming. Global warming refers to temperature while climate change refers to meteorological parameters, more specifically to precipitation and temperature. Precipitation presents fluctuations but they are region specific, following general circulation pattern-climate system. This modifies the temperature accordingly. Also, unusual cold & warm conditions prevailing over different parts of the globe are associated with local general circulation patterns and associated multi-decadal Oscillations/Southern Oscillation. For example, Polar Regions are affected by the frequency & duration of occurrence of circumpolar Vortex [6 months day and six months night] and in India the location of High pressure belt around Nagpur.
IPCC in its AR5 stated that more than half of the global [land & ocean] average temperature anomaly is associated with the “greenhouse effect” component [global warming is a part of it] and the remaining [less than half] component is associated with the “non-greenhouse effect” [ecological changes]. It must be noted that the urban-heat-island effect, with its dense met network, is overemphasized, and the rural-cold-island effect, with its less dense met network, is underemphasized, in the global [land & ocean] temperature anomaly, which is therefore biased warm; but this is not so with the satellite data. The model-based predicted temperature anomaly curves are compared with the global average temperature anomaly but not with the global warming component.
The consensus [the average] of the climate models used by the IPCC for their 5th Assessment Report shows no long-term increase in global precipitation from 1861 [the start of the mean of the climate model outputs] to 1999 [the end of the 20th century] though their global average temperature anomaly during the same period presented an increasing trend. However, increase or decrease in precipitation shows a decrease or increase in temperature. For example, during the 2002 and 2009 drought years in India, the annual average temperature has gone up by 0.7 and 0.9 oC.
Sea Breeze and land Breeze patterns over East and West Coasts of India are different. Here we can get the clue from the human comfort equation wherein under the same temperature & humidity conditions with changing wind speed [direction also play an important role based on the location] the human comfort changes. Same way, heat and cold waves, wherein the high pressure belt location and winds decide the direction in which they move and create the heat wave condition in summer and cold wave condition in winter. It is like advection of heat or cold based on winds direction & strength.
Though the precipitation presents fluctuations with different periods, in a rare case of solidarity they all present below or above the average precipitation condition. For example, in the Southern Hemisphere, the fluctuations, though, cycles varied from 52 to 66 years [the main cycle], they all coincidentally come under the below the average condition in 2013. This resulted above the average temperature anomaly in Southern Hemisphere. However, this will be part of natural variation superposed on trend.
The central England the longest continuous instrumental record of annual mean temperature shows 0.95 oC rise during 356 years [1659-2016]. The peak similar to that observed after 1990 was also presented after 1730.
NOAA satellite data of sea ice extent from 1972 – 1990 presented a fluctuation pattern but Northern Hemisphere fluctuation pattern is in opposite direction to that of the Southern Hemisphere fluctuation pattern. That means in one hemisphere when ice is diminishing in the other hemisphere ice is increasing. ARGO buoys data of 430 years show the Oceans are warming by around 0.23 oC per Century.
Part-II: There is a big discussion on: Why there are differences between surface temperature measured in Stevenson Screens and atmosphere as measured by satellite. Here we are not talking of station value but a region value and the averaging over that region. Also, we are talking of trend and not the fluctuation part. To get meaningful answer to such questions we must compare and explain for the causes of differences.
Reports state that “RSS satellite monthly global mean surface temperature anomaly data set shows no global warming for 18 y 8 m since May 1997, though one-third of all the anthropogenic forcing have occurred during the period of pause. The UAH satellite data showed a pause almost as long as the RSS data set. However, the much altered surface datasets show a warming rate 1.1 oC per Century during the period of pause for May 1997 to September 2015. During 1997-2016 the difference in the warming trend between RSS [satellite data] and GISS [surface data] is around 0.25 oC. Also, Global Temperature trends [Phil Jones] repeated with around 0.15-0.16 oC during 1860-1880, 1910-1940 & 1975-2009. Between 1979 and 2014 two volcanoes have occurred, namely Chichon in 1983 and Pinatubo in 1991 presented cooling effect prior to 1997/98 El Nino.
Part III: When the data presents different patterns, blindly fitting the data to linear, it gives misleading conclusions. For example, if we use the data of 1978 to 2015 [444 months or 37 years], UAH satellite data presents a trend of about 1.14 oC per Century. After seeing this trend that contains period before and after 1997/98 El Nino, I pointed out saying that while fitting such data, we must eliminate the anomaly associated with El Nino affect or fit the data sets prior and after El Nino periods separately to see the trend. I have seen this in https://kenskingdom.wordpress.com/ an article “Earth and Water” on January 13, 2016. The analysis of both the periods namely prior and after 1997 showed a “nearly zero trend”. Some mentioned that prior to 1997 volcanic activity cooled the temperature and after 1997 the raised temperature did not come down in surface data but in satellite data it did come down and followed a zero trend pattern. If the difference is zero at 1997, it is 0.25 oC at 2015. That is the difference in around 18 years is 0.25 oC and the surface data showed an increase of 1.1 oC per Century.
In the North Extra Tropics: the differences between the prior and after 1997 data series of 229 and 216 months data series with zero trend showed a sudden jump around 1997 by 0.48 oC on land 0.26 oC on ocean. This is the difference between volcanic cooling and El Nino warming on land and in ocean. The below table show the differences under three surface measurements and two satellite measurements in three different periods:
Measurements [°C per Century]   1990-2015   1995-2015   2001-2015
Surface measurements:
  GISS                             1.71        1.51        0.76
  HadCRUT4                         1.62        1.52        1.11
  NCEI                             1.58        1.49        1.40
  Average                          1.64        1.51        1.09
Satellite measurements:
  RSS                              1.11        0.42       -0.11
  UAH                              1.00        0.38        0.12
  Average                          1.05        0.40        0.00
After the El Nino in 1997/98 the global average temperature anomaly has gone up and maintained that trend under the ground based data series; but under satellite data series the temperature anomaly has come down and presented no trend. Under the Southern Oscillation, the temperature rises during El Nino phase and thereafter the temperature drops down and later recovers to normal condition. However, the changes in temperature associated with the Southern Oscillation and volcanic activity becomes part of intra-annual and intra-seasonal variations and thus their contribution to long term trend is insignificant.
From the above table it is clear that in all the data sets, they presented a decreasing trend per Century. Can we call this a non-linearly decreasing trend in the contribution of anthropogenic greenhouse gases effect?
Some argued that the atmospheric temperature anomalies are necessarily different from surface anomalies. Usually, atmospheric anomalies are less than the surface maximum in hot periods and higher than the surface anomalies in cooler periods. It is like night and day conditions. We need to average them and thus they should present the same averages both surface & satellite measurements.
Here we must remember the fact that the surface data does not cover the entire climate system and general circulation patterns. Particularly in land areas, the met stations are dense in urban areas and sparse in rural areas — this is clearly evident in Australia’s met net work and also the periods are different. Satellite data covers all the climate systems and general circulation patterns. Thus, surface data is more biased with urban conditions and thus expected high global temperatures. This is what the present scenario. Lowering the past data series and raising the current data series also add to this. Naturally surface data will be higher than satellite data.
What we need to address now is: study the local regional temperature patterns and look in to causes for regional and local differences. Then only it serves the needs of policy makers to common man. Without that, simply harping on global average will lead to disasters in coming decades with trillions of dollars spending on good for nothing theory of “Global Warming & Carbon Credits”.
Dr. S. Jeevananda Reddy
Formerly Chief Technical Advisor – WMO/UN & Expert – FAO/UN
Fellow, Andhra Pradesh/Telangana Akademy of Sciences
Convenor, Forum for a Sustainable Environment
Hyderabad, Telangana, India
jeevanandareddy@yahoo.com

David A
Reply to  Dr. S. Jeevananda Reddy
January 14, 2016 3:06 am

…Need paragraphs, shorten with link. ( just a suggestion)

JohnKnight
Reply to  Dr. S. Jeevananda Reddy
January 14, 2016 4:58 pm

Dr. S. Jeevananda Reddy,
“What we need to address now is: study the local regional temperature patterns and look in to causes for regional and local differences. Then only it serves the needs of policy makers to common man. Without that, simply harping on global average will lead to disasters in coming decades with trillions of dollars spending on good for nothing theory of “Global Warming & Carbon Credits”.”
Thank you for trying to get the science back on track, and back to work trying to help real people with real need of useful climatology/meteorological information and research. No doubt some in the field have never stopped in the effort you champion here, and it’s a shame they are being drowned out by this grotesque caricature of real science they call “climate change”.

Scottish Sceptic
January 14, 2016 12:12 am

The big difference between alarmists and sceptics is how much we think we know. They know they know how the climate must change – we know they don’t know. But better still, we know we don’t know and that anyone who thinks they can predict the climate is a fool …
… except this year we have an El Niño that failed to produce the expected warming. In addition there are two reasons to believe we are about to see cooling (low sunspot activity and what appears to be a 60-year cycle in the climate, which peaked around 2010).
So, 2016 appears to have the greatest chance of a trend as three different indicators suggest cooling. If however we do see cooling (in the only reliable temperature from the satellites), then there is a good chance of global cooling in the next 20 years. But given how small these global changes are compared to local effects, I doubt anyone will be able to spot it except by looking at statistics.

tatelyle
January 14, 2016 1:18 am

Their theory does not work because the primary feedback agent is probably albedo, rather than CO2. Prof Clive Best has a good review of the albedo feedback concept over on his site.
http://clivebest.com/blog/?p=7024

madmikedavies
January 14, 2016 2:18 am

Climate ‘Computer modelling’
Posted on January 14, 2016 by madmikedavies
The blogosphere is full of claims of bad science and data corruption in the climate change machine. Most of these claims are sound and CAGW is mostly discredited.
To confound the public and compound their errors and bad science billions of dollars have been spent on supercomputers and computer models which continually make alarmist predictions which are at odds with the empirical observations.
The whole UN/IPCC proposals, claims and plans are built on this crumbling edifice. I came across the following quote which seems apt.
‘To Err is Human; To Really Foul Things Up Requires a Computer’
madmike

David A
Reply to  madmikedavies
January 14, 2016 3:17 am

Yes the projected harms are worse science on top of failed science.
Instead of basing their projected harms on actual observations of both T change and harm flux, they are based on the modeled mean of climate models that are consistently wrong in one direction.
Inconceivable! But the shysters use falsified predictions to project harms. Models trump reality.
Additionally, even if greater warming ever manifests, there is much evidence that the projected harms are greatly exaggerated and the benefits largely underestimated.

madmikedavies
Reply to  David A
January 14, 2016 3:55 am

What I don’t understand is that I was able to dismiss the CAGW claims myself over 5 years ago just by examining the graphs in the Wikipedia article on Milankovitch, plus my knowledge of geology 101 and oscillators. How has this fraud persisted so long?

January 14, 2016 4:30 am

When all the “solutions” to a problem strangely converge towards socialism, then we don’t have science, we have a secular religion.

richardscourtney
Reply to  buckwheaton
January 14, 2016 5:32 am

buckwheaton:
When all the solutions to a problem all converge towards socialism then we have additional evidence of the efficacy of socialism.
If there is a “problem” and if all its solutions converge towards socialism then any other conclusion is a non sequitur.
Richard

MarkW
Reply to  richardscourtney
January 14, 2016 10:39 am

Real world data shows that socialism makes all problems worse.

richardscourtney
Reply to  richardscourtney
January 14, 2016 1:26 pm

MarkW:
Real world data shows your political opinions are wrong.
Richard

JohnKnight
Reply to  richardscourtney
January 14, 2016 5:20 pm

richardscourtney
He put the word solutions in quotes, and you know why, I believe . . And that makes what you just did . . well, you know what, I believe ; )

richardscourtney
Reply to  richardscourtney
January 18, 2016 8:38 am

JohnKnight:
I do know what I did: I told the truth.
I don’t know – and I don’t want to know – what you “believe”.
Richard

Leonard Weinstein
January 14, 2016 5:12 am

I truly do not see the need of looking back only a decade or two or even three to show that models are wrong. Going from 1940 to the present only shows 0.5 C per century based on the surface data (which is likely in error on the high side anyway). The period of 1850 to present only shows less than 0.5 C per century. Picking a start point in the local low period (1945 to 1978) and going forward is cherry picking for a slope. There is no basis from the data for saying there is any measurable effect, even though it is likely there is a small human effect.

Jeff
January 14, 2016 5:22 am

“…the mean of the three much-altered terrestrial tamperature datasets since May 1997…”
TAMPERATURE! I love it. Make it so.

Bartemis
Reply to  Jeff
January 14, 2016 8:47 am

That’s a beaut. I imagine the conversation between Karl and his superiors went something like that classic scene in Casablanca:
Captain Renault (Karl): But, I’ve no excuse to [alter the data].
Major Strasser (his boss): FInd one.

Rick (Dr. Curry): How can you [change the data]? On what grounds?
Captain Renault (Karl): I’m shocked, shocked! To find [that the two sea surface measurements differ in quality]. I’ll adjust the best ones to agree with the worst ones.

Tom Graney
January 14, 2016 5:50 am

Can someone comment on creating a simple model that duplicates observed temperature data in terms of long-term trend and month-to-month variability? When I create such a model, and the month-to-month variability is high enough to match actual data (using random values), the resulting trends are too great.
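Mr Graney’s question can be explored with a toy simulation. The sketch below is my own illustration (nothing from the head post, and the noise amplitudes are made up): a fixed trend plus random monthly noise, once independent (white) and once persistent (AR(1)), with the least-squares trend fitted to each. Comparing the fitted trends against the true one shows how much random month-to-month variability can move an 18-year trend.

```python
import numpy as np

rng = np.random.default_rng(42)
months = 12 * 18              # an 18-year window, as in the head post
t = np.arange(months) / 12.0  # time in years

true_trend = 0.011  # deg C/year, ~1.1 C/century as in the terrestrial mean

def ols_trend(y):
    """Least-squares linear trend in deg C/year."""
    return np.polyfit(t, y, 1)[0]

# Case 1: independent (white) month-to-month noise
white = true_trend * t + rng.normal(0.0, 0.15, months)

# Case 2: persistent AR(1) noise, closer to real anomaly series
phi = 0.8
eps = rng.normal(0.0, 0.15, months)
ar = np.zeros(months)
for i in range(1, months):
    ar[i] = phi * ar[i - 1] + eps[i]
red = true_trend * t + ar

print(f"fitted trend, white noise: {ols_trend(white):.4f} deg C/yr")
print(f"fitted trend, AR(1) noise: {ols_trend(red):.4f} deg C/yr")
```

Re-running with different seeds gives a feel for the spread of fitted trends under each noise model; persistent noise generally lets the fitted trend wander much further from the truth than white noise of the same monthly amplitude.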

benben
January 14, 2016 7:34 am

It would be interesting – and not to mention more relevant – to see the difference between the various temperature records and more recent models (e.g. AR5). The models in 1990 really aren’t comparable to current models, so it’s fairly unfair to keep using them as a comparison point. But I am really interested in seeing how the most recent models perform!
Cheers,
Benben

Reply to  benben
January 14, 2016 5:24 pm

In response to the apical stone of an Egyptian pyramid, one needs a minimum of 15 years’ data to make a fair comparison between prediction and observation. So we can’t test the AR4/CMIP3 or AR5/CMIP5 models yet.
And it’s not unfair to show how wildly exaggerated were the original predictions on which the scare was based.

benben
Reply to  benben
January 15, 2016 4:35 am

Dear Monckton of Brenchley,
Thank you for your reply.
The thing is, these projections from the 90’s weren’t much more than back of the envelope linear extrapolations. They weren’t used for anything else than to say ‘look, there might be a problem in this direction, more research is necessary’. And then a lot of research was done, and based on AR5 projections people still think there is a problem.
Obviously I really enjoy debating this with my colleagues, which is why I keep reading this blog. And just yesterday I spent an evening talking with a couple of policy makers in environmental policy on this topic. They said that current policy is made on the basis of current science, which is a valid point to make. If WUWT wants to make a useful contribution to the debate then critiquing AR5 is the way to go (and conversely, critiquing 1990 models weakens your argument).
With respect to having to wait 15 years… I agree that would obviously be best. But it’s not really a useful option when deciding what policy to make today (and it’s not a lot of fun either! I don’t want to wait 15 years to continue debating this with my friends). Looking at how accurate AR5 models are at recreating past climate patterns is a valid approach for getting a rough feel for how accurate they will be in predicting future trends. Important point: for policy purposes, climate models don’t need to be perfect, they just need to be good enough (e.g. economic models suck, but we still use them every day). The interesting question is, are AR5 models good enough for policy?
Also interesting would be to see how AR5 compares to AR3 and AR1. I’ve seen a lot of comments here that climate models have not become any better at all in the past decade. Is that actually true? Can we have a graph showing that? It would be a good stick to beat my science friends with!
Cheers,
benben

Reply to  benben
January 15, 2016 7:06 am

It’s not clear that Lord Monckton has always been so reticent about AR5. In connection with the Monckton et al. paper’s Fig. 6, which supposedly demonstrated the skill of the authors’ “irreducibly simple climate model,” he and his co-authors say of the “IPCC (2013 final draft)” that it projects a 0.13 K/decade trend for the remainder of this century, a trend that exceeds the 0.11 K/decade trend the Monckton et al. paper says the HadCRUT4 dataset exhibited for the last 67 years.
Unfortunately, that paper didn’t burden its readers with the forcing assumptions on which that graph was based, so its logic is, well, obscure. Since the RCP 6.0 scenario’s trend for the rest of the century is over half again the trend it recommends be assumed for the last 67 years, for example, that graph’s trend comparison could be seen to suggest that, if anything, the IPCC may actually have underprojected the coming trend, not overprojected it.
Of course, the HadCRUT4 values have resulted from numerous trend-increasing adjustments, and one can always reach different conclusions by using different datasets, time intervals, and/or forcing values. The values mentioned above are merely the ones that the authors inexplicably thought demonstrated the superiority of their model’s skill over that of the models on which the IPCC relies.

Reply to  benben
January 17, 2016 3:44 am

Mr Born, having been caught lying in the opening sentence of a head posting futilely and ignorantly attempting to attack our paper in vol. 60 no. 1 of the Science Bulletin of the Chinese Academy of Sciences, and having been called out on his lie by numerous commenters, and having had the pseudoscience in that posting utterly dismantled, has been whining petulantly ever since.
Let readers decide for themselves by downloading the paper from scibull.com. From the home page just click “Most read papers” and we are by an order of magnitude the all-time no. 1.

benben
Reply to  benben
January 17, 2016 7:18 am

AR5 my dear sirs, AR5. I’m not saying anything about who is wrong or right. I’m just much more interested in seeing how modern models perform than how those really simplistic 1990’s graphs look!
Cheers,
Ben

Reply to  benben
January 19, 2016 3:16 am

Mr Born , having been caught lying in the opening sentence of a head posting. . . .
So long as Lord Monckton continues thus to slander me in his attempts to divert attention from the substance, which in this case is that his paper’s purported demonstration of skill is anything but, I will continue to set forth the facts that show that I’m not the one who’s the liar here.
The basis for his allegation is a passage in which I referred to a post of mine as a request and to his reply post as turning it down.
The issue was the contents of Monckton et al.’s Table 2. That table’s caption claims that all of its entries, which Monckton et al. refer to as “transience fractions,” were “derived from” a paper by Gerard Roe. Unless “derived from” means “inconsistent with,” however, that caption is a falsehood. Roe’s Fig. 6 shows that at every point in time after t = 0 the response value for a higher-feedback system must exceed that for every lower-feedback system. In contrast, the Monckton et al. table’s first-row entries dictate that in the early years the lowest-feedback system’s response exceed higher-feedback systems’.
Readers before me had placed those quantities at issue in blog threads in which Lord Monckton participated. Characteristically, however, any answers he gave were at best evasive; even in the face of objections that such values appeared to be non-physical he failed to explain how he could possibly have inferred from Roe et al. that the zero-feedback values would be unity for all time values.
To elicit a clear explanation, therefore, I cranked up the volume: I wrote a post specifically entitled “Reflections on Monckton et al.’s Transience Fraction.” In that post I explicitly stated that the manner in which Monckton et al. inferred that table’s values had not been made entirely clear.
So it was hardly a stretch for a subsequent post of mine to refer to that earlier post as a “request for further information about how the Table 2 ‘transience fraction’ values . . . were obtained from a Gerard Roe paper’s Fig. 6.” Nor was it inappropriate for me to characterize as turning down that request a reply post in which Lord Monckton merely repeated the paper’s contention that “The table was derived from a graph in Gerard Roe’s magisterial paper of 2009 on feedbacks and the climate” without explaining, as I requested, how that could possibly be true.
By hyperlinking the word “request” to it, I explicitly identified my previous post as the request. I similarly hyperlinked “turned down” to his reply post. No one who knows how to click on a hyperlink could have had any excuse for not knowing what those terms referred to.
Such is the forthright, above-board, completely transparent behavior that Lord Monckton has chosen to characterize as a lie. That he would thus resort to slander is an indication of how desperate he is to avoid a technical discussion, in which it would be apparent to anyone that the authors did not understand even their own “transience fraction” concept. His doing so is of a piece with the posts in which he claims to have “utterly dismantled” mine: it continues his practice of distortion, evasion, and misdirection.
Incidentally, “whining” is the term Lord Monckton appears to use to refer to rigorous arguments to which he has no creditable answer.
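The physical claim at issue – that in Roe’s framework a higher-feedback system’s response must exceed a lower-feedback system’s at every t > 0 – can be checked with a minimal one-box energy-balance sketch. This is my own illustration with made-up parameter values, not the Monckton et al. model or Roe’s actual figure:

```python
import numpy as np

LAMBDA0 = 0.3  # K per (W/m^2), no-feedback sensitivity (illustrative value)
C = 8.0        # effective heat capacity, W yr m^-2 K^-1 (illustrative value)
F = 3.7        # step forcing, W/m^2

def response(t_yr, f):
    """One-box warming at time t_yr for feedback fraction f (0 <= f < 1)."""
    eq = F * LAMBDA0 / (1.0 - f)   # equilibrium warming rises with f
    tau = C * LAMBDA0 / (1.0 - f)  # response time also rises with f
    return eq * (1.0 - np.exp(-t_yr / tau))

t = np.linspace(0.1, 100.0, 500)
low, high = response(t, 0.2), response(t, 0.6)
print(bool(np.all(high >= low)))  # higher feedback exceeds lower at every t
```

In this toy model the early-time responses coincide to first order (both start at roughly F·t/C), and the higher-feedback curve pulls ahead thereafter, which is the ordering Mr Born describes from Roe’s Fig. 6.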


January 14, 2016 8:28 am

Thanks, Christopher, Lord Monckton. This is (as always) a very good essay.
You keep beating CAGW’s dead horse, but politicians keep riding it.
The only solution to a political problem is a political solution.

diogenesnj
January 14, 2016 11:20 am

There is either a highly amusing typo, if that’s what it is, or a glorious pun in the description of your 3rd graph, when you refer to “terrestrial TAMPERature”. Surely that should be the new standard description of the land-based records, as they have indeed been extensively tampered with. 🙂
So, ‘fess up. Did you mean to say that, or was it accidental? If the former, chapeau!!!

Reply to  diogenesnj
January 14, 2016 5:20 pm

TAMPERature was deliberate, and I’ve used it before, but it bears using again.

January 14, 2016 11:22 am

Ah, I see someone else caught that first, a few hours ago. Sharp eyes, Jeff.

lorenz
January 14, 2016 2:00 pm

This is probably offtopic, but I want to know.
Disclaimer: I’m no climate scientist. Neither am I a meteorologist. I try to think scientifically.
When I read Heinlein’s ‘Farmer in the Sky’ in the eighties, his proposed heat trap sounded very much like the now often-heard greenhouse effect.
And it is very believable and also observable. With cloud cover at night, it stays much warmer than with clear skies. This seems to imply that there is some kind of greenhouse effect.
Anyone?
Thanks, Lorenz

Bartemis
Reply to  lorenz
January 14, 2016 4:17 pm

Of course there is a greenhouse effect. That is not really in question, or should not be. Given an atmosphere which absorbs radiation significantly in the band radiated by the surface, all things being equal, that atmosphere causes the surface to warm beyond what it otherwise would have been.
It is the “all things being equal” that is the rub. E.g., if increasing CO2 produces additional warmth, which then causes water to evaporate leading to clouds that shield the surface from incoming radiation, then the increase in CO2 may cause no discernible change in surface temperatures at all.
The Earth’s thermal regulatory system is very complex. There are many other potential feedbacks, such as the cloud reaction suggested above, which can ameliorate, or entirely cancel, any induced warming. So it is by no means guaranteed that an increasing concentration of a specific “greenhouse” gas will necessarily produce an increase in surface temperatures.

JohnKnight
Reply to  lorenz
January 14, 2016 8:44 pm

lorenz,
As I see this matter you ask about, it is entirely possible that CO2 in the atmosphere will have some effect on global average temperatures, but entirely impossible that human CO2 emissions would be a dominant effect. And unlikely to be more than a minor player in the vast exchange of energy between the Sun and deep space, via the Earth ; )
One detail (among many I feel) that perhaps might give you a quick grasp of the problem with attributing vast powers of change to CO2, is that it has been experimentally discovered/demonstrated that any such “interference” in the energy emissions from ground toward space, is limited and increases at an ever diminishing rate with additional CO2.
For a cloud-cover analogy, imagine a thin cloud layer obscuring the moon. Say, 50% obscured on average. Another layer obscures quite a bit more, another quarter of the view, and then a significant eighth . . but the next and next etc. fade into insignificance.
The CO2 gas itself has done its little warming thing for the most part, and we’re in fade-away territory in terms of further effect, as I understand the matter . . Only through supposed “positive feedbacks” is it even possible to go scary with this. The trace plant-food gas enraging the Furies and triggering a great flood, or whatever suits the current political narrative, is dependent on CO2’s effect getting amplified.
Mr. Monckton is essentially asking, as I hear him; to put it crudely ; )
*What if we don’t amplify that signal? What if it’s just “normal” physical matter, and causes a slight warming, and not Super Warming! . . The fart that ate the windstorm!!* ; )
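JohnKnight’s “ever diminishing rate” is the familiar logarithmic dependence of CO2 forcing on concentration. The sketch below uses the widely quoted simplified expression ΔF = 5.35 ln(C/C0) from Myhre et al. (1998) – my own illustration, not anything the comment itself cites – to show each successive 100 ppm adding less forcing than the previous 100 ppm:

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Simplified CO2 radiative forcing in W/m^2 (Myhre et al., 1998)."""
    return 5.35 * math.log(c_ppm / c0_ppm)

# Each successive 100 ppm adds less forcing than the previous 100 ppm
levels = [280, 380, 480, 580]
for lo, hi in zip(levels, levels[1:]):
    step = co2_forcing(hi) - co2_forcing(lo)
    print(f"{lo} -> {hi} ppm: +{step:.2f} W/m^2")
```

How much surface warming any of that forcing produces is the separate feedback question the thread is arguing about; the logarithmic shape itself is common ground.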

lorenz
Reply to  JohnKnight
January 15, 2016 10:22 am

Thanks for your explanations. I had just read the Wiki articles on the Moon and Mars. They both have an average surface temperature around -55°C. So that’s without an atmosphere. Venus, by comparison, has around 464°C, with an atmosphere of quite different composition, and is nearer to the sun.
So there IS a greenhouse effect, but the science is far from settled. (As if it ever could B-)

JohnKnight
Reply to  JohnKnight
January 15, 2016 1:49 pm

(You’re welcome of course, lorenz . . One thing about Venus that needs to be borne in mind when discussing the “greenhouse effect” is that the atmosphere there is over 90 times as dense as here on Earth!)

grumpyoldman22
January 14, 2016 3:29 pm

It matters not one jot what climate models – and the imaginary, creative temperature definitions fed into them – can be massaged to say. A large part of what Lord Chris and subsequent respondents postulate is founded on changes these temperatures may cause in various parts of our Earth.
Fixation on an incorrectly selected parameter at least causes vast additions of hot air and friction to a political ploy without any obvious progress toward its final solution or extinction.
Temperature is a spatial and temporal property of matter. It may be measured in a clearly defined, finite, element of matter in equilibrium with its neighbours and that measurement taken as the average for that element. Mass or energy transfer to or from that element over time will change its temperature property. These transfers are the basis for our weather systems. The integral of weather is climate.
It beggars my belief that an imaginary global average temperature was selected as the prime driver of this socio-eco-political debate when the most likely scientific driver for it is global average heat (energy) content. Ask a global warming believer to define average global temperature and watch as apoplexy sends the questioner to Google where it will not be found.
Lord Chris’ fixation on the history of IPCC and temperature driven effects (or non-effects) at least allows readers to see the futility of the debate so far.

Reply to  grumpyoldman22
January 14, 2016 5:18 pm

Whether we like it or not, the true-believers found their manufactured alarm on globally-averaged temperatures. Yet globally-averaged temperatures are not rising. They are worried, therefore, and rightly.

u.k(us)
Reply to  Monckton of Brenchley
January 14, 2016 5:48 pm

If the stock market continues its swoon, again, we’ll soon find our priorities.
It ain’t gonna be new methods to increase the price of electricity, which seems to be truly the last recession-proof game.

Bartemis
January 14, 2016 4:02 pm

“They show ZERO variance from standard gas law. They prove definitively there is no previously un-noted effect. “
The “standard gas law” can get you the dry adiabatic lapse rate – the rate of change of temperature with altitude. Given a temperature at some altitude, you can figure out the temperature at all other altitudes. But from where do you get that reference temperature?
It’s like saying the speedometer on your car gives you your velocity, so why would you need a GPS unit to tell you where you are? I traveled in a straight line from point A at uniform velocity V for time T, so I am now at point B = A + V*T. Fine, but where actually is B? I cannot know until you tell me where point A is.
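Bartemis’s speedometer analogy can be made concrete with a short sketch of my own (standard textbook constants, illustrative altitudes): the dry adiabatic lapse rate Γ = g/cp fixes only the *shape* of the temperature profile, while the absolute temperature at any altitude remains undetermined until a reference temperature is supplied.

```python
G = 9.81     # gravitational acceleration, m/s^2
CP = 1005.0  # specific heat of dry air, J/(kg K)
GAMMA = G / CP  # dry adiabatic lapse rate, ~0.0098 K/m

def temp_at(z_m, t_ref_k, z_ref_m=0.0):
    """Temperature at altitude z_m from the dry adiabatic lapse rate.

    The lapse rate fixes only the slope; without t_ref_k the absolute
    temperature is undetermined (Bartemis's point B = A + V*T).
    """
    return t_ref_k - GAMMA * (z_m - z_ref_m)

# Same profile shape, two different assumed reference (surface) temperatures:
print(temp_at(5000.0, 288.0))  # ~239 K
print(temp_at(5000.0, 298.0))  # ~249 K
```

Shifting the reference temperature by 10 K shifts the entire profile by 10 K – which is exactly why the gas law alone cannot tell you the surface temperature.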

Harry Twinotter
January 14, 2016 5:47 pm

A cherry-pick of a cherry pick.
You are starting off with the result you want, then picking the period on that basis.

Reply to  Harry Twinotter
January 14, 2016 6:50 pm

HT, Lord Monckton always posts a wealth of detailed evidence. You only post your baseless opinion. That’s really lame.
You make assertions, and expect readers to accept them? Why? You have no credibility; those are just your opinions. Without any evidence, facts, or data, your assertions mean nothing.
You can’t understand that, can you?

u.k(us)
Reply to  dbstealey
January 14, 2016 7:03 pm

Never underestimate the deviousness of your opponent, they might just be trying to SMOKE you out.

Harry Twinotter
Reply to  dbstealey
January 14, 2016 9:58 pm

dbstealey.
You do know what a “cherry pick” is, don’t you?
Now, if you were to go about this like a true skeptic, you would counter my argument by demonstrating why it isn’t a cherry pick.
This is a good video to watch for background.

Dr. S. Jeevananda Reddy
Reply to  dbstealey
January 15, 2016 12:15 am

Harry Twinotter — Though I did not understand the issue from the video, I would like to share my experience with satellite data.
Indian Meteorological Society [IMS] organizes a national symposium “TROPMET” since 1992. In 1995, the IVth in the series was conducted at National Remote Sensing Agency [NRSA], Hyderabad during February 8-11, 1995. The symposium proceedings were published at:
R. K. Gupta & S. Jeevananda Reddy, [Edts.] 1999. “Advanced Technologies in Meteorology”, Tata McGraw-Hill Publ. Comp. Limited, New Delhi, 549p.
In this two of my articles are also included, with titles “Advanced Technology in Disaster [Due to Agricultural Drought] Mitigation: An African Experience” [9-16 pp] and “Problems and Prospects of the Application of Meteorological Data in Agriculture: Models and Interpretation of Results” [419-426pp].
Out of nine sessions, three were allocated to space measurements and applications. Several scientists from several institutions presented papers using such data. During the presentation sessions one scientist, who was associated with the institution responsible for decoding the data from the space instruments, observed that the data were not correct. I then questioned him: if the data are not correct, why did you release them for research by other institutions? When you detected the errors, you should have informed the groups to whom you supplied the data. But he did not respond. Later, I came to know that this is a game of politics.
In the same way, an international agency put the satellite temperature data on the internet and later withdrew it, as this data showed far less than the ground-based data. In fact, I used this data in my book published in 2008 [available on the net].
Ground-based average temperature is derived from the maximum and minimum, on the assumption that temperature follows a sine curve [with day and night of equal length]. In reality it is not so. If you take the hourly data from the thermograph, the average may or may not be equal, depending on the skewness from the sine curve, which varies with cloud conditions and with changes in the duration of day and night.
Dr. S. Jeevananda Reddy
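Dr Reddy’s point is easy to demonstrate numerically. The diurnal cycle below is a hypothetical asymmetric shape of my own invention (fast morning warming, slow cooling otherwise), purely for illustration; it shows the (Tmax + Tmin)/2 convention differing from the true hourly mean by more than a degree.

```python
import numpy as np

hours = np.arange(24)
# Hypothetical asymmetric diurnal cycle: cool slowly to 06:00,
# warm quickly to a 14:00 peak, then cool slowly to midnight
temp = np.interp(hours, [0, 6, 14, 24], [12.0, 10.0, 26.0, 12.0])

true_mean = temp.mean()                      # mean of the hourly readings
minmax_mean = (temp.max() + temp.min()) / 2  # the (Tmax + Tmin)/2 convention

print(f"hourly mean: {true_mean:.2f} C  (max+min)/2: {minmax_mean:.2f} C")
```

The more skewed the cycle – by cloud, season, or day length, as Dr Reddy notes – the larger the discrepancy between the two “averages”.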

Reply to  dbstealey
January 15, 2016 4:01 pm

“In ground based average temperature is derived from maximum and minimum, thinking it followed a sine curve [with day and night length are the same. In reality it is not so. If you cut the hourly data from the thermograph, the average may or may not be equal based on the skewness from the sine curve that vary with the cloud conditions and changes in the duration of day & night.”
I am very happy to see you make this point, Dr. Reddy.
This very obvious detail has bothered me for a very long time, and we rarely see it mentioned.
But I think it is only one of many problems with the surface data, and using such to derive a so-called global average temperature.

Reply to  Harry Twinotter
January 15, 2016 4:05 am

The furtively pseudonymous “Harry Twitotter” appears, as usual, to be unable to read. The graphs presented in the head posting give results for all five principal global-temperature datasets (no cherry-pick there, then), and for three separate time-periods whose start-points are, respectively, the years of publication of the First, Second and Third ASSessment Reports of the IPCC, whose predictions the head posting tested against the measurements of the five global-temperature datasets. No cherry-pick there, then, either.
The fact is that the world is not warming anything like as fast as predicted. There is no climate crisis. And that’s that.

Dennis Horne
January 14, 2016 7:03 pm

So, increasing CO2 in the atmosphere 40% from 280 to 400ppm since industrialisation doesn’t increase the energy retained by Earth, causing climate change.
I’ll be enormously reassured when nearly every climate scientist and informed scientist and scientific society on the planet explains why.

Reply to  Dennis Horne
January 14, 2016 7:44 pm

Dennis Horne,
Explain what dark matter is. What mediates it? After all, it comprises most of the universe.
What, you can’t explain it? That’s OK. You don’t have to explain something to know it has an effect or not. You just have to observe.
You commented on the observation that CO2 has increased by 40%. But global warming stopped many years ago.
What do you make of that? Looks like those climate scientists and scientific societies were wrong, doesn’t it?
So, who are you gonna believe? Those scientists? Or your lyin’ eyes?

Dennis Horne
Reply to  dbstealey
January 14, 2016 9:36 pm

Good gracious, a dark-matter man! I would expect added greenhouse gas to add energy. Indeed, if it weren’t found I would assume difficulties with detection and measurement. Anyway, the results are in.
http://berkeleyearth.org/wp-content/uploads/2016/01/Annual_time_series_combined1.png
http://ds.data.jma.go.jp/tcc/tcc/products/gwp/temp/ann_wld.html
Who to believe? Gosh, I wish all questions were so easy to answer!

Harry Twinotter
Reply to  dbstealey
January 14, 2016 10:03 pm

Interesting result from Berkeley Earth.
For 2015, they are saying it is a clear record this time. No “statistical ties” and stuff like that.
Oh well I will wait and see what spin is put on it. The usual tactic is to just deny the data, followed up by some sort of Conspiracy Theory.

Reply to  Harry Twinotter
January 15, 2016 12:15 am

Ah, now any evidence contrary to HT’s assertions is a “conspiracy theory”.
Got it.
Satellite data shows that 2015 is nothing to write home about. But spin that any way you like…

Harry Twinotter
Reply to  dbstealey
January 15, 2016 2:37 am

dbstealey.
So you are unwilling to discuss the video. Or the results from Berkeley Earth.
Got it 🙂

Reply to  Harry Twinotter
January 15, 2016 9:14 am

otterboi, I don’t watch your videos. Waste of time. As for Berkeley Earth: [image]
Wake me if you get it…

Dennis Horne
Reply to  dbstealey
January 15, 2016 5:16 am

“You commented on the observation that CO2 has increased by 40%. But global warming stopped many years ago.”
Then this ‘allegation’ must be of interest to you. CO2 increase and global surface temperature increase since 1958 are correlated at the >99% confidence level:
http://www.woodfortrees.org/plot/gistemp/from:1958.16/mean:12/normalise/plot/esrl-co2/mean:12/normalise

Reply to  dbstealey
January 15, 2016 9:44 am

Dennis Horne says:
I would expect added greenhouse gas to add energy.
And I would think you need to get educated on the difference between insulation and ‘adding energy’. CO2 does not “add energy”.
Next, your graph is simply an overlay of CO2 and temperature. It doesn’t show “correlation” like you think it does.
As a noobie here, you probably don’t know that changes in temperature cause subsequent changes in CO2. So I suggest reading the WUWT archives for a few weeks, at least. Use the keyword “CO2”. You will find lots of information on causation.
Then come back and we can discuss the issue as educated folks.

Bartemis
Reply to  dbstealey
January 15, 2016 10:14 am

Dennis Horne –
“I would expect added greenhouse gas to add energy.”
No, at best, you would expect them to trap more energy. And, perhaps you would expect a heavier object to fall faster than a lighter object. Or, that light beamed from an object moving toward you would travel faster than from an object moving away from you. Expectations come in all shapes and sizes. Not all of them pan out. That is why the scientific method demands that you confirm your expectations experimentally.
There are several avenues for feedback responses which could render any warming tendency of additional CO2 null and void. There are convective flows of energy past the filter of IR radiative gases in the lower atmosphere. There are cloud responses, vegetative responses, and others.
The Land-Ocean plot you show is doubtless fudged towards the end. Sure, you can claim it isn’t, but we’ve seen too much of that activity to have any confidence in that claim. Moreover, the warming trend from 1910-1940 is every bit as rapid as the trend from 1970-2000, yet the rate of CO2 emissions was not anywhere near the same in the two periods. The growth was not exponential – it was clearly delineated into two separate regimes in the two periods – so you cannot argue it is just a logarithmic response.
So, not only is it not proven that CO2 has been a significant driver of temperatures, it is not at all proven even that it can be one. Believing that it has and that it is, is a statement of faith, not of science.
“CO2 increase and global surface temperature increase since 1958 are correlated at the >99% confidence level:”
Nonsense. You can make any two trending variables with minimal curvature of the same sign look similar to one another by scaling and offsetting – just do a linear regression of the one with respect to the other. What is difficult is getting correlation between all the peaks and valleys, as here. That plot shows that CO2 concentration is the effect, and temperature is the cause.
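Bartemis’s first point – that shared trend alone produces high correlation – is easy to demonstrate with a toy example of my own (made-up slopes and noise, nothing from the thread): two series that share nothing except an upward drift still come out strongly correlated.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 600  # e.g. 50 years of monthly values

# Two *independent* series: a linear trend plus its own random walk
a = 0.02 * np.arange(n) + np.cumsum(rng.normal(0.0, 0.05, n))
b = 0.03 * np.arange(n) + np.cumsum(rng.normal(0.0, 0.05, n))

r = np.corrcoef(a, b)[0, 1]
print(f"correlation of two unrelated trending series: r = {r:.2f}")
```

This is why a raw correlation between two upward-trending series (such as CO2 and temperature) is weak evidence of causation by itself; matching the detrended peaks and valleys is the harder test Bartemis refers to.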

co2islife
Reply to  Dennis Horne
January 16, 2016 6:14 am

So, increasing CO2 in the atmosphere 40% from 280 to 400ppm since industrialisation doesn’t increase the energy retained by Earth, causing climate change.

MODTRAN calculations quantify the increase in energy being trapped, and when you add H2O to the mix, CO2 effectively becomes irrelevant. Do the calculations yourself. BTW, 600 million years and no CO2-driven catastrophic warming. I have 600 million years of evidence; you have the opinion of “experts.”
http://www.sustainableoregon.com/_wp_generated/wpcdf36281.png