Statistical proof of 'the pause' – Overestimated global warming over the past 20 years

Commentary from Nature Climate Change, by John C. Fyfe, Nathan P. Gillett, & Francis W. Zwiers

Recent observed global warming is significantly less than that simulated by climate models. This difference might be explained by some combination of errors in external forcing, model response and internal climate variability.

Global mean surface temperature over the past 20 years (1993–2012) rose at a rate of 0.14 ± 0.06 °C per decade (95% confidence interval) [1]. This rate of warming is significantly slower than that simulated by the climate models participating in Phase 5 of the Coupled Model Intercomparison Project (CMIP5). To illustrate this, we considered trends in global mean surface temperature computed from 117 simulations of the climate by 37 CMIP5 models (see Supplementary Information).

These models generally simulate natural variability — including that associated with the El Niño–Southern Oscillation and explosive volcanic eruptions — as well as estimate the combined response of climate to changes in greenhouse gas concentrations, aerosol abundance (of sulphate, black carbon and organic carbon, for example), ozone concentrations (tropospheric and stratospheric), land use (for example, deforestation) and solar variability. By averaging simulated temperatures only at locations where corresponding observations exist, we find an average simulated rise in global mean surface temperature of 0.30 ± 0.02 °C per decade (using 95% confidence intervals on the model average). The observed rate of warming given above is less than half of this simulated rate, and only a few simulations provide warming trends within the range of observational uncertainty (Fig. 1a).
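
A minimal sketch of how such a decadal trend and its 95% confidence interval can be computed, assuming annual global-mean anomalies as input; the paper's actual method also handles coverage masking and the 100-member HadCRUT4 observational ensemble (see Supplementary Information):

```python
# Minimal sketch (not the paper's code): least-squares trend with a 95%
# confidence interval, reported in deg C per decade.
import numpy as np
from scipy import stats

def decadal_trend(years, temps):
    """Return (trend, half_width) in deg C per decade (95% CI on the slope)."""
    years = np.asarray(years, dtype=float)
    temps = np.asarray(temps, dtype=float)
    res = stats.linregress(years, temps)
    # two-sided 95% interval on the slope, n - 2 degrees of freedom
    half = stats.t.ppf(0.975, len(years) - 2) * res.stderr
    return 10.0 * res.slope, 10.0 * half  # per-year slope -> per decade

# Example on synthetic data shaped like 1993-2012:
rng = np.random.default_rng(0)
years = np.arange(1993, 2013)
temps = 0.014 * (years - 1993) + rng.normal(0.0, 0.08, years.size)
print("%.2f +/- %.2f C per decade" % decadal_trend(years, temps))
```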


Figure 1 | Trends in global mean surface temperature. a, 1993–2012. b, 1998–2012. Histograms of observed trends (red hatching) are from 100 reconstructions of the HadCRUT4 dataset [1]. Histograms of model trends (grey bars) are based on 117 simulations of the models, and black curves are smoothed versions of the model trends. The ranges of observed trends reflect observational uncertainty, whereas the ranges of model trends reflect forcing uncertainty, as well as differences in individual model responses to external forcings and uncertainty arising from internal climate variability.

The inconsistency between observed and simulated global warming is even more striking for temperature trends computed over the past fifteen years (1998–2012). For this period, the observed trend of 0.05 ± 0.08 °C per decade is more than four times smaller than the average simulated trend of 0.21 ± 0.03 °C per decade (Fig. 1b). It is worth noting that the observed trend over this period — not significantly different from zero — suggests a temporary ‘hiatus’ in global warming. The divergence between observed and CMIP5-simulated global warming begins in the early 1990s, as can be seen when comparing observed and simulated running trends from 1970–2012 (Fig. 2a and 2b for 20-year and 15-year running trends, respectively). The evidence, therefore, indicates that the current generation of climate models (when run as a group, with the CMIP5 prescribed forcings) do not reproduce the observed global warming over the past 20 years, or the slowdown in global warming over the past fifteen years.
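
A sketch of how the running trends behind Fig. 2 can be computed (illustrative, not the authors' code):

```python
# Trend of every `window`-year segment, labelled by the segment's final year.
import numpy as np

def running_trends(years, temps, window):
    years = np.asarray(years, dtype=float)
    temps = np.asarray(temps, dtype=float)
    ends, trends = [], []
    for i in range(len(years) - window + 1):
        seg_y, seg_t = years[i:i + window], temps[i:i + window]
        slope = np.polyfit(seg_y, seg_t, 1)[0]  # deg C per year
        ends.append(seg_y[-1])
        trends.append(10.0 * slope)             # deg C per decade
    return np.array(ends), np.array(trends)

# e.g. 15-year running trends ending 1984..2012 from annual data 1970-2012:
# ends15, tr15 = running_trends(years, temps, 15)
```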

This interpretation is supported by statistical tests of the null hypothesis that the observed and model mean trends are equal, assuming that either: (1) the models are exchangeable with each other (that is, the ‘truth plus error’ view); or (2) the models are exchangeable with each other and with the observations (see Supplementary Information).
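
A minimal illustration of test (2), assuming the observed trend can be treated as one more draw from the model-trend distribution; the paper's actual tests, described in the Supplementary Information, also account for observational uncertainty:

```python
# Illustrative rank test: how extreme does the observed trend sit in the
# distribution of simulated trends, if it were just one more draw?
import numpy as np

def exchangeability_pvalue(obs_trend, model_trends):
    """Two-sided p-value under the exchangeability null."""
    m = np.asarray(model_trends, dtype=float)
    n_cooler = np.sum(m < obs_trend)  # model runs cooler than observed
    p_one = (min(n_cooler, m.size - n_cooler) + 1) / (m.size + 1)
    return min(1.0, 2.0 * p_one)

# e.g. with the 117 simulated 1998-2012 trends and the observed 0.05 C/decade:
# p = exchangeability_pvalue(0.05, cmip5_trends_1998_2012)
```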

Brief: http://www.pacificclimate.org/sites/default/files/publications/pcic_science_brief_FGZ.pdf

Paper at NCC: http://www.nature.com/nclimate/journal/v3/n9/full/nclimate1972.html?WT.ec_id=NCLIMATE-201309

Supplementary Information (241 KB) CMIP5 Models
September 5, 2013 7:28 am

The initial wildly exaggerated claims of climate disaster were deliberate, so that draconian restrictions on human freedom could be quickly imposed. Had that been successful, the inevitable pause could then have been claimed as a success of the freedom-killing regimen imposed on the people, which in turn would have been used to justify making it permanent. Fortunately they failed in their efforts and exposed the big lie of AGW and ACC.

richardscourtney
September 5, 2013 7:32 am

Rich:
Thank you for your reply to me at September 5, 2013 at 6:34 am, which says in full:

richardscourtney: Thank you for trying to make that clear. Can I summarize it as, “There’s more noise in the system than we assumed”? If so, aren’t we just back with Lorenz’s discovery that chaotic systems produce output that looks like noise? If that’s the case then it’s the noise that has to be modelled not condensed into “error bars”. (I do know it’s not you I’m arguing with. Thanks for your efforts to explain the climate modellers’ thinking).

As to your first question; viz.
“Can I summarize it as, “There’s more noise in the system than we assumed”?”
I answer, Yes.
But your second question is a bit more tricky. It asks,
“If so, aren’t we just back with Lorenz’s discovery that chaotic systems produce output that looks like noise?”
The answer is, possibly.
Please note that I am not avoiding your question. A full answer would contain so many “ifs” and “buts” that it would require a book. However, I addressed part of the answer in my post which supported Gail Combs and is at September 5, 2013 at 4:04 am
http://wattsupwiththat.com/2013/09/05/statistical-proof-of-the-pause-overestimated-global-warming-over-the-past-20-years/#comment-1408455
Indeed, that linked post leads to the entire issue of what is – and what is not – noise.
(Incidentally, I point out that when Gail Combs gets involved in a thread it is useful to read her posts although they are often long: she usually goes to the real crux of an issue.)
I fully recognise that this answer is inadequate and trivial, but I think it is the best I can do here. Sorry.
Richard

Bob L.
September 5, 2013 7:37 am

kadaka: I’m thoroughly enjoying your posts. If the IPCC brought 1/10th of the scientific rigor and honesty to climate issues that you invest in spur-of-the-moment experiments in your kitchen, there would be no AGW movement. Cheers!

richard verney
September 5, 2013 7:38 am

Geoff Sherrington says:
September 5, 2013 at 5:16 am
“..Note, however, that there is no compelling argument that temperatures taken from a Stevenson screen 2.5 m above the surface of the Earth should be the same as (not offset from) those from a satellite measuring microwaves from a thickness of oxygen some distance above the Earth”
///////////////////////
One would not expect the temperature measurement (i.e., the absolute temperature) to be the same since, as you state, they are measuring temperatures at different locations. However, one would expect the trends of their respective temperature anomalies to be the same. If not, where is the temperature increase that has been observed 2.5 m above the ground going, if not upwards to where the satellite is making measurements?

Gail Combs
September 5, 2013 7:38 am

Ric Werme says: September 5, 2013 at 5:32 am
….Of course, there’s the claim that visible light doesn’t heat objects, only infrared does that, probably the most blatantly idiotic claim.
>>>>>>>>>>>>>>>>>>>
That claim is quickly refuted by touching a white versus a black surface out in the southern sun, at about the same time as you get treatment for the burns.

richard verney
September 5, 2013 7:43 am

Steven Hill from Ky (the welfare state) says:
September 5, 2013 at 6:21 am
Man is nothing more than an ant in a tiny corner of the universe….that’s it, nothing more, nothing less.
//////////////////////////////////
And ants and termites emit more CO2 than man!
Dangerous things, ants.

Gail Combs
September 5, 2013 7:47 am

Gene Selkov says: September 5, 2013 at 6:36 am
….The real damages were caused by Gore and nearly half the population of the planet. Can we sue them all?
>>>>>>>>>>>>>>>>>>>>>>>
Depends on whether or not you can equate it to someone yelling FIRE in a crowded theater. It’s called Reckless Endangerment and is illegal in all (US) states. You have to prove it was done intentionally, in the knowledge that there was no such danger.

Gene Selkov
Reply to  Gail Combs
September 5, 2013 8:34 am

Gail: Thank you for reminding me of Reckless Endangerment. I hoped something like that would apply. Now I recall there were efforts made at one time to trap the persons triggering fire alarms:
http://blog.modernmechanix.com/fire-box-traps-pranksters/

TomRude
September 5, 2013 7:49 am

Got to love it: “This difference might be explained by some combination of errors in external forcing, model response and internal climate variability.”
Yeah Gillett and Co… simply put, your pal AGW science is hardly settled.

richard verney
September 5, 2013 7:50 am

All of those discussing warming the oceans by heat from above are overlooking that the air above the open oceans is at about the same temperature as the ocean below.
It is rare for there to be as much as 1 °C difference (usually far less), so it is nothing like a hot hair dryer or a hot IR lamp over a bowl or bucket of water.

Steven Mosher
September 5, 2013 7:53 am

Gösta Oscarsson says:
September 5, 2013 at 12:29 am
There are a few “model trends” which correctly describe “observed trends”. Wouldn’t it be interesting to analyse in what way they differ from the rest?
####################
Yes, that’s what some of us are doing. Contrary to popular belief, “the models” are not falsified.
The vast, vast majority overestimate the warming and need correction. The question is: are those that match observations any better when you look at additional metrics and additional time periods? Or can you learn something from those that do match observations to correct those that don’t?
If you are interested in looking at model ‘failures’ with a mind toward improving our understanding, then this is what you do. If you are interested in preserving the IPCC storyline, then you ignore the failures; and if you are just interested in opposing the IPCC storyline, then you just ignore the fact that some do better and you argue that the whole lot are bad.
So in between the triumphalism of “the models are falsified” and the blind allegiance to the IPCC storyline, there is work to do.

BrianR
September 5, 2013 7:53 am

How could the error range for the modeled data be a third of that for the observational data? That just seems counterintuitive to me.
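
One arithmetic reading, offered as an editorial illustration rather than the authors' explanation: the ±0.02 °C per decade above is a 95% interval on the model average, and the standard error of an average of 117 runs shrinks like 1/√N, whereas the observed ±0.06 °C per decade applies to a single realization of the climate.

```python
# Toy check, with an assumed (not published) spread of individual run trends.
import numpy as np

sd_runs = 0.11                    # assumed std dev of single-run trends, C/decade
se_mean = sd_runs / np.sqrt(117)  # standard error of the 117-run average
print(round(1.96 * se_mean, 2))   # -> 0.02, the half-width quoted above
```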

Ian L. McQueen
September 5, 2013 7:58 am

david eisenstadt wrote about the incorrect phrase “is more than four times smaller than…..” David, you stole my thunder.
I see this kind of error frequently, and was prepared to comment here. I wrote to Scientific American some time ago about their (mis)use of the phrase and then saw it repeated several months later, so they obviously did not pay attention.
As you point out, if anything becomes one time smaller, it disappears.
IanM

milodonharlani
September 5, 2013 8:01 am

Gene Selkov says:
September 5, 2013 at 6:36 am
Steven Hill from Ky (the welfare state) says:
Re suing Gore:

Theo Goodwin
September 5, 2013 8:01 am

The posts above show that people who post at WUWT have achieved a degree of clarity about the differences between the views of modelers and skeptics that does not exist elsewhere. Richard S Courtney deserves a large portion of the credit for this. I want to emphasize just a point or two and I am confident that Richard will correct my errors.
1. What modelers mean by “internal variability” has nothing to do with what everyone else understands as natural variability. Take ENSO as an example. For modelers, ENSO is not a natural regularity that exists in the world apart from their models; at least, it is not worthy of scientific (empirical) investigation as a natural regularity in the world. Rather, it is a range of temperatures that sometimes runs higher and sometimes runs lower and is treated as noise. Modelers assume that these temperatures will sum to zero over long periods of time. They have no interest in attempting to predict the range of temperatures or lengths of periods. In effect, ENSO is noise for modelers. Given these assumptions, it is clear that the natural regularity cannot serve in any fashion as a bound on models. That is, a natural regularity in the real world cannot serve as a bound on models.
2. Obviously, the way that modelers think about ENSO is the way that they think about anything that a skeptic might recognize as a natural regularity that is worthy of scientific investigation in its own right and that serves as a bound on models. Modelers think of clouds the same way that they think of ENSO. They admit that the models do not handle clouds well and maybe not at all. But this admission does not really matter to them. If they could model clouds well they would treat them as noise; that is, they would assume that cloud behavior averages to zero over longer periods of time and amounts to noise. Consequently, no modeler has professional motivation to create a model that ingeniously captures cloud behavior. (Clouds are an especially touchy topic for them because changes in albedo directly limit incoming radiation. However, if you are assuming that it all sums to zero then there is no problem.)
3. Modelers care only for “the signal.” The signal, in practical terms for modelers, is the amount of change in global average temperature that can be assigned to CO2. Theoretically, the signal should include all GHGs but modelers focus on CO2. So, what are modelers trying to accomplish? They are trying to show that some part of global temperature change can be attributed to CO2. Is that science?
4. Modelers’ greatest nightmare is a lack of increase in global average temperature. If there is no increase then there is no signal of CO2 forcing. If there is no signal for a lengthy period then that fact counts, even for modelers, as evidence that their models are wrong. The length of that period cannot be calculated. Why?
5. The length of period cannot be calculated because models embody only “internal variability” and not natural variability. Recall that internal variability is noise. If all representations of natural regularities, such as ENSO, must sum to zero over long periods of time then models cannot provide an account of changes to temperature that are caused by natural variability. In other words, modelers assume that there is not some independently existing world that can bound their models.
6. The only hope for modelers is to drop their assumption that ENSO and similar natural regularities are noise. Modelers must treat ENSO as a natural phenomenon that is worthy of empirical investigation in its own right and do the same for all other natural regularities. They must require that their models are bounded by natural regularities. Modelers must drop the assumption that the temperature numbers generated by ENSO must sum to zero over a long period of time. Once they can model all or most natural regularities then they will have a background of climate change against which a signal for an external forcing such as CO2 will have meaning.

September 5, 2013 8:09 am

“Anthony and his team of volunteers found problems with the US system. Since these two systems would be considered ‘Top of the Line’ the rest of the surface station data can only be a lot worse.”
Actually, there is little evidence that the US system is “top of the line”.
In terms of long-term consistency the US system is plagued by several changes that almost no other country has gone through, the most notable being the TOBS change.
There are only a couple of other countries that have had to make TOBS adjustments, and in no case is the adjustment in other countries as pervasive as it is in the US.
On the evidence one could argue that while the US has a very dense network of stations, the homogeneity of that network and the adjustments required put it more toward the bottom of the station pile than the top of the line.
Of course that can also be answered objectively by looking at the number of break points that US stations generate as opposed to the rest of the world.
I’ll leave it at this: there is no evidence that the US system is top of the line. There is more evidence that it has problems that other networks don’t have; for example, you have to TOBS-adjust the data. And finally, there is an objective way of telling how “top of the line” a network is. I suppose when I get some time I could take a look at that. But for now I think folks would be wise to suspend judgement (it’s not settled science) about the quality of the US network as opposed to others. Could be; could not be.
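
As a sketch of what that objective check could look like (an illustration, not Mosher's method; operational homogenization algorithms such as pairwise comparison are far more elaborate): scan each station's difference-from-neighbours series for its largest mean shift, then compare break counts across networks.

```python
# Naive single-break scan on a difference series (station minus a
# neighbour composite); illustrative only.
import numpy as np

def max_mean_shift(diff_series, min_seg=24):
    """Return (index, shift): the split giving the largest jump in mean."""
    x = np.asarray(diff_series, dtype=float)
    best_i, best_shift = None, 0.0
    for i in range(min_seg, len(x) - min_seg):
        shift = abs(x[i:].mean() - x[:i].mean())
        if shift > best_shift:
            best_i, best_shift = i, shift
    return best_i, best_shift

# A network-level score could then be breaks-per-station-per-decade,
# putting the US network and others on the same footing.
```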

Gunga Din
September 5, 2013 8:09 am

Speaking of climate models, I made this comment some time ago.
http://wattsupwiththat.com/2012/05/12/tisdale-an-unsent-memo-to-james-hansen/#comment-985181

Gunga Din says:
May 14, 2012 at 1:21 pm

joeldshore says:
May 13, 2012 at 6:10 pm
Gunga Din: The point is that there is a very specific reason involving the type of mathematical problem it is as to why weather forecasts diverge from reality. And, the same does not apply to predicting the future climate in response to changes in forcings. It does not mean such predictions are easy or not without significant uncertainties, but the uncertainties are of a different and less severe type than you face in the weather case.
As for me, I would rather hedge my bets on the idea that most of the scientists are right than make a bet that most of the scientists are wrong and a very few scientists plus lots of the ideologues at Heartland and other think-tanks are right…But, then, that is because I trust the scientific process more than I trust right-wing ideological extremism to provide the best scientific information.

=========================================================
What will the price of tea in China be each year for the next 100 years? If Chinese farmers plant less tea, will the replacement crop use more or less CO2? What values would represent those variables? Does salt water sequester or release more or less CO2 than freshwater? If the icecaps melt and increase the volume of saltwater, what effect will that have year by year on CO2? If nations build more dams for drinking water and hydropower, how will that impact CO2? What about the loss of dry land? What values do you give to those variables? If a tree falls in the woods allowing more growth on the forest floor, do the ground plants have a greater or lesser impact on CO2? How many trees will fall in the next 100 years? Values, please. Will the UK continue to pour milk down the drain? How much milk do other countries pour down the drain? What if they pour it on the ground instead? Does it make a difference if we’re talking cow milk or goat milk? Does putting scraps of cheese down the garbage disposal have a greater or lesser impact than putting it in the trash or composting it? Will Iran try to nuke Israel? Pakistan, India? India, Pakistan? North Korea, South Korea? In the next 100 years what other nations might obtain nukes and launch? Your formula will need values. How many volcanoes will erupt? How large will those eruptions be? How many new ones will develop and erupt? Undersea vents? What effect will they all have year by year? We need numbers for all these things. Will the predicted “extreme weather” events kill many people? What impact will the erasure of those carbon footprints have year by year? Of course there’s this little thing called the Sun and its variability. Year by year numbers, please. If a butterfly flaps its wings in China, will forcings cause a tornado in Kansas? Of course, the formula all these numbers are plugged into will have to accurately reflect each one’s impact on all of the other values and numbers mentioned so far, plus lots, lots more. That amounts to lots and lots and lots of circular references. (And of course the single most important question: will Gilligan get off the island before the next Super Moon? Sorry. 😎)
There have been many short range and long range climate predictions made over the years. Some of them are 10, 20 and 30 years down range now from when the trigger was pulled. How many have been on target? How many are way off target?
Bet your own money on them if you want, not mine or my kids’ or their kids’ or their kids’ etc.

richardscourtney
September 5, 2013 8:09 am

Steven Mosher:
At September 5, 2013 at 7:53 am
http://wattsupwiththat.com/2013/09/05/statistical-proof-of-the-pause-overestimated-global-warming-over-the-past-20-years/#comment-1408576
you assert

Contrary to popular belief “the models” are not falsified.

Oh, dear! NO!
It seems I need to post the following yet again on WUWT.
None of the models – not one of them – could match the change in mean global temperature over the past century without utilising a unique value of assumed cooling from aerosols. So, inputting actual values of the cooling effect (such as the determination by Penner et al.,
http://www.pnas.org/content/early/2011/07/25/1018526108.full.pdf?with-ds=yes )
would make every climate model provide a mismatch between the global warming it hindcasts and the observed global warming for the twentieth century.
This mismatch would occur because all the global climate models and energy balance models are known to provide indications which are based on
1. the assumed degree of forcings resulting from human activity that produce warming, and
2. the assumed degree of anthropogenic aerosol cooling input to each model as a ‘fiddle factor’ to obtain agreement between past average global temperature and the model’s indications of average global temperature.
More than a decade ago I published a peer-reviewed paper that showed the UK’s Hadley Centre general circulation model (GCM) could not model climate and only obtained agreement between past average global temperature and the model’s indications of average global temperature by forcing the agreement with an input of assumed anthropogenic aerosol cooling.
The input of assumed anthropogenic aerosol cooling is needed because the model ‘ran hot’; i.e. it showed an amount and a rate of global warming which was greater than was observed over the twentieth century. This failure of the model was compensated by the input of assumed anthropogenic aerosol cooling.
And my paper demonstrated that the assumption of aerosol effects being responsible for the model’s failure was incorrect.
(ref. Courtney RS, An assessment of validation experiments conducted on computer models of global climate using the general circulation model of the UK’s Hadley Centre. Energy & Environment, Volume 10, Number 5, pp. 491-502, September 1999).
More recently, in 2007, Kiehl published a paper that assessed 9 GCMs and two energy balance models.
(ref. Kiehl JT, Twentieth century climate model response and climate sensitivity. GRL, vol. 34, L22710, doi:10.1029/2007GL031383, 2007).
Kiehl found the same as my paper except that each model he assessed used a different aerosol ‘fix’ from every other model. This is because they all ‘run hot’ but they each ‘run hot’ to a different degree.
He says in his paper:

One curious aspect of this result is that it is also well known [Houghton et al., 2001] that the same models that agree in simulating the anomaly in surface air temperature differ significantly in their predicted climate sensitivity. The cited range in climate sensitivity from a wide collection of models is usually 1.5 to 4.5 deg C for a doubling of CO2, where most global climate models used for climate change studies vary by at least a factor of two in equilibrium sensitivity.
The question is:
if climate models differ by a factor of 2 to 3 in their climate sensitivity, how can they all simulate the global temperature record with a reasonable degree of accuracy.
Kerr [2007] and S. E. Schwartz et al. (Quantifying climate change–too rosy a picture?, available at
http://www.nature.com/reports/climatechange, 2007)
recently pointed out the importance of understanding the answer to this question. Indeed, Kerr [2007] referred to the present work and the current paper provides the ‘‘widely circulated analysis’’ referred to by Kerr [2007]. This report investigates the most probable explanation for such an agreement. It uses published results from a wide variety of model simulations to understand this apparent paradox between model climate responses for the 20th century, but diverse climate model sensitivity.


And, importantly, Kiehl’s paper says:

These results explain to a large degree why models with such diverse climate sensitivities can all simulate the global anomaly in surface temperature. The magnitude of applied anthropogenic total forcing compensates for the model sensitivity.

And the “magnitude of applied anthropogenic total forcing” is fixed in each model by the input value of aerosol forcing.
Thanks to Bill Illis, Kiehl’s Figure 2 can be seen at
http://img36.imageshack.us/img36/8167/kiehl2007figure2.png
Please note that the Figure is for 9 GCMs and 2 energy balance models, and its title is:

Figure 2. Total anthropogenic forcing (W m⁻²) versus aerosol forcing (W m⁻²) from nine fully coupled climate models and two energy balance models used to simulate the 20th century.

It shows that
(a) each model uses a different value for “Total anthropogenic forcing” that is in the range 0.80 W m⁻² to 2.02 W m⁻²,
but
(b) each model is forced to agree with the rate of past warming by using a different value for “Aerosol forcing” that is in the range −1.42 W m⁻² to −0.60 W m⁻².
In other words, the models use values of “Total anthropogenic forcing” that differ by a factor of more than 2.5, and they are ‘adjusted’ by using values of assumed “Aerosol forcing” that differ by a factor of 2.4.
So, each climate model emulates a different climate system. Hence, at most only one of them emulates the climate system of the real Earth because there is only one Earth. And the fact that they each ‘run hot’ unless fiddled by use of a completely arbitrary ‘aerosol cooling’ strongly suggests that none of them emulates the climate system of the real Earth.
Richard
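
A back-of-envelope illustration of the compensation described above, with numbers invented for the sketch (not taken from Kiehl's Figure 2): a sensitive model paired with strong aerosol cooling and an insensitive model paired with weak aerosol cooling can hindcast the same twentieth-century warming.

```python
# Invented but representative numbers: warming ~ sensitivity * net forcing.
ghg_forcing = 2.4  # W m^-2 of non-aerosol anthropogenic forcing (assumed)

for sensitivity, aerosol in [(0.8, -1.5), (0.4, -0.6)]:
    # sensitivity in deg C per (W m^-2); aerosol forcing in W m^-2
    net = ghg_forcing + aerosol
    print(f"sensitivity {sensitivity}, aerosol {aerosol}: dT ~ {sensitivity * net:.2f} C")
# Both lines print ~0.72 C: very different sensitivities, same hindcast.
```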

Gene Selkov
September 5, 2013 8:14 am

ferd berple says:
> that really is the crux of the problem. the assumption that natural variability is simply noise around a mean and thus will average out to zero over short periods of time.
This assumption is taught at school but is almost never tested. There is something profoundly counter-intuitive in the way averages are assessed today. I would have allowed some slack a hundred or two hundred years ago, when all measurements were tedious, time-consuming, and difficult to track, so actual data had to be replaced with the central limit theorem.
There is no such hurdle today, in most cases. Many different types of measurements can be automated and the question of whether they converge or not, and how they vary (chaotically or not), can be resolved in straightforward ways. Instead, everybody still uses estimators, often preferring those that hide the nature of variability.
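
Such a test is straightforward to run today. A minimal sketch with arbitrary parameters: compare how quickly the spread of window means shrinks for white noise versus persistent AR(1) "red" noise.

```python
# Compare the spread of non-overlapping window means for white vs red noise.
import numpy as np

rng = np.random.default_rng(1)
n = 20000

white = rng.normal(0.0, 1.0, n)
red = np.zeros(n)                 # AR(1): x[t] = 0.9 * x[t-1] + noise
for t in range(1, n):
    red[t] = 0.9 * red[t - 1] + rng.normal(0.0, 1.0)

def window_mean_spread(x, window):
    means = [x[i:i + window].mean()
             for i in range(0, len(x) - window + 1, window)]
    return np.std(means)

for w in (15, 60, 240):
    print(w, round(window_mean_spread(white, w), 3),
          round(window_mean_spread(red, w), 3))
# The red-noise window means shrink far more slowly: persistence does not
# conveniently "average out to zero" over short windows.
```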

Jim G
September 5, 2013 8:18 am

I guess Niels Bohr was right when he said, “Prediction is very difficult, especially about the future”. And Yogi Berra said, “It’s tough to make predictions, especially about the future”, which is very similar. A philosopher and a scientist in agreement.

richardscourtney
September 5, 2013 8:26 am

Theo Goodwin:
Thank you for your obviously flattering mentions of me (I wish they were true) in your post at September 5, 2013 at 8:01 am.
You ask me to comment on points in that post with which I disagree. I have several knit-picking points which do not deserve mention, but there is one clarification which needs to be made.
The models parametrise effects of clouds because clouds are too small for them to be included in the large grid sizes of models. Hence, if clouds were understood (they are not) then their effects could only be included as estimates and averages (i.e. guesses).
Also, I have made a post which refutes the climate models on much more fundamental grounds than yours but – for some reason – it is in moderation.
Richard
PS Before some pedant jumps in saying “knit-picking” should be “nit-picking” because nits are insects, I used the correct spelling. Knit-picking was a fastidious task in Lancashire weaving mills. Small knots (called “knits”) occurred and reduced the value of cloth. For the best quality cloth these knits had to be detected, picked apart and the cloth repaired by hand. It was a detailed activity which was pointless for most cloth and was only conducted when the very best cloth was required.

milodonharlani
September 5, 2013 8:26 am

ENSO variability during the Little Ice Age & the “Medieval Climate Anomaly”, as the MWP is now politically correctly called:
http://repositories.lib.utexas.edu/handle/2152/19622
Climate scientists are only now getting around to addressing the question of natural variability that should have preceded any finding of an “unprecedented human fingerprint”.

Jean Parisot
September 5, 2013 8:27 am

“See my comment above on Dansgaard-Oeschger (D-O) events. They are called Bond events during an interglacial.”
What we really need is a tool or decision matrix that attempts to identify the start of one of these D-O or Bond events. All of the effort invested in trying to measure, explain, and manage the change in slope of the global temperature trend isn’t important in comparison to the need for a tool to detect these events as soon as possible. I’ve been impressed with how modern agriculture in the US and Canada responded to this year’s cooling change, but a global event will take more time.
We know they happen regularly and we know the magnitude; that seems a bit more important than a tiny warming trend, regardless of the cause.
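
One candidate building block for such a tool, sketched under assumed parameters (real D-O/Bond detection would need proxy-specific tuning): a one-sided CUSUM alarm that flags a sustained cooling shift.

```python
# One-sided CUSUM alarm for a sustained cooling shift; `slack` and
# `threshold` are placeholders, not tuned to real D-O/Bond signatures.
import numpy as np

def cusum_cooling_alarm(series, ref_mean, slack=0.05, threshold=1.0):
    """Return the first index where cumulative drift below `ref_mean`
    exceeds `threshold`, or None if no alarm fires."""
    s = 0.0
    for i, x in enumerate(np.asarray(series, dtype=float)):
        s = min(0.0, s + (x - ref_mean + slack))  # accumulate cooling drift
        if s < -threshold:
            return i
    return None

# e.g. alarm_index = cusum_cooling_alarm(annual_anoms, ref_mean=0.0)
```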

September 5, 2013 8:34 am

If Bart said that 2+2 was 3 and Sally said it was 5, would we conclude that “on average” they’d been taught good math skills?
The notion of averaging the output of different models and then comparing them to observations is ludicrous unto itself.

Theo Goodwin
September 5, 2013 8:35 am

richardscourtney says:
September 5, 2013 at 8:26 am
Thanks for your clarification. I look forward to your post. I did not mean to flatter you. You are a tireless and gifted explainer. That is not flattery. (Oh, it occurs to me I can offer a bit of advice. Beware the trolls lest they distract you.)

David S
September 5, 2013 8:38 am

How many times have we seen new evidence that AGW is baloney? Many times, of course. And yet the government continues its claim that AGW is a huge problem that must be dealt with. So here’s the problem: we live in something similar to George Orwell’s 1984. Reality and truth no longer matter. The correct answer is whatever the government says it is, reality notwithstanding. Anyone who disagrees gets electric shocks until he does agree. OK, they haven’t started the electric shocks yet, but the skeptics are labeled “deniers” and some folks suggest they be sent to re-education camps.