95% of Climate Models Agree: The Observations Must be Wrong

Note: This is a repost from Dr. Roy Spencer’s blog entry last Friday. I’ve done so because it needs the wide distribution that WUWT can offer. The one graph he has produced (see below) says it all. I suggest readers use their social media tools to share this far and wide. – Anthony

by Roy W. Spencer, Ph.D.

I’m seeing a lot of wrangling over the recent (15+ year) pause in global average warming…when did it start, is it a full pause, shouldn’t we be taking the longer view, etc.

These are all interesting exercises, but they miss the most important point: the climate models that governments base policy decisions on have failed miserably.

I’ve updated our comparison of 90 climate models versus observations for global average surface temperatures through 2013, and we still see that >95% of the models have over-forecast the warming trend since 1979, whether we use their own surface temperature dataset (HadCRUT4), or our satellite dataset of lower tropospheric temperatures (UAH):

[Figure: 90 CMIP5 climate models vs. observations, global surface temperature, 1979-2013 (CMIP5-90-models-global-Tsfc-vs-obs-thru-2013)]
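A minimal sketch of the kind of model-versus-observation trend comparison described here, using synthetic placeholder series rather than actual CMIP5 output or HadCRUT4/UAH data (so the printed numbers are illustrative only, not Dr. Spencer's result):

```python
# Sketch: compute the fraction of "models" whose 1979-2013 least-squares trend
# exceeds the "observed" trend. All series below are made-up placeholders.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1979, 2014)                                            # 1979-2013 inclusive
obs = 0.012 * (years - 1979) + rng.normal(0, 0.08, years.size)           # hypothetical observations
models = 0.025 * (years - 1979) + rng.normal(0, 0.10, (90, years.size))  # 90 hypothetical model series

def trend_per_decade(series):
    """Ordinary least-squares slope, expressed in deg C per decade."""
    return 10.0 * np.polyfit(years, series, 1)[0]

obs_trend = trend_per_decade(obs)
model_trends = np.array([trend_per_decade(m) for m in models])
frac_over = np.mean(model_trends > obs_trend)

print(f"observed trend: {obs_trend:.3f} C/decade")
print(f"models over-forecasting the observed trend: {100 * frac_over:.0f}%")
```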

Whether humans are the cause of 100% of the observed warming or not, the conclusion is that global warming isn’t as bad as was predicted. That should have major policy implications…assuming policy is still informed by facts more than emotions and political aspirations.

And if humans are the cause of only, say, 50% of the warming (e.g. our published paper), then there is even less reason to force expensive and prosperity-destroying energy policies down our throats.

I am growing weary of the variety of emotional, misleading, and policy-useless statements like “most warming since the 1950s is human caused” or “97% of climate scientists agree humans are contributing to warming”, neither of which leads to the conclusion we need to substantially increase energy prices and freeze and starve more poor people to death for the greater good.

Yet, that is the direction we are heading.

And even if the extra energy is being stored in the deep ocean (if you have faith in long-term measured warming trends of thousandths or hundredths of a degree), I say “great!”. Because that extra heat is in the form of a tiny temperature change spread throughout an unimaginably large heat sink, which can never have an appreciable effect on future surface climate.

If the deep ocean ends up averaging 4.1 deg. C, rather than 4.0 deg. C, it won’t really matter.

174 Comments
cnxtim
February 10, 2014 11:54 am

AGW is warmist “science’s” Ponzi scheme.

February 10, 2014 11:59 am

It’s not as bad as we thought.
Mistakes were made.
But politically, is it yet worth saying “Don’t Panic”?
“Don’t Panic” sounds like poor guidance (to the fearful and ignorant at least).

eyesonu
February 10, 2014 12:04 pm

Thank you Dr. Spencer.
I love that spaghetti graph.

Latitude
February 10, 2014 12:07 pm

News Flash!……
Hidden unmeasurable heat causes snow….and polar vortexes
…film at 11

Larry Ledwick
February 10, 2014 12:10 pm

95% of climate models agree that they totally missed predicting real temperatures and are unfit for their intended purpose. A large portion of them, even at their lowest projected temperature limit, never touch real-world measured temperatures.
I think the project proposed in another thread recently to identify and blacklist the incompetent models, and toss out the fraction that never even achieve bad predictions, should be pushed forward with all possible speed.
If, over a time span of 16 years, a plot of the model never once crosses the plot of real measured temperatures, it is obviously a completely incompetent model and not worth the power bill to run its simulations. It only serves to inflate the range of predictions toward the warm side, and serves no other useful purpose.

Ebeni
February 10, 2014 12:10 pm

Ah!! That pesky Mother Nature!! She is SUCH a denier.

Tim Obrien
February 10, 2014 12:15 pm

A bad model is a bad model is a bad model. They are failing to prove their point and need to go back to square one.

eyesonu
February 10, 2014 12:22 pm

Larry Ledwick says:
February 10, 2014 at 12:10 pm
…. 95% of climate models agree that they totally missed predicting real temperatures and are unfit for their intended purpose. A large portion of them, even at their lowest projected temperature limit, never touch real-world measured temperatures.
I think the project proposed in another thread recently to identify and blacklist the incompetent models, and toss out the fraction that never even achieve bad predictions, should be pushed forward with all possible speed.
=============
That fraction that you would “toss out” would be 95/100 or 95%. Sounds good to me.

Larry Ledwick
February 10, 2014 12:26 pm

It would be nice to have an image where you could de-select plots for certain models so people could see how the ensemble of model predictions changes as you drop the worst models from the plot. If a model was consistently biased toward the warm side it would presumably grow more and more out of touch with real-world temps. If you could drop the worst 10%, worst 20%, and worst 50% of the models and do a visible comparison (animated gif?) it would be a great visual tool to show people how bad some of the models are.
The best would be something like Wood for Trees, where you could select and de-select model runs at will to see what was even in the same ballpark as reality.
A quick calibrated-eyeball evaluation of that mess of spaghetti suggests to me that only about 5 or 6 are even in the running as reasonable approximations of reality.
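A minimal sketch of the “drop the worst models” idea suggested above: rank hypothetical model series by their RMSE against observations and recompute the ensemble-mean trend as the worst performers are discarded. The arrays are synthetic placeholders, not actual CMIP5 runs.

```python
# Sketch: rank synthetic "models" by RMSE against "observations", then see how
# the ensemble-mean trend changes as the worst 10%, 20%, and 50% are dropped.
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1983, 2014)
obs = 0.01 * (years - 1983) + rng.normal(0, 0.08, years.size)
models = (rng.uniform(0.01, 0.04, (90, 1)) * (years - 1983)
          + rng.normal(0, 0.10, (90, years.size)))

rmse = np.sqrt(np.mean((models - obs) ** 2, axis=1))  # fit of each model to the observations
order = np.argsort(rmse)                               # best-fitting models first

for keep in (1.0, 0.9, 0.8, 0.5):
    kept = order[: int(round(keep * len(order)))]
    ens_trend = 10 * np.polyfit(years, models[kept].mean(axis=0), 1)[0]
    print(f"keeping best {keep:.0%}: ensemble-mean trend = {ens_trend:.3f} C/decade")
```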

richardscourtney
February 10, 2014 12:30 pm

Friends:
It seems sensible to copy two posts from the thread discussing the superb article by Roger A. Pielke Sr. It is here.
The first post I here quote was from Roger A. Pielke Sr. in reply to me and says
——————
Roger A. Pielke Sr. says:
February 8, 2014 at 2:40 pm
Hi Richard
Thank you for your follow up. We are in complete agreement, as you wrote, that
Hence, the models are excellent heuristic tools. And they should be used as such.
But there is no reason to suppose that any of them is a predictive tool. And averaging model predictions (e.g. CMIP5) is an error because average wrong is wrong.

The bottom line, based on our perspective of the models, is that IPCC Annex 1 results are fundamentally flawed.
Roger
——————
The importance of that “bottom line” is the subject of this thread, and is spelled-out in the second post I copy from that thread which is from me to dbstealey.
———————
richardscourtney says:
February 8, 2014 at 3:35 pm
dbstealey:
In your post at February 8, 2014 at 3:16 pm you say

The public wants correct answers. But we aren’t getting them. We are getting wrong answers instead, based on the preconceived assumption that there is a “carbon” crisis.

Indeed so.
I point out that
(a) in this thread we are discussing that the climate models are being used as predictive tools when they have no demonstrated predictive skill
and
(b) in another thread we are discussing that the statistical methods used by so-called ‘climate science’ are not fit for purpose
and
(c) in past threads we have discussed the problems with acquisition of climate data notably GASTA
and
(d) in another thread there is discussion of climate sensitivity which is a reflection of the problem of an inadequate theory of climate change.
Simply, the only thing about climate which is known with certainty is that nothing about climate behaviour is known with sufficient certainty to assist policy making. It is better to have no information than to be influenced by wrong information when formulating policy.
Richard
______________________________
Richard

eyesonu
February 10, 2014 12:33 pm

To expand further on Larry Ledwick’s comment above. How about attaching the names of the so-called “climate scientists” to their individual model plots with a comparison of the observed data.
That would probably be cause for alarm within the ranks for the “cause”.

Walter Allensworth
February 10, 2014 12:34 pm

“If the deep ocean ends up averaging 4.1 deg. C, rather than 4.0 deg. C, it won’t really matter.”
First, I’m a CAGW skeptic, so you’re singing to the choir here a little, but when I see things like the statement above in quotes, I cringe.
REALLY?
How do you know it won’t really matter? Have you done a dynamic energy balance study?
Can you cite a peer reviewed reference that shows we can dump 10^22 joules of energy every year into the ocean and it won’t really matter? It won’t change circulation? It won’t cause long-term adverse effects in the thermohaline cycle?
I would love to see this reference and be convinced, because it would be a great way to defuse the whole CAGW meme.

February 10, 2014 12:34 pm

It’s worth keeping in mind that the model runs documented by the IPCC were not alone. Other model runs — thousands of them — no doubt showed reasonable temperature spans into the future. But those runs were tossed out, never shown, as they were not ideologically correct.
We think of a model run as “put in the parameters, let it run and see what comes out.” But in fact, it is an iterative process, run over and over again with tweaks to the “immutable physics” and “known observations” and continuous “tuning” of various algorithms to produce a result that makes the modelers happy.
Only then does it get published. We see only this final result. The Harry_Read_Me.txt file and other ClimateGate documents show a lot of “behind the scenes” tweaks and bodges to the input to produce the desired output from models.
===|==============/ Keith DeHavelle

Jack
February 10, 2014 12:34 pm

Can’t remember who did the clip, but they examined the temperature record. They exposed that the warmists had actually tilted the x-axis back, so they could show the graph as rising. They did not start at zero (start point 1978) but below zero to accentuate the rise (hahaha).
With all that, the temperature was still the same in 2013 as in 1978. The debunkers of this graph also noted that 1978 was chosen as the starting point because it was when the world-is-going-to-freeze scare started.
Those were among many other faults with the graph.
Point is that anyone who believes in the graphs the models produce is being well and truly suckered.

MattS
February 10, 2014 12:36 pm

Who runs the 4 models that are at or below UAH?

February 10, 2014 12:42 pm

The politics and Press are shameful.
How many people know England lost 30,000 elderly to last winter’s cold? There is Socialism for you.
How many people know South Dakota lost 20,000 head of cattle and thousands of other farm animals to a blizzard in October 2013?
How many people know New Zealand and Scotland lost thousands of lambs to early winter storms?
Now, northern Indian reservations, and I assume non-reservation areas as well, are short on propane gas. The number of people without electricity is now carrying over into the next storm.
The death toll is piling up while the media, including FOX, fail to report the loss of life due to a colder period from a sunspot minimum.
Instead of embracing what is going on, the media and the US government are in a goose-step march right down the road to destruction with the hypothesis of Man-Made Global Warming.
Shameful.
Paul Pierett

Darren Potter
February 10, 2014 12:51 pm

MattS: “Who runs the 4 models that are at or below UAH?”
Great observation and great question.
Don’t be surprised if those “Who” suddenly disappear once the AGW Climate Cabal gets wind of their models. The AGW CC can’t have any dissenting views, especially models that potentially go along with Mother Nature… 😉

David in Texas
February 10, 2014 12:51 pm

Ok, I have to ask. Why does the graph begin in 1983, but the label says “(’79-2013)”? Anyone?

Larry Ledwick
February 10, 2014 12:56 pm

David in Texas says:
February 10, 2014 at 12:51 pm
Ok, I have to ask. Why does the graph begin in 1983, but the label says “(’79-2013)”? Anyone?

I believe it is due to the plot being a running 5-year mean, so it begins 5 years after the data begins.
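A small illustration of why a 5-year running mean of data beginning in 1979 cannot have a value before 1983, assuming a trailing mean (one plausible reading of the chart); the anomaly series is a placeholder:

```python
# Sketch: trailing 5-year running mean of a placeholder series starting in 1979.
import numpy as np

years = np.arange(1979, 2014)
temps = np.linspace(0.0, 0.5, years.size)   # placeholder anomaly series

window = 5
running_mean = np.convolve(temps, np.ones(window) / window, mode="valid")
mean_years = years[window - 1:]             # label each mean by the last year in its window

print(mean_years[0])      # -> 1983: earliest year for which a full 5-year mean exists
print(running_mean[:3])   # first few 5-year means
```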

richardscourtney
February 10, 2014 1:01 pm

Walter Allensworth:
Your post at February 10, 2014 at 12:34 pm

“If the deep ocean ends up averaging 4.1 deg. C, rather than 4.0 deg. C, it won’t really matter.”
First, I’m a CAGW skeptic, so you’re singing to the choir here a little, but when I see things like the statement above in quotes, I cringe.
REALLY?
How do you know it won’t really matter? Have you done a dynamic energy balance study?
Can you cite a peer reviewed reference that shows we can dump 10^22 joules of energy every year into the ocean and it won’t really matter? It won’t change circulation? It won’t cause long-term adverse effects in the thermohaline cycle?
I would love to see this reference and be convinced, because it would be a great way to defuse the whole CAGW meme.

Why do you want a reference? Do you accept everything you are told?
You have a brain, why not use it instead of accepting things that are “referenced” to someone else?
The thermal capacity of water is more than a thousand times greater than the thermal capacity of air. So, heat that goes into the ocean raises the ocean temperature much less than if it had gone into the air.
The transfer of heat is from hot to cold. So, a tiny rise in ocean temperature makes little or no difference to the rate at which the oceans can release heat to the air. In other words, if heat is being pumped into the oceans (and there is NO evidence that it is) then that effectively removes that heat as a possible cause of discernible global warming.
In other words, “If the deep ocean ends up averaging 4.1 deg. C, rather than 4.0 deg. C, it won’t really matter” because it can’t.
Richard
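A back-of-envelope check of the heat-capacity argument above, using standard rounded textbook values rather than anything measured in this thread:

```python
# Standard rounded values, not measurements from this thread
M_OCEAN = 1.4e21    # kg, approximate mass of the global ocean
M_ATMOS = 5.1e18    # kg, approximate mass of the atmosphere
CP_WATER = 4186.0   # J/(kg K), specific heat of (sea)water, approximate
CP_AIR = 1005.0     # J/(kg K), specific heat of air at constant pressure

heat_cap_ocean = M_OCEAN * CP_WATER   # ~5.9e24 J/K
heat_cap_atmos = M_ATMOS * CP_AIR     # ~5.1e21 J/K
print(f"ocean/atmosphere heat-capacity ratio: ~{heat_cap_ocean / heat_cap_atmos:.0f}x")

# If ~1e22 J per year really were going into the ocean (the figure quoted above),
# the implied warming of the whole ocean would be tiny:
dT_per_year = 1e22 / heat_cap_ocean
print(f"implied whole-ocean warming: ~{1000 * dT_per_year:.1f} millikelvin per year")
```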

Steven Mosher
February 10, 2014 1:01 pm

“Simply, the only thing about climate which is known with certainty is that nothing about climate behaviour is known with sufficient certainty to assist policy making.”
well, that bears some skepticism.
The person who gets to decide if a wrong model is still useful is NOT a blog commenter.
The person who gets to decide is a policy maker.
Suppose I am a policy maker. Policy making is not science. Policy making can be guided by science or informed by science, but in the end it is not making hypotheses and predictions.
It’s making decisions based on many factors: science, economics, self interest, lobbying, principles, constituents interests, bribes, etc
As a policy maker I am well within my rights to look at a model that is biased high and STILL USE IT.
For example. Suppose I ask you to predict sea level rise in the next 100 years.
Party A, tells me to extrapolate from the past and to expect 20cm rise.
Climate modeller tells me to expect 1meter.
Historian tells me that the past has seen sea levels at least 20 meters above the current level.
All of these can assist the policy maker. None of them can DETERMINE policy with the iron fist of logic or the soft prod from induction. None of them spits out a policy. In the end the policy maker will have to weigh the uncertainty of each of these disciplines and the costs and benefits.
A cautious policy maker may look at history and argue that he wants to be really safe
http://www.dailymail.co.uk/news/article-1386978/The-Japanese-mayor-laughed-building-huge-sea-wall–village-left-untouched-tsunami.html
Blog commenters do not get to tell policy makers what information assists them.

YEP
February 10, 2014 1:02 pm

David in Texas says:
February 10, 2014 at 12:51 pm
Ok, I have to ask. Why does the graph begin in 1983, but the label says “(’79-2013)”? Anyone?
*********************
Perhaps because the graph has 5-year running means? The first such mean from data beginning in 1979 would be 1983.

Russ R.
February 10, 2014 1:03 pm

Two questions:
1. Spencer states: “we still see that >95% of the models have over-forecast the warming trend since 1979”. Why does the chart have a start date of 1983 rather than 1979?
2. Is one of the RCP scenarios being modeled here (e.g. RCP 4.5, RCP 6, RCP 8.5)? If not, what GHG concentration data are being fed into the models? How do those concentrations compare to the actual observed conditions?

NotTheAussiePhilM
February 10, 2014 1:07 pm

It is articles like this that make me not completely give up on WUWT
– an actual professional scientist has done some analysis..
Here is another one that you linked to in the weekly round up
– it contains a fairly similar message – the models are over cooked…
http://www.c3headlines.com/2014/01/2013-nasa-hansen-climate-model-prediction-global-warming-reality-those-stubborn-facts.html
Both worthwhile, IMHO…
– unlike, ahem, this drivel..
http://wattsupwiththat.com/2014/02/07/friday-funny-two-guys-with-a-ruler-blow-up-the-white-house-global-warming-video-claims/
– IMO, publishing mindless drivel like this one, which some may find humorous (I don’t because it’s just too moronic for my tastes), dilutes the more intelligent content of WUWT….

February 10, 2014 1:08 pm

I appreciate the honesty of this graph. Though many have said there has been no increase in temperature in the last 15 years, this graph actually shows differently, no?
I’m no alarmist, but it shows between a 0.2 and 0.3 degree increase. Over 100 years, if that rate is constant, we’re talking almost a 2 degree Celsius difference, no? And that’s if it stays steady. There’s every reason to believe it could increase more when the sun starts showing more activity.
I mention this out of wonderment and not out of being contrary. I’d love an explanation of how people are saying the temp hasn’t changed and why we shouldn’t be worried about this continuing. Thank you!

Theo Goodwin
February 10, 2014 1:12 pm

richardscourtney says:
February 10, 2014 at 12:30 pm
Friends:
“It seems sensible to copy two posts from the thread discussing the superb article by Roger A. Pielke Sr. It is here.
The first post I here quote was from Roger A. Pielke Sr. in reply to me and says
——————
Roger A. Pielke Sr. says:
February 8, 2014 at 2:40 pm
Hi Richard
Thank you for your follow up. We are in complete agreement, as you wrote, that
Hence, the models are excellent heuristic tools. And they should be used as such.”
The difference between hypotheses and heuristic tools is of great importance. Darwin’s claim that similarity of morphology indicates a common ancestor had to be downgraded from a scientific truth to a heuristic tool. Because the vast majority of scientists are common-sense realists, the distinction is doubly important. Computer models can serve as heuristic tools only.

Theo Goodwin
February 10, 2014 1:15 pm

Steven Mosher says:
February 10, 2014 at 1:01 pm
Totally agree with everything you wrote. But having written it, you cannot say that science supports your decision as policy maker. So, would you please inform the policy makers of your reasoning.

richardscourtney
February 10, 2014 1:15 pm

NotTheAussiePhilM:
Thankyou for telling us at February 10, 2014 at 1:07 pm

Both worthwhile, IMHO…
– unlike, ahem, this drivel..
http://wattsupwiththat.com/2014/02/07/friday-funny-two-guys-with-a-ruler-blow-up-the-white-house-global-warming-video-claims/
– IMO, publishing mindless drivel like this one, which some may find humorous (I don’t because it’s just too moronic for my tastes), dilutes the more intelligent content of WUWT….

It is good to be informed that we are in the presence of a genius because, otherwise, some of us may not have noticed.
Thankyou for the information.
Richard

NotAGolfer
February 10, 2014 1:17 pm

PLEASE QUIT implicitly validating their data sets as anything but trash!
The models are wrong DESPITE the fact that a false warming trend has been added onto the raw data via invalid adjustments and homogenizations. Even with this head start, they are wrong. There is hardly a thing right, in fact, in this field called “climate science.”
Please quit throwing them a bone by saying things like “And if humans are the cause of only, say, 50% of the warming …” They have tortured the data beyond use. We should start all over with the raw data, or with better experiments.

jono1066
February 10, 2014 1:20 pm

I have a book called `Oceans` at home, written in the ’70s. As the closing piece it wistfully talks of the future, where consideration is now being given to sinking nuclear thermal power plants into the deep oceans to warm them from their inhospitable cold, to promote and sustain living organisms.
How times change.

Pamela Gray
February 10, 2014 1:25 pm

I also believe this is in essence, a Ponzi scheme meant to put into power and enrich a group of elite rich hippies who remembered their days of eating top ramen noodles, being dismissed by the media, and driving barely legal rickety vans, and who think themselves benevolent. When these hippies grew up and got jobs and owned/led major businesses and corporations, they suddenly became a not insignificant source of campaign cash that many politicians on both sides drooled after. So they were allowed to hold sway on who to tax next and who should win contracts built on subsidies and who got research grants. We get what we pay for. So in every country vote out any who did not fight tooth and toenail against this greed-without-work, anti-freedom mindset.

Mindert Eiting
February 10, 2014 1:27 pm

Larry Ledwick: ‘A quick calibrated eyeball evaluation of that mess of spaghetti seems to me that only about 5 or 6 are even in the running for reasonable approximations of reality’.
No, we need an integral judgement. If you take a multiple choice exam and you answer 95% of the items incorrectly, what would you achieve with the five percent you answered correctly? The judgement is that you failed miserably, like Spencer concluded, and the five percent is only correct by chance. The climate scientists are in the mourning phase of negotiating about a few correct models, missing heat hiding in the deep ocean, and a temperature development which is only a pause, as their hoped future will show. They may get some help from family and friends but this is not our task.

Pamela Gray
February 10, 2014 1:29 pm

richardscourtney! LMAO!

Mark Hladik
February 10, 2014 1:30 pm

While somewhat off-topic, it is also indicative:
Prior to the SuperBowl on 02 February 2014, I read that some wags used the “Madden NFL” game to run a series of simulations (ensemble, anyone?) to ‘predict’ the winner of SuperBowl 48.
Dozens of simulations, and the “consensus” of the simulations was Denver, most often by 3 or 3.5 points (yes, I know, there is no ‘half-point’ in football). I checked the odds just before kickoff, and sure enough, the odds-makers had Denver winning by at least three points.
Highly instructive. One cannot “model” a stochastic system (unless a plethora of assumptions are made … … … )

February 10, 2014 1:30 pm

Obviously, things were going just great in the ’80s and ’90s as they got the global temperature right… for the wrong reasons.
One issue with climate models: the testing period to validate or invalidate them takes years, but what is inexcusable is that climate scientists and model builders were convinced they had it right from the beginning.
So right that, when it became obvious the models had it wrong/were too warm, rather than make appropriate adjustments to the sensitivity and feedback equations, they instead came up with creative explanations to justify why the models really are right… but something else that was unexpected is temporarily interfering.
This strategy might have worked quite effectively if it were a laboratory experiment and time had expired for the testing period several years ago. However, time continues to elapse here in the real world, and time continues to harshly judge the increasing disparity between observations and models.
The modelers, climate scientists and politicians can’t shut down the experiment with statements like “the science is settled” or “the debate is over” because they can’t stop time from ticking on, and with time comes fresh empirical data.
This data is the only way to judge all theories and science, to see if they can stand up to the test.
Global climate models appear to be a catastrophic failure, and those justifying them as evidence for making critical decisions regarding governmental policies only look more and more foolish and dishonest with time.

rgbatduke
February 10, 2014 1:32 pm

I’ve been hammering exactly the same point on two of yesterday’s and the day before’s threads. Roy’s figure doesn’t do it justice. If one compares to figure 9.8a of the IPCC AR5, one notes that the leftmost part of his graph includes part of the training data, the “reference period” from 1961 to 1990 used to initialize and pretend to validate the CMIP5 models. That is, the models and HADCRUT4 are virtually constrained to come together in 1990, not the starting point in Roy’s graph (which looks like a redrawn variation of AR5’s infamous figure 1.4 from the SPM).
I’ve been reading over chapter 9 of AR5 in some detail, as it deals with the statistical basis for claims of validation and accuracy of model predictions. It is interesting to note that in sections 9.2.2.2 and 9.2.2.3, AR5 openly acknowledges that the Multimodel Ensemble (MME) mean is, well, dubious at best, utterly meaningless at worst. To quote (again) from section 9.2.2.3:
…collections such as the CMIP5 MME cannot be considered a random sample of independent models. This complexity creates challenges for how best to make quantitative inferences of future climate…
To put it bluntly, it doesn’t “create challenges”. The correct statement is that there is no possible basis in the theory of statistical analysis for assigning a meaning to the MME mean! Specific problems that they mention in section 9.2.2 with this mean include:
a) The models in this “ensemble” (it isn’t an ensemble in any sense that is meaningful in statistics, so we must presume that they really intended the term “collection” or “group”) are not independent. This means that even if the model results were in some defensible sense samples drawn from a statistical distribution “of models” the variance and mean cannot be quantitatively analyzed using e.g. the central limit theorem and the error function. Any assignment of “confidence” on the basis of MME mean results is pure voodoo with no defensible basis in statistics.
b) The models in this ensemble do not all contribute the same number of “perturbed parameter” runs from the per model perturbed parameter ensemble (PPE) of outcomes when tiny changes are made to initial conditions and model parameters. These results do constitute a defensible statistical sampling of outcomes — for that one model, per model — to the extent that a valid statistical method for doing a Monte Carlo sampling of the phase space of possible initial conditions is used. The PPE simultaneously tells one how robust the model results are and what the statistical spread of results around the PPE mean is, which in turn can be used in an ordinary hypothesis test to gauge the probability of observing the actual climate given the null hypothesis “this is a perfect model”. Still, when one model only generates 10 PPE runs and another generates 160 and the two PPE means are given equal weight in the meaningless MME super-mean, this is simply a statistical absurdity. One is expected to have 4 times the variance of the other and even the crudest of chi-square methodology would discount the lesser model’s statistical relevance in the final number.
c) Finally, 9.2.2.3 openly acknowledges that mere model performance is ignored in the construction of the MME mean. That is, the IPCC is perfectly happy to average in obviously failed models that run far too hot as long as it keeps the MME mean equally high, even though I literally cannot imagine any sort of statistical analysis where such a practice could be justifiable.
This decision is not arbitrary. One has (or should have) direct access to the PPE data, and can directly compare, per model, the degree to which the actual predictions of the model with perturbed parameters overlap the observed temperature, and interpret this as the probability of the natural occurrence of the observed temperatures if the model were a perfect model and all variation were due to imperfect specification of model parameters and initial conditions. That is, one can perform a perfectly classic hypothesis test using the PPE data, per model, to clearly reject failed models (p-values less than 0.05), to call into question models with low p-values (given an “ensemble” of model results, Bonferroni corrections mean that rejection should occur for substantially higher p-values, given all of the chances to get an acceptable one and the known/expected overlap in the various model lineages), and to include at most the models that have a reasonable p-value in any sort of collective analysis.
These are the errors they acknowledge. Ones they make no mention of include the fact that all of the models are effectively validated against the reference period, and that the MME mean utterly fails to describe the entire thermal history of the last 155 years in HADCRUT4 as it stands!
This is perfectly obvious from a glance at figure 9.8a in AR5. The black line (actual HADCRUT4 measured/computed surface temperature) lies above the red line (MME mean) a grand total of perhaps 25 years out of 155, including the training set! If one just estimates the p-value for this, assuming a roughly 5-year autocorrelation time and random excursion in both cases from some sort of shared mean behavior with equal probability of being too high or too low, the p-value for the overall curve is on the order of 0.0001 or less. Less, because there are two clearly visible stretches — from 1900 to 1940 and from 2000 to the present — where the MME mean is always greater than the actual temperature.
The stretch from 1900 to 1940 is especially damning, since in the 20th century the warming visible in HADCRUT4 in 1900 through 1950 exactly matches the warming observed from 1950 through 2000, so much so that only experts sufficiently familiar with HADCRUT4 to be able to pick up specific features such as the 1997-1998 Super ENSO spike at the right of the latter record would ever be able to differentiate them. The MME mean completely smooths over this entire 50 year stretch, effectively demonstrating that it is incapable of correctly describing the actual natural, non-forced warming that occurred over this period!
Even before one looks at the CMIP5 models one at a time, and fails to validate most of them one at a time for a variety of reasons (not just failure to get the global mean surface temperature anywhere near correct, but for failure to get weather patterns, rainfall, drought, tropospheric warming, temperature autocorrelation and variance, and much more right as well) nobody could possibly look at 9.8a in AR5 and then assert a prediction, projection, or prophecy of future climate state of the Earth based on the MME mean with any confidence at all!
If one eliminates the obviously failed models from CMIP5 from playing any role whatsoever in forecasting future warming (because there is no defensible basis for using failed models to make forecasts, is there?), if one takes the not-failed yet models and weights their contribution to mean and variance of the collective model average on the surviving residual models, if one accounts for the fact that the surviving models are all clearly still consistently biased on the warm side and underestimate the role of natural variability when hindcasting the bulk of the 20th century outside of the training/reference interval, there would be little need to add a Box 9.2 to AR5 — basically a set of apologia for “the hiatus”, what they are calling “the pause” because ordinary people know what a pause is but are a bit fuzzy on the meaning of hiatus and neither one is particularly honest as an explicit description of “a period of zero temperature increase from 1997 to the present”.
Although the remaining models would still very likely be wrong, the observed temperature trend wouldn’t be too unlikely given the models and hence it cannot yet be said that the models are probably wrong.
And I promise, the adjusted for statistical sanity CMIP5 MME mean, extrapolated, would drop climate sensitivity by 2100 like a rock, to well under 2 C and possibly to as low as 1 C.
Where is the honesty in all of this? Is not the entire point to educate the poor policy makers in the limits in the statistical confidence of model projections? How can one possibly publish chapter 9, openly acknowledge in one single numbered paragraph that the MME mean is a meaningless quantity that nobody knows how to transform into confidence intervals because it is known to be corrupted by multiple errors that they do not bother to try to accommodate, and then make all sorts of bold statements of high confidence in the SPM?
High confidence based on what, exactly? Somebody’s “expert opinion”? A bogus average of failed models that artificially raise climate sensitivity by as much as 2 C over any sort of sane bound consistent with their own observational data? Or the political needs of the moment, which most definitely do not include acknowledging that they’ve been instrumental in the most colossal scientific blunder in recorded history, one that cost enough money to have ended world poverty three times over, to have treated billions of the world’s poorest people for easily preventable diseases, to have built a system of universal education — I mean, what can one do with a few trillion dollars and the peacetime energies of an entire global civilization, when CAGW is no longer a serious concern?
We may, possibly, soon find out.
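A minimal sketch of the per-model hypothesis test described above: treat one model’s perturbed-parameter runs as the null distribution and ask how extreme the observed trend is within that spread. All series here are synthetic placeholders, not actual CMIP5 or HadCRUT4 data.

```python
# Sketch: empirical hypothesis test of one hypothetical model against hypothetical
# observations over a 1998-2012-style window, using its own perturbed-parameter runs.
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1998, 2013)

def trend(series):
    return np.polyfit(years, series, 1)[0]           # deg C per year

obs = rng.normal(0.0, 0.07, years.size)              # hypothetical flat observations
obs_trend = trend(obs)

# Hypothetical perturbed-parameter ensemble (PPE): 40 runs from one warm-biased model
ppe_runs = 0.025 * (years - years[0]) + rng.normal(0, 0.07, (40, years.size))
ppe_trends = np.array([trend(run) for run in ppe_runs])

# Empirical two-sided p-value: how often does the PPE itself produce a trend at
# least as far from its own mean as the observed trend is?
dist_obs = abs(obs_trend - ppe_trends.mean())
p = np.mean(np.abs(ppe_trends - ppe_trends.mean()) >= dist_obs)
print(f"obs trend {obs_trend:+.3f} C/yr, PPE mean {ppe_trends.mean():+.3f} C/yr, empirical p ~ {p:.3f}")
# A small p (e.g. < 0.05, before any multiple-comparison correction) would argue
# for rejecting this model rather than averaging it into an ensemble mean.
```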

Adam
February 10, 2014 1:39 pm

Instead of being so negative about the climate models why not focus your attention on changing the observational record to close the gap? [/sarc]

negrum
February 10, 2014 1:40 pm

Steven Mosher says:
February 10, 2014 at 1:01 pm
“…Blog commenters do not get to tell policy makers what information assists them.…”
----
That seems to be the precise attitude which resulted in the CAGW meme. Perhaps if policy makers paid more attention to blogs, they would be able to make better decisions?

AlexS
February 10, 2014 1:44 pm

Another post based on faulty “managed” temperature databases…

Not Sure
February 10, 2014 1:45 pm

A cautious policy maker may look at history and argue that he wants to be really safe
http://www.dailymail.co.uk/news/article-1386978/The-Japanese-mayor-laughed-building-huge-sea-wall–village-left-untouched-tsunami.html

FTFA:

But 10-term mayor Wamura never forgot how quickly the sea could turn. Massive earthquake-triggered tsunamis flattened Japan’s northeast coast in 1933 and 1896. In Fudai, the two disasters destroyed hundreds of homes and killed 439 people.
‘When I saw bodies being dug up from the piles of earth, I did not know what to say. I had no words,’ Wamura wrote of the 1933 tsunami in his book about Fudai, ‘A 40-Year Fight Against Poverty.’

Where are the bodies of those killed by human-caused climate change again? Where is this “history” we should be learning from?

Blog commenters do not get to tell policy makers what information assists them.

Inasmuch as lowly “blog commenters” are voters, they most certainly get to tell policy makers what the government’s policy should be. Or at least they should, in a democracy.

Duster
February 10, 2014 1:47 pm

Jack says:
February 10, 2014 at 12:34 pm
…1978 was chosen as the starting point because it was when the world is going to freeze scare started. Those were among many other faults with the graph. Point is that anyone that believes in the graphs the models produce is being well and truly suckered.

Not at all. The “new ice age” scare began in the late ’60s and early ’70s. Winkless and Browning, for instance, published “Climate and the Affairs of Men” in 1975, which looked for a new ice age of at least the same magnitude as the LIA. Their ability to forecast events proved to be just about as sound as the “team’s”, i.e. not sound at all.
A fairly severe drought in the early to mid-1970s interrupted that kind of thinking in the western US. In fact, in California the state initiated a study of “paleo” rainfall evidence. The study concluded that under the worst cases supported by the available evidence – 200 years of lower than “normal” rain and snowfall in the Sierra – California would not receive enough rainfall to support the population of the time. No number of dams or reservoirs will impound water that does not fall. Similar results continue to be published, e.g.: http://www.academia.edu/3634903/New_Evidence_for_Extreme_and_Persistent_Terminal_Medieval_Drought_in_Californias_Sierra_Nevada

Berényi Péter
February 10, 2014 1:47 pm

Yep, spot on.

The Great Helmet Debate
So why do we have people campaigning for mandatory helmet laws if there is scientific evidence that they may be harmful?
Common sense suggests that helmets should save lives. It is reasonable for people to have preconceptions based on common sense. Unfortunately many people, particularly those without scientific backgrounds, become quite distressed when scientific observations challenge their preconceptions. Even more so if those preconceptions are based on common sense.
A scientist, with an open mind, will become curious and start looking for mechanisms to explain the unexpected observations. A non-scientist is more likely to close his mind and assume that the observations must be wrong. People start cherry picking the observations that support their preconceptions and dismissing the observations that challenge their preconceptions. This is scientific fraud and the intellectual equivalent of sticking your fingers in your ears and saying: “La la la”.
Unfortunately there are a lot of people out there who are unwilling to have their preconceptions challenged and prefer to say: “La la la”.

KNR
February 10, 2014 1:51 pm

First rule of climate ‘science’: when the models and reality differ in value, it’s reality which is wrong. That takes care of this issue.
Stop thinking science and start thinking religion and you will see how this works in practice.

February 10, 2014 1:55 pm

eyesonu says:
February 10, 2014 at 12:33 pm
To expand further on Larry Ledwick’s comment above. How about attaching the names of the so-called “climate scientists” to their individual model plots with a comparison of the observed data.
———————————————————————
That is a great thought. They could then ‘proudly’ show the quality of their work for all to marvel at.

John Tyler
February 10, 2014 2:04 pm

Let’s say you have 100 mathematical models that purport to determine 2 + 2.
These models produce results that vary from 35 to 45, and when averaged, they average out to about 40.
Then some gaggle of “experts” concludes that 2+2= 40.
Conceptually, this sort of “analysis” is performed by climate “scientists.”
By the way, please tell me, what is it about today’s climate that the planet earth has not experienced over the 250,000 years prior to, say, 1850 (prior to the industrial revolution)?
Check out the comments of Maurice Strong ; he spilled the beans long ago regarding the true purpose of the AGW / CO2 scam.

rgbatduke
February 10, 2014 2:04 pm

95% of climate models agree that they totally missed predicting real temperatures and are unfit for their intended purpose. A large portion of them, even at their lowest projected temperature limit, never touch real-world measured temperatures.
Larry, it is worse even than that. In Box 9.2 in AR5, I quote:
However, an analysis of the full suite of CMIP5 historical simulations (augmented for the period 2006–2012 by RCP4.5 simulations, Section 9.3.2) reveals that 111 out of 114 realizations show a GMST trend over 1998–2012 that is higher than the entire HadCRUT4 trend ensemble…
That is, it is 97.4% of CMIP5 simulations, and they know it, but they did not moderate the confidence of their projections or alter their presentation of figure 1.4 in the SPM in any way except to hide this fact from policy makers, who of course are most unlikely to read, or correctly interpret, paragraph 9.2.2.3 or figure 9.8a.
Oh, and the 97.4% is as of 2012 and does not include the last two years of a continued lack of warming. I suspect that we are out there at 99% at this point. As I said, a proper analysis of figure 9.8a already would produce a p-value for the null hypothesis “The MME mean is a meaningful predictive quantity” of practically zero, under no reasonable circumstances higher than 0.01 and IMO (without doing a full, detailed computation) more likely to be around 0.00001 to 0.000001 once one does the integrals for some sort of Kolmogorov-Smirnov test with some reasonable assumption of autocorrelation and unbiased excursion from a correctly represented mean.
Mentally compare the time integrals of T_{mme} - T_{hadcrut4} for T_{mme} - T_{hadcrut4} > 0 to those of T_{mme} - T_{hadcrut4} for T_{mme} - T_{hadcrut4} < 0. K-S tests determine whether it is plausible that these two curves could be samples of some sort of the same underlying process. Obviously, if they were the total integral of the two should be close to zero. There are other methods one might use to compare them, but they are all going to give the same general result. No, the CMIP5 MME mean is not a good representation of HADCRUT4.
Interestingly, the EMIC simulations (figure 9.8b) in AR5 do much better from 1961 to 2005. They still are not convincing as predictors of the future, however, because across the entire span they run too cool across the critical stretch of 20th century warming from 1920 to 1940, where the temperature change most closely mirrors the reference period. They are certainly a lot more convincing than CMIP5, however, perhaps because some of the EMIC models do actually run cooler than observation even as some run warmer. It would be interesting to compare the systematics of this — from a glance at 9.8b it appears that they manage this magic trick by mixing many models that are far too flat and were too warm in the past that cross over to too cool in the present, with models that were too cool in the past and cross over in the reference period to being too warm in the present. All of them are too smooth and fail to reproduce the excursions of the actual thermal record qualitatively or quantitatively, and have terrible autocorrelation times (they all appear to be heavily smoothed over decadal intervals where the actual climate has substantial variations — zigs and zags up and down — over segments of roughly 5 years), suggesting that the EMICs fail to capture the local climate dynamics that hold the climate to a semi-stable centroid while also failing to correctly locate the centroid in almost all cases. I cannot tell from the figure if there exist models between the two crossover extremes that are close to “just right” — not too hot, not too cold — but if there are, obviously they should be given the greatest weight in any future projection of the climate.
rgb

[GMST is Global Mean Surface (or Sea) temperature? Earlier, you used GAST. Mod]
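A crude, simplified version of the kind of residual test sketched above: block-average the difference between a hypothetical ensemble mean and hypothetical observations into 5-year chunks (to allow for autocorrelation) and apply a two-sided sign test. This is only a sketch with placeholder series, not an analysis of the real figure 9.8a data.

```python
# Sketch: sign test on 5-year blocks of (ensemble mean - observations) residuals.
import math
import numpy as np

rng = np.random.default_rng(3)
years = np.arange(1880, 2014)
obs = 0.005 * (years - years[0]) + rng.normal(0, 0.10, years.size)  # hypothetical observations
mme = 0.007 * (years - years[0]) - 0.05                             # hypothetical warm-biased ensemble mean

residual = mme - obs
block = 5
n_blocks = residual.size // block
block_means = residual[: n_blocks * block].reshape(n_blocks, block).mean(axis=1)

n_pos = int(np.sum(block_means > 0))
# Two-sided sign-test p-value under H0: each block is equally likely to be above or below
k = max(n_pos, n_blocks - n_pos)
p = min(1.0, 2 * sum(math.comb(n_blocks, i) for i in range(k, n_blocks + 1)) / 2 ** n_blocks)

print(f"{n_pos} of {n_blocks} five-year blocks have the ensemble mean above the observations")
print(f"two-sided sign-test p-value: {p:.2g}")
```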

curiousnc
February 10, 2014 2:05 pm

Would someone mind please explaining what the black data line represents?

rgbatduke
February 10, 2014 2:05 pm

Mod, sorry, failed to correctly close a boldface tag again. Please help. I didn’t mean to shout/emphasize the entire latter half of the previous comment.
[But at which point? All should be emphasized! 8<) Mod]

RACookPE1978
Editor
February 10, 2014 2:07 pm

Berényi Péter says:
February 10, 2014 at 1:47 pm
A scientist, with an open mind, will become curious and start looking for mechanisms to explain the unexpected observations. A non-scientist is more likely to close his mind and assume that the observations must be wrong. People start cherry picking the observations that support their preconceptions and dismissing the observations that challenge their preconceptions. This is scientific fraud and the intellectual equivalent of sticking your fingers in your ears and saying: “La la la”.

EVERY honest observer, with an open mind, will become curious and start looking for mechanisms to explain the unexpected observations. EVERY self-proclaimed climate scientist MUST close his mind and assume that the observations must be wrong. ALL politicians, bureaucrats and their laity who need “climate scientists” for their agendas and their religious dogmas WILL start cherry picking the observations that support their preconceptions and dismissing the observations that challenge their preconceptions. Published and propagandized “Climate Science” as it is today IS scientific fraud and IS the intellectual equivalent of sticking your fingers in your ears and saying: “La la la”.

Admin
February 10, 2014 2:11 pm

Phil Jones once admitted in a Climategate email that he wants the world to burn, to vindicate his ego.
http://www.ecowho.com/foia.php?file=1120593115.txt
As you know, I’m not political. If anything, I would like to see the climate change happen, so the science could be proved right, regardless of the consequences. This isn’t being political, it is being selfish.
Prioritising one’s ego above unimaginable pain and suffering on a global scale – draw your own conclusions.

george e. smith
February 10, 2014 2:13 pm

Ah, but Dr. Roy, all of those climate models predict (excuse me, project) that the warming will continue apace till 2030. Maybe longer, if they can continue to get grant funding to keep on modeling way out there, despite Mother Gaia’s refusal to keep up with them.

February 10, 2014 2:22 pm

Steven Mosher says:
February 10, 2014 at 1:01 pm
Blog commenters do not get to tell policy makers what information assists them.
————————————————————————————————
Commenters can certainly speak their mind, though, and hope that their message is heard.
Your example of the Japanese town saved by the building of a flood barrier is a poor example in one respect. There was historical evidence for the lurking danger. That mayor did not make his decisions to build a flood barrier from data derived from models. Not that I presume to know what he actually based his decision on, but there are historical markers set back into the hills that show the height of a past tsunami event. Those inscribed markers were placed there as future warning for the inhabitants of the area, hundreds of years before the Great Tohuko Quake and Tsunami. The fault then lies with the policy makers for not knowing their own history. The policy makers probably relied on models, and of course on the money interests involved with the building of the reactors.

artwest
February 10, 2014 2:22 pm

Paul Pierett says:
February 10, 2014 at 12:42 pm
“How many people know England lost 30,000 elderly to last winter’s cold? There is Socialism for you.”
———————————————
Britain does not have a Socialist government. It has a Conservative-led coalition government. How much the previous New Labour governments were socialist is debatable but in any case ALL parties who have formed, or are likely in the near future to form, a government bought into CAGW wholesale, voted for insane “environmental” policies, and all bear the responsibility.
Throwing around inaccuracies doesn’t help the fight against CAGW.
Guilty parties shouldn’t be let off the hook because they aren’t Socialist either.

Larry Ledwick
February 10, 2014 2:24 pm

rgbatduke says:
February 10, 2014 at 2:04 pm

Larry, it is worse even than that. In Box 9.2 in AR5, I quote:

Thanks for the elaboration!
After reading your comments in the other thread, the light bulb goes on and you know what to look for in that pile of spaghetti. As I commented above, even a naive casual observer can easily see that the model plots in no way correspond to the real world once they understand what they should be looking for.
I wish I had the skills to assist you with that project you outlined, but I do not, so I have to settle for urging people to make graphs and visual aids that would communicate your line of reasoning effectively and more easily show how awful the models are to the casual, non-statistically-educated observer. Most people know they smell a dead rat but have no clue where to look for it.
You have certainly helped open a new avenue of discussion with my pro-AGW friends. I recently tried to point out to one of them that the real data is literally falling out the bottom of even the best-case lower limit of the envelope defined by that assembly of plots. Unfortunately he is one of those folks who thinks that since the guys who made the plots have PhDs, their judgement is far superior to my common-sense observation that even a blind rat would find the cheese occasionally. When all the blind rats are heading away from the cheese, you know something is wrong.
I submit that your line of attack would also be a good basis for the legal presumption of malicious intent and intentional fraud. It is a bit much to presume that a large number of well-educated scholars are all clearly producing nonsense data that violates good statistical practice and the common testing methods used in other branches of science to validate models, without beginning to ponder whether it is intentional or whether they all, without exception, are profoundly incompetent.
You literally have to be very, very lucky to be that bad at projecting future climate, since even a simple persistence model with a bit of noise outscores their projections by a few orders of magnitude.

rgbatduke
February 10, 2014 2:27 pm

Would someone mind please explaining what the black data line represents?
I’m not certain, but it is very probably the so-called MultiModel Ensemble mean, the MME mean. This mean is constructed by a simple arithmetic average of the 36 equally weighted CMIP5 models, without regard to whether or not the model in question has failed a hypothesis test when compared to the real world data, without regard to whether or not the model result being averaged represents 100 or more individual model runs or 10 or fewer model runs (the models are generally run many times for perturbations of their initial conditions and/or parameters, but are not all run the same number of times to generate their “mean behavior” that is averaged with equal weight into the MME mean), and without regard to the fact that whole families of CMIP5 models share actual code and/or are descended from common code “ancestors” and hence could easily share biases or even occult undiscovered numerical errors.
That is, the actual data isn’t independent, it isn’t selected from a common distribution of “valid climate models”, it isn’t equally precise, and it is corrupted by the inclusion of models that produce a “predicted” warming of around 0.5 to 0.6 C over the last 17 years where no warming at all occurred (where any reasonable person might have said “uh-uh, should just leave that one out as it is almost certainly broken as all hell” out of sheer common sense, if not a proper hypothesis test in statistics) but it is then turned into a straight arithmetic average anyway and used as if it is some sort of Gaussian mean.
The MME appears to be the basis for AR5’s general claims for climate sensitivity, and for the “confidence” placed in its various intonings of disaster. Dr. Spencer’s post above is yet another cry in the statistics-ignorant wilderness for some sort of sanity to prevail and for the entire statistical analysis chapter of AR5 to be overturned for the reasons that very chapter explains and then ignores in such a way that they propagate back to a rewritten Summary for Policy Makers that makes it perfectly clear that at this particular moment we have no good idea of what the climate sensitivity really is, what the temperature in 2100 will be assuming doubled CO_2, but that it is likely to be somewhere between 0C higher and 3 C higher, most likely well under 2 C higher. To do any better requires fixing the broken, failed GCMs so that they can (for starters) correctly represent the temperature variations of HADCRUT4 in the interval from 1900 to the present, including both the stretch of strong natural warming in the first half of the 20th century and the last 17 years with little to no warming at all.
Is there anyone that can “doubt” that a model that correctly represents these two critical intervals will a) ascribe, as Roy notes, a lot more warming to natural variation and consequently a lot less to CO_2 — absolutely necessary to fit the 1900 to 1950 interval, for example; and b) drop any sort of collective estimation of climate sensitivity by a factor of 2 or more?
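A small sketch of the weighting issue described above: an equal-weight multi-model mean versus one weighted by the number of runs behind each model’s mean. The numbers are hypothetical, purely to show the mechanics.

```python
# Sketch: equal-weight vs run-count-weighted multi-model means, with made-up numbers.
import numpy as np

# Hypothetical per-model mean warming (deg C over some period) and the number of
# runs behind each of those means
model_means = np.array([1.9, 2.4, 3.1, 1.2, 2.8])
n_runs      = np.array([160,  10,  10, 100,  12])

equal_weight = model_means.mean()
run_weighted = np.average(model_means, weights=n_runs)

print(f"equal-weight multi-model mean: {equal_weight:.2f} C")
print(f"run-weighted multi-model mean: {run_weighted:.2f} C")
# The two can differ noticeably when sparsely sampled models sit far from the
# well-sampled ones, which is part of the objection raised above.
```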

rgbatduke
February 10, 2014 2:33 pm

Dearest Mod,
Please close after the word warming. That hopefully will do it, although pesky tags are easy to miss.
As for GASTA vs GMST, I’m trying to adhere to the terminology used in AR5 for comparison purposes, although I’m not doing it very well. In AR5 they call it the Global Mean Air Surface Temperature (Anomaly), which would be GMASTA or something equally horrid.
Sigh.
rgb

Chris Edwards
February 10, 2014 2:33 pm

Hang on, isn’t the deep ocean at 4.0 C because if it were warmer or cooler it would rise as it got lighter? So the heat ain’t in the deep ocean, is it?

NotTheAussiePhilM
February 10, 2014 2:33 pm

@ richardscourtney says:
– I just think WUTW should refrain from publishing stuff that confuses people who have difficulty understanding a simple graph and which is made by people who clearly have difficulty understanding a simple graph….
LoL!
If WUTW has a purpose, then surely is it to point out the weaknesses of the AGW argument, which isn’t helped by an ‘any old rubbish’ goes mindset, IMO

NotTheAussiePhilM
February 10, 2014 2:33 pm

WUWT that is…

Pat Kelly
February 10, 2014 2:33 pm

Silly me, but haven’t we heard repeatedly that 1998 was the hottest year on record? So then why doesn’t this chart show just that? just wondering…

david dohbro
February 10, 2014 2:35 pm

To summarize this:
1) All models are wrong, but some are useful (George Box)
2) Mistakes were made, but not by me (Carol Tavris and Elliot Aronson)
3) The numbers tell the tale

richardscourtney
February 10, 2014 2:38 pm

NotTheAussiePhilM:
Your post at February 10, 2014 at 2:33 pm misleads by seeming to quote me when it is stating your own words.
I find it difficult to accept that such a misleading layout was an accident when produced by a self-proclaimed superior mind.
Richard

NotTheAussiePhilM
February 10, 2014 2:46 pm

@ richardscourtney
– you’re right
– I apologise profusely for my mistake!
– if I could, I would go back & correct it!

February 10, 2014 2:55 pm

NotTheAussiePhilM says at February 10, 2014 at 1:07 pm…
That post made a valuable point that added to the world’s knowledge.
It showed a new way to lie with graphs.
Many graphs lie, but the two right-wingers did spot a new way to lie with graphs. That needed to be highlighted.
Specifically, the X-axis was tilted up from the horizontal so that upward trends appeared emphasised. That isn’t just bad presentation, as they put the graph next to a globe (a circle) so that the Y-axis “appeared” to be vertical by an optical illusion.
The post showed a new way that people can deceive.
It was worthwhile.

John F. Hultquist
February 10, 2014 2:55 pm

David in Texas says:
February 10, 2014 at 12:51 pm
Ok, I have to ask. Why does the graph begin in 1983, but the label says “(’79-2013)”? Anyone?
Larry Ledwick says:
February 10, 2014 at 12:56 pm

Go to Roy’s site and look at the current “Latest Global Temps” chart. Note where the data are for 1979 through 1983. These are a bit below the 1981-2010 line (black, horizontal). The chart at the top of this post sets the horizontal line at the 1979-’83 average and then shows models and temperature from there. The two charts, and the intended uses thereof, are not the same. The purpose of the one here is to accompany a critique of climate models. If you just want to look at 34 years of temperature data, go to Roy’s site.
I have the chart from Dr. Spencer’s site open. I can take another window and use the top as a straight-edge and put it along the top of the red line (13 month average). The right end of the red line ends about mid-2013. Going back in time, say to mid-2002, there is almost no slope to the temperatures. If you go back more — at mid-1999 then the slope is up, but at mid-1998 to now, the slope is down. Respectively then, we get flat, up, and down for 11, 14, and 15 years (approximately). So, is the temperature going up, down, or sideways?
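A small sketch of the start-year sensitivity described above: fit a least-squares slope to the same monthly anomaly series from several different start dates. The series is a synthetic placeholder with a large early spike, so the printed slopes only illustrate the mechanics, not the actual UAH numbers.

```python
# Sketch: how the fitted trend of one series depends on the chosen start year.
import numpy as np

rng = np.random.default_rng(4)
months = np.arange(1997.0, 2013.5, 1.0 / 12.0)
temps = rng.normal(0.15, 0.08, months.size)            # roughly flat placeholder background
temps[(months >= 1998.0) & (months < 1999.0)] += 0.5   # crude 1998-style spike

for start in (1998.5, 1999.5, 2002.5):
    sel = months >= start
    slope = 10 * np.polyfit(months[sel], temps[sel], 1)[0]   # deg C per decade
    print(f"trend from mid-{int(start)} onward: {slope:+.3f} C/decade")
```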

Gary Hladik
February 10, 2014 3:01 pm

Steven Mosher says (February 10, 2014 at 1:01 pm): “Blog commenters do not get tell policy makers what information assists them.”
Actually they do, but the “policy makers” (PMs) need not listen. The PMs may, in fact, cite ouija board, tarot card, and chicken entrail results to justify the decisions they already prefer.
In fact, that’s basically what they’ve been doing…

February 10, 2014 3:02 pm

@Aussie Phil2.33
So WUWT should not publish any of the consensus climate science?
Or just anything that confuses you?

Gary Hladik
February 10, 2014 3:03 pm

John F. Hultquist says (February 10, 2014 at 2:55 pm): “Respectively then, we get flat, up, and down for 11, 14, and 15 years (approximately). So, is the temperature going up, down, or sideways?”
Yes.
You’re welcome. 🙂

david dohbro
February 10, 2014 3:07 pm

Re: Michael Wonders. Using HadCRUT4 data, the rate of warming from 1976 to 2007 was +0.019 °C/yr. There has been no warming since 2007, likely since 2001, and even since 1997… The rate of the previous ~34-yr warming cycle (1911-1945) was 0.014 °C/yr. Prior to that, GSTA decreased by 0.008 °C/yr from 1879 to 1911. Between 1945 and 1976, GSTAs decreased by 0.002 °C/yr (figures not rounded, to show the similarities). Please note the ~30+ yr periods of the well-documented 60-yr natural cycles, as well as the fact that the warming in the most recent warming cycle was only 0.005 °C/yr more (well within measurement error) than the one prior.
Assuming linear data, applying linear regression to the entire HadCRUT4 data set (1850 to now) produces a slope of 0.005 °C/yr. The IPCC predicts that by 2100 GSTAs will on average be 1.2 to 3.5 °C higher than they are now, based on different CO2 emission scenarios. That would require unabated, continuous (!!) warming of 0.01-0.04 °C/yr starting today. The lower warming rate is about the same as was experienced in the previous two warming periods; nothing unusual in those terms. But it also requires unabated warming starting today, whereas the data record clearly shows 30+ yr periods of alternating warming and cooling, and the Earth is likely now in a cooling period. So the subsequent warming periods would need to become even warmer… In addition, these “needed”* warming rates are 3.0-8.5x higher than the average warming the Earth has experienced over the past 160+ years… Does that make any sense? In all honesty, the numbers suggest not: these predictions don’t make all that much sense. There is no precedent for it, even with CO2 levels now ~120 ppm (40+%!!!) higher than in pre-industrial times… Given the now well-documented “pause in global warming” since the early 2000s, likely lasting until the late 2020s or early 2030s based on known natural cycles, these predictions by the IPCC’s computer models look even more erroneous.
* needed in the sense of getting GSTAs to the levels predicted by the IPCC by 2100
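The per-year rates quoted above are straightforward to check with an ordinary least-squares fit over each sub-period. A minimal sketch, assuming a plain two-column (year, anomaly) file with no header; the file name is an illustrative placeholder, not the official HadCRUT4 download format:

```python
# Minimal check of per-period warming rates via ordinary least squares.
# "hadcrut4_annual.csv" and its (year, anomaly) layout are assumptions.
import numpy as np

def trend_per_year(years, anomalies, start, end):
    """Return the OLS slope (deg C per year) over [start, end] inclusive."""
    mask = (years >= start) & (years <= end)
    slope, _intercept = np.polyfit(years[mask], anomalies[mask], 1)
    return slope

data = np.loadtxt("hadcrut4_annual.csv", delimiter=",")  # columns: year, anomaly
years, anomalies = data[:, 0], data[:, 1]

for start, end in [(1879, 1911), (1911, 1945), (1945, 1976), (1976, 2007), (1850, 2013)]:
    print(f"{start}-{end}: {trend_per_year(years, anomalies, start, end):+.3f} C/yr")
```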

NotTheAussiePhilM
February 10, 2014 3:07 pm

@ John Robertson
– I’m NOTTheAussiePhilM
WUWT shouldn’t publish nonsense, IMHO…
😉

Gail Combs
February 10, 2014 3:10 pm

Paul Pierett says: @ February 10, 2014 at 12:42 pm
The politics and Press are shameful.
>>>>>>>>>>>>>
Yes it is.
You forgot these horrors. These people have even fewer resources than the UK, EU or the USA:
February 10, 2010 – Mongolia: The Disaster You Haven’t Heard Of
tens of thousands of people, and millions of animals, are right now in a daily struggle between life and death, and many have already lost. I’m speaking of our brothers and sisters — two- and four-legged — caught in the most catastrophic winter the country of Mongolia has seen in at least 30 years…. As of this writing, Mongolian and international aid agencies estimate that more than 2 million domestic animals have perished so far in this dzud. Ten to twelve million died in the last disastrous episode ten years ago, and this dzud is regarded as far worse. Some fear that up to 20 million animals — half of Mongolia’s total herd — may succumb before tolerable weather arrives in late May.
And it happened again a short three years later:
February 26, 2013 – Tibetan nomads in Ladakh call out for help, Thousands of livestock perish
And this February in a different part of China
February 9, 2014 China on blizzard alert – Snowstorms kill livestock, disrupt lives
Continuous snowstorms in northwest China’s Xinjiang Uygur Autonomous Region have killed more than 300 livestock
South America is not immune either:
September 2013: More than 25 000 animals killed in southern Peru, Drugs sent to combat illnesses caused by cold and snow – “After their livestock died, I am afraid that people are going to die now if this bitter cold continues,” says reader Argiris Diamantis. “Many people are already sick, 500 kg of medicines is being sent to them.”
October 2013: Chile – One billion dollars damage to fruit crops – Worst cold spell in 80 years hammers Chile fruit crops
October 1 2013: Argentina – 2,200 cattle die in snowstorm

Paul Watkinson
February 10, 2014 3:15 pm

Walter Allensworth:
Your post at February 10, 2014 at 12:34 pm
Further to richardscourtney’s mild criticism of your post I would ask you to consider this theory of mine.
I believe that the reason the deep oceans have temperatures of 4.0 deg. C is that sea water reaches its maximum density at that temperature. The water with the greatest density sinks to the bottom. This in turn means that there is sea water at both higher and LOWER temperatures above that densest base layer, particularly in the polar regions, which have substantial quantities of (sub 4.0 deg. C) water to feed the levels above the base layer, and possibly across all oceans at all latitudes, fed by the ocean currents whose circulation passes near the polar regions.
This colder water is thus nearer the ocean surface and the source of heat, so its warming will take precedence over the warmer water below it, simply because any heat penetrating that far down will meet the colder water first.
I submit that this unconsidered heat sink (the sub 4.0 deg. C water layer) above the base layer is sufficient to absorb all the energy ascribed by warmistas to have entered the deep oceans, and all that will result is an increase in the depth of the 4.0 deg. C base layer as colder water above it warms up to that temperature.
I will undertake to explore this theory by computer modelling if any government funding is available?
(sarc)
Paul Watkinson.

observa
February 10, 2014 3:16 pm

“If the deep ocean ends up averaging 4.1 deg. C, rather than 4.0 deg. C, it won’t really matter.”
Won’t you think of saving poor Nemo, if it’s not the grandkiddies?

Brian H
February 10, 2014 3:16 pm

The deep ocean is not “4.0” or “4.1” °C. That “densest temperature” applies only to fresh water, preferably distilled. Saltwater gets denser “all the way down” to sub-0 temps, and is denser than 4°C freshwater. The very deepest trenches tend to fill with the saltiest, coldest water, streaming off the Antarctic, where surface freezing dumps salt as sea ice forms.

markx
February 10, 2014 3:17 pm

Yep…. you sum it up perfectly…. “global warming” then “climate change” was all about surface warming. To turn around and explain that it has all gone into the oceans is the equivalent of saying, “Sorry, we were wrong about that”.
Such a settled science.

Gail Combs
February 10, 2014 3:21 pm

Steven Mosher says:
February 10, 2014 at 1:01 pm
” …Blog commenters do not get tell policy makers what information assists them.
>>>>>>>>>>>>>>>>>>>>
Yeah, but we DO get to kick their a$$es out of office, and if the government continues to insist on ignoring reality and what is best for the country, there is always Defenestration.

Chad Wozniak
February 10, 2014 3:21 pm

What I see missing from this discussion is the realization that the modelers actually, really, totally, unquestioningly, sincerely (if that latter word attaches to their sort of mindset) believe their models are right and the observations are wrong. For them this is not a sarcastic joke – it’s their true belief, however convoluted and irrational and delusional it may be. Their ideology, which they are forbidden by its first and overarching principle to question, says their models are right, so there is no possibility in their minds of the models being wrong – only that which does not comport with the models is wrong. They will most assuredly cling to this belief no matter how much contrary, physical, upside-the-head evidence comes along to refute it. They’ll still claim global warming is the cause when the next ice age descends upon their heads and scrapes away the ivory towers they inhabit. And obviously they don’t mind at all that 33,000 people died from hypothermia in 2013 in the UK thanks to the carbon policies they promote.

garymount
February 10, 2014 3:32 pm

Walter Allensworth says:
February 10, 2014 at 12:34 pm
“If the deep ocean ends up averaging 4.1 deg. C, rather than 4.0 deg. C, it won’t really matter.”
First, I’m a CAGW skeptic, so you’re singing to the choir here a little, but when I see things like the statement above in quotes, I cringe.
REALLY?
How do you know it won’t really matter? Have you done a dynamic energy balance study? Can you cite a peer-reviewed reference that shows we can dump 10^22 joules of energy every year into the ocean and it won’t really matter? It won’t change circulation? It won’t cause long-term adverse effects in the thermohaline cycle?
I would love to see this reference and be convinced, because it would be a great way to defuse the whole CAGW meme.

– – –
Calculations of energy from temperature should be done in Kelvin:
4.0 C is 277.15 K, and 4.1 C is 277.25 K.
To summarize:
a temperature increase from 277.15 K to 277.25 K.
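As a back-of-envelope check of the energy figures being traded here (noting that the 273.15 offset cancels when you take a temperature difference), a rough sketch using commonly cited round numbers for ocean mass and seawater heat capacity; both values are assumptions for illustration, not measurements:

```python
# Energy implied by warming the whole ocean by 0.1 K, and how long the
# ~1e22 J/yr figure quoted above would take to supply it.
OCEAN_MASS_KG = 1.4e21          # rough mass of the global ocean (assumption)
CP_SEAWATER = 3990.0            # J per kg per K, approximate (assumption)
DELTA_T = 277.25 - 277.15       # K, i.e. 4.1 C minus 4.0 C; same as 0.1 C

energy_joules = OCEAN_MASS_KG * CP_SEAWATER * DELTA_T
uptake_per_year = 1e22          # J/yr, the figure quoted in the comment above

print(f"Energy for 0.1 K of whole-ocean warming: {energy_joules:.1e} J")
print(f"Years at ~1e22 J/yr: {energy_joules / uptake_per_year:.0f}")
```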

Brian H
February 10, 2014 3:32 pm

Chad;
That 33K hypothermia death figure should be used with caution.
I believe that was total UK deaths in the cold months, of which about 5-6K were maybe “excess deaths”.
Further, residents of “temperate” (non-tropical) climes tend to have better coping skills and tolerances for temperature swings. Most hypothermic deaths occur in sub-tropical areas hit with unexpected cold snaps. It’s a lot harder to kill a Brit or German with cold than an Algerian.

Robertv
February 10, 2014 3:38 pm

If only climate were the problem. We are governed by power addicts. Agenda 21, Big Brother. We all know it is like that, but most don’t want to think or talk about it. Your government would never do things like that? Think again. Those who govern hate freedom, because you can’t govern free people. Slavery was never abolished; it just changed its appearance.

Phil's Dad
February 10, 2014 3:38 pm

Steven Mosher says: at February 10, 2014 at 1:01 pm “Suppose I am a policy maker.”
I am supposing from your comments that you are not.
There is a huge difference between using information you cannot be sure of and using information that has been proved wrong. You conflate the two in your lecture.
As for your conclusion – “Blog commenters do not get (to) tell policy makers what information assists them.” – speaking as a policy maker I am open to all suggestions via all forms of communication. Certainly “97% of scientists” and other so-called experts do not get a monopoly of access based on their record to date.
Don’t be so dismissive Mr Mosher. One person, one voice.

February 10, 2014 3:39 pm

Steven Mosher says:
February 10, 2014 at 1:01 pm
“Suppose I am a policy maker.”
I tried, but was unable to suspend disbelief.
“Policy making is not science. Policy making can be guided by science or informed by science, but in the end it not making hypotheses and predictions. It’s making decisions based on many factors: science, economics, self interest, lobbying, principles, constituents interests, bribes, etc”
Ok, that’s not bad. Then you mess it up with
“Blog commenters do not get tell policy makers what information assists them.”
Reminds me of some advice I was given before I became a policy maker:
“What you need to understand about this game is that there a number of players, but thousands of referees”.

Janice Moore
February 10, 2014 3:46 pm

Dear Professor Robert G. Brown,
Thank you for your fine lectures today at 1:32pm and 2:04pm (even though in the latter one you yelled your head off at us through the entire last half (just kidding — I think your request for that close-bold got lost between the change of Mod shifts — and, hey, at least that kept us awake in our first class after lunch). Heh, heh, no doubt, God made that Mod-shift timing happen, for, anyone with your high intelligence needs SOMETHING to keep him or her humble —
#(:))…. yeah, I know you don’t think God exists, that’s one reason why I said it, lol ;).
Anyway, your students are blessed. Thank you for writing so that I could understand what you were saying.
Your grateful student,
Janice
P.S. With all that brainpower, you are quite verbose… . Sometimes, that is NOT a good thing. Here is some advice for you for your Valentine’s Day date this Friday (if you want her to think you are a wonderful fellow, that is — if you don’t care, meh, go ahead and talk!).
Have a lovely time and remember:

Janice Moore
February 10, 2014 3:51 pm

“There is a huge difference between using information you cannot be sure of and using information that has been proved wrong“… . (Phil’s Dad at 3:38pm today)
Excellent point, thus, repeated with emphasis.
Phil, you have a dad to be proud of, young man. #(:))

Phil's Dad
February 10, 2014 4:07 pm

Ta. Just trying to avoid Gail’s defenestration. 😉

garymount
February 10, 2014 4:17 pm

I would like to make a confession… I’ve been cheating. I’ve been using Windows Live Writer to compose my comments that contain block quotes, bolding and other decorations. Yes, I’ve been using WYSIWYG software. Look, I’m a computer scientist, and just like how the best barber has the worst haircut, I just don’t do HTML well; I’m a C++/C# app developer, not a web guy or a script kiddie.
Warning: if you use Writer, it will not properly translate your jokes to proper HTML. In my experience it somehow flattens my jokes after submitting :-).
[The mods will refrain from fattening them back up. Mod]

February 10, 2014 4:34 pm

None of the modeling tools can be predictive tools. The Navier-Stokes differential equations describe fluid flow with changes in temperature and density. They are nonlinear and chaotic, with sensitive dependence on initial conditions. Because of that, no finite set of past states is sufficient to predict future states. This has been known since Edward Lorenz’s paper “Deterministic Nonperiodic Flow” in 1963. A butterfly can flap its wings in Beijing and change the weather in Omaha. That means you can never predict the weather (or climate) in Omaha unless your measurement grid is infinite. When anyone attempts to predict the future from past states, first ask “Is his record of past states finite?” If it is, then he is incompetent or a fraud. See Michael Mann. Lather, rinse, repeat.

Reply to  Don Meaker
February 10, 2014 5:00 pm

Dear Don,
The “chaotic” aspect you talk about means that chaotic systems such as weather are not predictable at all, as a matter of principle, beyond relatively short-term horizons. It is not a matter of finite data; it is simply that small changes in such systems create patterns of change over time that are inherently unpredictable. We have a sense of this, now, at the quantum level, and realize that we can work only in probabilities at best — and that’s for the very “simple” world of particle position.
At the global climate level, gross guesses can be made, of course, and that’s what models are trying to do. But a model that gets it about right — with a very low sensitivity of CO2 — would be rejected by the catastrophists before it debuts on the world stage.
===|==============/ Keith DeHavelle

geo
February 10, 2014 4:37 pm

Has anyone examined the 4 models (two higher, two lower) closest to the obs to see if they are actually doing something right worth noting? Yes, it could be “blind man’s walk” or “Texas Sharpshooter” on those four. . .but then again, perhaps not.

February 10, 2014 4:39 pm

One odd thing about the chaotic nature of weather and climate is that a tenth of a degree can be significant, in that it gives you a different trajectory over time. The measured temperature difference increases until it gets to the outside of the envelope, and then dives back into the middle. Two trajectories initially differing by a tenth of a degree will have wildly different results. You just don’t know which one will be warmer or colder at an arbitrary time in the future. It is Chaos!
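A minimal illustration of that divergence, using the classic Lorenz (1963) system mentioned upthread. The time step, run length and the 0.1 offset in x are arbitrary choices for the demonstration, and simple Euler integration is used purely for illustration:

```python
# Two Lorenz-system trajectories that start 0.1 apart end up wildly different.
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz (1963) equations."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])

a = np.array([1.0, 1.0, 1.0])
b = np.array([1.1, 1.0, 1.0])   # differs by 0.1 in x only

for step in range(3000):
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 1000 == 0:
        print(f"step {step:4d}  separation = {np.linalg.norm(a - b):.3f}")
```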

garymount
February 10, 2014 4:57 pm

“A lot of work.” That is what Colonel Young often says after having a conversation with Dr. Rush in the show Stargate Universe.
This is what I am reminded of every time Mosher makes a comment on WUWT.

Werner Brozek
February 10, 2014 5:13 pm

Walter Allensworth says:
February 10, 2014 at 12:34 pm
“If the deep ocean ends up averaging 4.1 deg. C, rather than 4.0 deg. C, it won’t really matter.”
How do you know it won’t really matter? Have you done a dynamic energy balance study?

We know that fresh water is densest at 3.98 C, but this does not apply to salt water. It gets denser right to its freezing point at -1.94 C. See:
http://www.windows2universe.org/earth/Water/density.html&edu=high
So even if we had a small layer at 4.1 C for a split second at a depth of 1000 m, the extra heat would quickly dissipate so that all of the nearby ocean would have the same temperature. Sea water is denser than fresh water at the same temperature; it is possible that very warm sea water is less dense than cold fresh water, but a change of 0.1 C will not be enough to make any difference. If it were, ocean levels would be rising quickly.

Clay Marley
February 10, 2014 5:21 pm

I have to wonder whether, behind it all, these models can be reduced to nothing more than a random walk with a constant forcing. At least, I can program up a random walk and reproduce these 90 model runs almost exactly. Take white noise, yearly data, bias the mean by 0.03 (CO2 forcing), set sigma to 0.03, then integrate. All you have to do is reduce the forcing in the mid-late 90’s to account for Pinatubo and you can reproduce the entire set almost exactly. Except for Pinatubo it’s a straight-line fit; they aren’t accounting for any periodic/cyclical effects at all.
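For anyone who wants to try the experiment described above, a minimal sketch. The drift and sigma are the values quoted in the comment; the 35-year length, the 5-year re-baselining and the random seed are arbitrary illustrative choices, and no Pinatubo adjustment is included:

```python
# 90 synthetic "model runs": integrate white noise with a constant drift.
import numpy as np

rng = np.random.default_rng(0)
n_runs, n_years = 90, 35            # roughly 1979-2013
drift, sigma = 0.03, 0.03           # deg C per year, as quoted above

steps = drift + sigma * rng.standard_normal((n_runs, n_years))
runs = np.cumsum(steps, axis=1)                       # integrate the forced noise
runs -= runs[:, :5].mean(axis=1, keepdims=True)       # re-baseline to first 5 years

ensemble_mean = runs.mean(axis=0)
print("Ensemble-mean trend (C/yr):", np.polyfit(np.arange(n_years), ensemble_mean, 1)[0])
```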

February 10, 2014 5:22 pm

Pat Kelly says:
February 10, 2014 at 2:33 pm
Silly me, but haven’t we heard repeatedly that 1998 was the hottest year on record? So then why doesn’t this chart show just that? just wondering…
This is a good question. The answer is that a 5-year mean was plotted, and the average of the last 5 years is higher on UAH than the average from 1996 to 2000. See the two graphs below. One is drawn with a mean of 12 months and the other with a mean of 60 months. Note the difference at 1998, and note how the La Ninas on either side of 1998 drag 1998 down. That is why I do not agree with those who say we are cherry picking if we go before 1998 in a trend.
http://www.woodfortrees.org/plot/uah/from:1979/mean:12/plot/uah/from:1979/mean:60
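A minimal sketch of the smoothing effect being described: a centred 60-month mean averages the 1998 El Nino peak together with the La Ninas on either side, so the spike is flattened. The file name and two-column monthly layout are placeholders, not the official UAH format:

```python
# Compare a 12-month and a 60-month centred running mean of monthly anomalies.
import numpy as np

def running_mean(x, window):
    """Centred moving average; the ends are trimmed, as woodfortrees does."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="valid")

anoms = np.loadtxt("uah_monthly.csv", delimiter=",")[:, 1]  # monthly anomalies (assumed layout)
print("Max of 12-month mean:", running_mean(anoms, 12).max())
print("Max of 60-month mean:", running_mean(anoms, 60).max())  # noticeably lower
```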

Janice Moore
February 10, 2014 5:30 pm

From one of the “others…”:
Re: “… climate may well be as predictable as are those orbits.” (V.P. at 5:02pm)
That may be.
Prove it.
— no, no, (chuckle) don’t make bald-faced assertions, PROVE it with data and evidence for a causal mechanism.
Note: Saying, “I can prove it, just buy my book,” is not proving anything except your gift for salespersonship. Yes, yes, I remember you said that you will be giving away a lot of copies and that your mission is more important than your sales volume. But, come now, dear V.P., you are still pushing your book… .
Oh, and “But, I know that it is true” won’t work, either.
Re: “No one can answer my questions.” So WHAT? I’ll bet I could come up with hundreds of questions that no one can answer. And the mere fact that they are not answerable would prove exactly: nothing except the obvious — people are not omniscient…. oh, you know somebody who is…. That wouldn’t be somebody who will soon be publishing a book, would it?
V.P. — you are so much fun. Unlike some of the other super-sincere believers in hocus-pocus ideas around here, I have no qualms about teasing you, thus, I do not refrain as I do with them, for I can see that your arrogance is an impenetrable shield to any hurt I might risk causing. So, yeeeeehaw, thank you, Mr. P.!!! This is fun.
Now, tell us some more about that book.

Luke Warmist
February 10, 2014 5:30 pm

rgbatduke says:
February 10, 2014 at 1:32 pm
……………….
I always look forward to your posts. Clear, concise, and well reasoned. Your students are indeed fortunate.
Thank you.

garymount
February 10, 2014 5:37 pm

Dear Janice. Do you realize how much of my valuable time has been wasted with some of your links? Do you not realize the addictive power Happy Hamster Dance videos have? When I finally snapped out of my happy hamster dance induced transitory state, I realized that I had just spent the previous couple of hours watching blooper out-takes from Star trek Enterprise. How I went from hamsters to Tribbles I’ll never know. 🙂 “where they will be no Tribble at all”, still my favorite line.
Janice says: “ Mount — I have a software engineer brother (thus, I realize how highly intelligent you are). I”
Can I email your comment to friends and acquaintances of mine ?
V.P. : I’m adding you to my list of “a lot of work” follow up.

eyesonu
February 10, 2014 5:45 pm

Dr. Robert Brown (rgbatduke) I don’t know how to express my appreciation to you for your participation in the discussions here at WUWT other than to say Thank You.
I am here by choice and only have knowledge to gain. That is my only motivation. I follow your comments very closely and often re-read carefully to make sure I haven’t missed anything. You provide the “brain candy” that I am here for. I’m sure many others feel the same way.
So again let me say Thank You.

RoHa
February 10, 2014 6:05 pm

“If the deep ocean ends up averaging 4.1 deg. C, rather than 4.0 deg. C, it won’t really matter.”
You say that now, but when it wakes up Godzilla, you’ll change your tune.

TimFritz
February 10, 2014 6:06 pm

I know I am late to this thread, but I have to respond to Steve Mosher’s comments about policy makers. Most policy makers today seek to find the “science” which supports their political agenda and objectives. They have no consideration other than winning a political battle. This means that there can be no science based on models; it has to be based on empirical data that has been verified and validated. Your statement that policy makers are free to choose the model they desire is probably the most telling statement ever made by a person purporting to be a scientist. Shame on you.

Gail Combs
February 10, 2014 6:13 pm

Phil’s Dad says: @ February 10, 2014 at 4:07 pm
Ta. Just trying to avoid Gail’s defenestration. 😉
>>>>>>>>>>>>>
Just pass that advice on to other policy makers. please
(Now I have to clean off my monitor again)

February 10, 2014 6:58 pm

Policy makers are in a hurry and tend to follow the loudest person with the most defined argument. They don’t have the time or inclination to dig for the truth. Just turn on C-SPAN and watch the congressional committees at work. A group of policy makers has an IQ lower than that of the lowest member of the group.
The climate science and climate policy makers were done with their work before blog writers had any chance for input. We are merely pointing out their errors and lies. pg

michael hart
February 10, 2014 6:58 pm

michaelwonders says:
February 10, 2014 at 1:08 pm
“I appreciate the honestly of this graph. Though many have said there has been no increase in temperature in the last 15 years, this graph actually shows differently, no?”

Michael, I think Dr. Spencer may have ‘smoothed’, or otherwise processed, the observed data to some extent for presentation purposes. For example, if you plot the same UAH satellite data (for which he is the lead scientist) with a 12-month or 60-month smoothing, you can get quite different-looking graphs. In this way, the data from before 15 years ago will visually appear to influence the more recent data. Thus:
http://www.woodfortrees.org/plot/uah/mean:60/plot/uah/mean:12
The alternative satellite data (RSS) shows a cooler recent trend than UAH.
You’d need to ask him exactly what he did, but Dr Spencer clearly doesn’t feel much pressure to present his data with a blatant bias towards his expressed opinions, unlike many of his critics.

Werner Brozek
February 10, 2014 7:00 pm

RoHa says:
February 10, 2014 at 6:05 pm
“If the deep ocean ends up averaging 4.1 deg. C, rather than 4.0 deg. C, it won’t really matter.” You say that now, but when it wakes up Godzilla, you’ll change your tune.
How long will it take to make a difference? The 0.1 C took 60 years, assuming the numbers are right. So to raise the ocean to the average air temperature of 15 C would take over 6000 years. There are not that many hydrocarbons for us to do that. Besides, technology will advance. Keep in mind that the Stone Age did not end because the Earth ran out of stones.
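The arithmetic behind that estimate, taken at face value (the 0.1 C per 60 years figure is the assumption quoted above, not a measured rate):

```python
# If 0.1 C of ocean warming took ~60 years, warming the ocean from 4 C to a
# ~15 C average surface air temperature at the same rate would take:
warming_needed = 15.0 - 4.0          # deg C
rate = 0.1 / 60.0                    # deg C per year, as assumed above
print(f"{warming_needed / rate:.0f} years")   # -> 6600 years
```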

February 10, 2014 7:08 pm

michaelwonders says:
February 10, 2014 at 1:08 pm
I appreciate the honestly of this graph. Though many have said there has been no increase in temperature in the last 15 years, this graph actually shows differently, no?
————————————————
I’m no expert, Michael, but my guess is that it is because the graph plots the global temperature average as a running 5-year mean, which is not the same thing as a plot of the 15-year global annual temperature trend line.

OssQss
February 10, 2014 7:08 pm

In defense of Mosher, he is on target. No question…..
However, he also exposes a serious issue of its own.
Policy makers have to tune in, if you will.
They have been subjected to so much officially funded, slanted science for so long that reality-versus-model comparisons are a big blow to such policy. They know where their bread is buttered.
They also know when the cards of “fact” are stacked against them.
Just sayin’, you gotta wonder what folks like Mosher and Stokes see standing on their opposite shoulders and who they listen to. Devils and angels look alike sometimes…….

OssQss
February 10, 2014 7:23 pm

My compliments to Dr. Spencer!
I almost forgot what I wanted to post after reading the comments. My Bad
Thank you for doing what you do!
It is appreciated and is too often left unsaid……

Roy Tucker
February 10, 2014 7:36 pm

The Scientific Method
1. Observe a phenomenon carefully.
2. Develop a hypothesis that possibly explains the phenomenon.
3. Perform a test in an attempt to disprove or invalidate the hypothesis. If the hypothesis is disproven, return to steps 1 and 2.
4. A hypothesis that stubbornly refuses to be invalidated may be correct. Continue testing.
The Scientific Computer Modeling Method
1. Observe a phenomenon carefully.
2. Develop a computer model that mimics the behavior of the phenomenon.
3. Select observations that conform to the model predictions and dismiss observations as of inadequate quality that conflict with the computer model.
4. In instances where all of the observations conflict with the model, “refine” the model with fudge factors to give a better match with pesky facts. Assert that these factors reveal fundamental processes previously unknown in association with the phenomenon. Under no circumstances willingly reveal your complete data sets, methods, or computer codes.
5. Upon achieving a model of incomprehensible complexity that still somewhat resembles the phenomenon, begin to issue to the popular media dire predictions of catastrophe that will occur as far in the future as possible, at least beyond your professional lifetime.
6. Continue to “refine” the model in order to maximize funding and the awarding of Nobel Prizes.
7. Dismiss as unqualified, ignorant, and conspiracy theorists all who offer criticisms of the model.
Repeat steps 3 through 7 indefinitely.

February 10, 2014 7:42 pm

michaelwonders says:
February 10, 2014 at 1:08 pm
Though many have said there has been no increase in temperature in the last 15 years, this graph actually shows differently, no?
When that statement was made, it applied from 1998 to 2012 on HadCRUT4 for statistically significant warming. Now it would be 16 years, from 1998 to 2013. In general, there are two additional things that need to be known: which data set is used, and whether we are talking about no warming or no statistically significant warming. See my latest post here for the latest numbers for a number of data sets:
http://wattsupwiththat.com/2014/01/25/another-year-another-nail-in-the-cagw-coffin-now-includes-december-data/
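A minimal sketch of what "statistically significant warming" usually means in these exchanges: fit a trend and ask whether its approximate 95% interval includes zero. This simple version ignores autocorrelation, which widens the real interval, and the file name and two-column layout are placeholders, not an official product:

```python
# Trend and approximate 95% interval for a chosen period of monthly anomalies.
import numpy as np
from scipy import stats

data = np.loadtxt("hadcrut4_monthly.csv", delimiter=",")  # columns: decimal year, anomaly (assumed)
t, y = data[:, 0], data[:, 1]
mask = (t >= 1998) & (t < 2014)

res = stats.linregress(t[mask], y[mask])
half_width = 1.96 * res.stderr            # normal approximation, no autocorrelation correction
print(f"Trend: {res.slope:+.4f} +/- {half_width:.4f} C/yr")
print("Statistically significant at ~95%?", abs(res.slope) > half_width)
```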

Mac the Knife
February 10, 2014 7:55 pm

richardscourtney says:
February 10, 2014 at 1:15 pm
Ole – El Torero!
A gracefully artful sweep of the linguistic cape….

Janice Moore
February 10, 2014 7:59 pm

Mr. Gary Mount, of course you may quote me on that, and LOUDLY, too! And I’ll even add something to it here: “Gary Mount — I have a software engineer brother (thus, I realize how highly intelligent you are). In fact, I’ve known several software engineers and they are ALL not only super-intelligent, but their intelligence is broad as well as deep. As a class, they are among the finest thinkers I know, combining impeccable global logic and analogizing with mastery of more linear, technical, subject matter. They also, with only a few exceptions (and all but a few of these make up for their social obtuseness with their adorably geeky personalities), are remarkably helpful, kindhearted, FUN, people.
How’s that? #(:))
“WASTE” your time?!!! I — beg — your — pardon. lol — glad you had fun (I think?) with the Hampster Dance Song — I LOVE IT. “Yeeeeeeehaww!” Makes me happy every time I hear it.
If you can stand to “waste” some more of your time, here is the…
Best Dance Song — Evah! — “Sing, Sing, Sing” (Benny Goodman)
(2nd is “Mony, Mony” by Billy Idol, but that’s just for the music; lyrics are too raunchy to post — I just try to ignore them and DANCE!!!)

Hint (to prevent another Star Trek, oh, brother!, disaster): Play the above tune the next time you have to sweep. You will sweep faster (which will make up for all the time “wasted” dancing with the broom) and it will keep you occupied so you won’t load and watch The Trouble with Tribbles (or whatever, lol).
#(:))
**************************************
Speaking of wasting time, Mr. V. P., I think I’d better refrain from any further repartee, delightful as it has been; I’ve already wasted enough of the other commenters’ time today. Thanks for getting back to me, though. Good luck with that book!

Janice Moore
February 10, 2014 8:31 pm

To allay any concerns about sexism, I called V. P. “Mr.” based on the research by a commenter recently that strongly indicated that V.P. is a “Mr. ___ ” (I’ve forgotten the name). V.P.’s response to this was not to deny it but to say something like, “You would be better served sticking to the main topic, here… .”

Janice Moore
February 10, 2014 9:08 pm

Okay, this thread is about kaput, so, V.P., I will answer your question (of 6:42pm):
No.

RoHa
February 10, 2014 9:11 pm

@ Werner Brozek
“How long will it take to make a difference?”
Not long, I would imagine. Godzilla seems pretty sensitive to environmental changes.
http://www.imdb.com/title/tt0067148
“Besides, technology will advance. Keep in mind that the stone age did not end because Earth ran out of stones.”
I should hope so. Throwing stones isn’t going to be much use against a 100 meter tall dinosaur with radioactive breath.

February 10, 2014 10:58 pm

The person who gets to decide if a wrong model is still useful is NOT a blog commenter.
The person who gets to decide is a policy maker.
Suppose I am a policy maker. Policy making is not science. Policy making can be guided by science or informed by science, but in the end it not making hypotheses and predictions.
It’s making decisions based on many factors: science, economics, self interest, lobbying, principles, constituents interests, bribes, etc
As a policy maker I am well within my rights to look at model that is biased high and STILL USE IT

Steven
The policy maker seeks input from “experts” as part of the many variables of public policy. There are politicians (Albert Gore, for example) who will take an expert opinion and use it as a bludgeon against the body politic as a means to gain power. This is no different from public policy makers using studies about smoking and cancer as a means to power by regulating and suing tobacco companies (and getting on television to proclaim how concerned they are about the public). These are called appeals to authority, and that appeal is used as the basis for their crusade.
Thus any public policy maker who uses wrong science as a basis for implementing public policy is building a house on sand. This is especially the case when it becomes obvious that the authority was wrong. It erodes public confidence in both science AND public policy. It gets worse as the politician does not want to be seen as wrong and will persist in defending (and voting for) stupid policy long after everyone in the real world has seen its failure (ACA as another example).
This is where we are at today. AGW was, and still is, being used as a bludgeon by the leftist hippy generation as a means to help force the deindustrialization of the world and plant flowers, build solar panels, wind turbines and biofuels (another stupid public policy driven by an appeal to authority). Since the appeal to authority was to NASA and climate scientists, it was especially pernicious.
So we have a double whammy: public policy built on an appeal to authority that was wrong, and then a further wrong solution fed to the climate scientists by the politicians, whereupon they gleefully spouted solar panels and wind turbines as their hippy-generation masters wanted. We have wasted half a generation now on this stupid policy while not investing in the future power sources that are needed to support a world of nine billion humans.
A politician may have a right to use a wrong appeal to authority, but the consequences both near and far term may be dire to us all.

February 10, 2014 11:57 pm

M Courtney says:
February 10, 2014 at 2:55 pm
Many graphs lie but the two right-wingers did spot a new way to lie with graphs. That needed to be highlighted.
————————
That was a bit ironic, though, in the way the slanted zero line is first noticed. The main narrator carries on for half of the video without taking note of the mispositioned zero line; then his seated companion brings attention to it. I found that a bit amusing. I would have started the presentation by first pointing to that discrepancy. The two of them could have used more rehearsing; the flow of their presentation was somewhat stiff and disjointed to my mind.

Bill from Nevada
February 11, 2014 12:01 am

[snip – more slayers junk – mod]

February 11, 2014 12:02 am

Janice Moore says:
February 10, 2014 at 5:12 pm
Full moon coming up. Will that affect your calculations?
———————————————————————-
It usually affects mine!!!

February 11, 2014 12:21 am

Janice Moore says:
February 10, 2014 at 5:30 pm
—————————————-
Someone else might be feeling a bit of the moon!
Funny, I was also influenced to make a comment on this new theorist….http://wattsupwiththat.com/2014/02/06/satellites-show-no-global-warming-for-17-years-5-months/#comment-1564553

BruceC
February 11, 2014 1:34 am

OT, but in response to Janice Moore @ 7:59 (Feb, 10)
Great clip and I love Benny Goodman, especially with Gene Krupa on the drums. But the question remains, who was better, Gene Krupa or Buddy Rich (with the Andrews Sisters, Artie Shaw, Harry James, to name but a few)?
Enjoy;

BruceC
February 11, 2014 1:37 am

Oops. Seems to be a malfunction here (like climate models).
Try again.

BruceC
February 11, 2014 1:40 am

Forget that.
Youtube Buddy Rich stick trick solo performance. Hope that works.

Bill from Nevada
February 11, 2014 2:14 am

The people who tried to hijack science with this entire voodoo load of crap – it’s not even science, it’s a hodgepodge of schemes to make global temperatures look warmer, and it’s well known how it happens: willful, public-employee fraud –
have looted scientific reality into dogma, ginned up in the government “press release” alarm industry’s feeding ladles.
There’s no “science” when people can’t predict which way a thermometer will go.
No matter how many public employees or alarm-industry hacks say there is.
If there were, they wouldn’t lock up like deer in headlights at the mention that you know which way a thermometer moves and that you’ll take the Pepsi challenge with their voodoo on the spot.
============================
Roy Tucker says:
February 10, 2014 at 7:36 pm

climateismydj
February 11, 2014 2:32 am

Looks like my post last night didn’t make it through the WUWT Hypocrisy Filter. Shame, but not surprising.

dikranmarsupial
February 11, 2014 2:35 am

Do the models all agree with each other and with the observations exactly in 1983? No, so why plot them like that? Perhaps because if you baseline the models and the observations properly, the result looks like this:
http://www.realclimate.org/images/model122.jpg
Roy may be “growing weary of the variety of emotional, misleading, and policy-useless statements like “most warming since the 1950s is human caused””, but does it really help to replace them with rhetorical, misleading and policy-useless statements, such as “95% of Climate Models Agree: The Observations Must be Wrong”, based on a misleading plot of the model projections? No.
Note, I don’t think anybody is claiming that the observations are wrong, so that is a straw man. While they are not perfect, e.g. Arctic coverage, the apparent hiatus is well explained by processes such as ENSO.
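A minimal sketch of the baselining point being made here: express each series as anomalies relative to its own mean over a common reference period, rather than pinning everything to a single start year. The file names, array shapes and the 1981-2010 window are illustrative assumptions, not the procedure used for any particular published figure:

```python
# Re-baseline observations and model runs to a common reference period.
import numpy as np

def rebaseline(series, years, ref_start=1981, ref_end=2010):
    """Anomalies relative to the series' own mean over the reference period."""
    ref = (years >= ref_start) & (years <= ref_end)
    return series - series[..., ref].mean(axis=-1, keepdims=True)

years = np.arange(1979, 2014)
obs = np.loadtxt("hadcrut4_annual_1979_2013.csv")      # shape (35,)       - assumed file
models = np.load("cmip5_annual_1979_2013.npy")         # shape (n_runs, 35) - assumed file

obs_anom = rebaseline(obs, years)
model_anom = rebaseline(models, years)
print("Obs minus model-mean, last 5 years:",
      obs_anom[-5:].mean() - model_anom[:, -5:].mean())
```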

February 11, 2014 2:48 am

My guess is that climateismydj is Katy Duke (Twitter @katyduke).
She has pointed to a post on hotwhopper.com (have to love that blog name! — http://blog.hotwhopper.com/2014/02/roy-spencers-latest-deceit-and-deception.html ) which indicates that the graph is wrong because it “shift[s] the CMIP5 charts up by around 0.3 degrees”.
Any comments on this critique of the work? I’d like to understand what’s going on here — hot air or a justified comment. Thanks!

Norman Woods
February 11, 2014 2:55 am

Truer words have never been said. Barking defiance at the world from a pulpit isn’t science. It’s evidence of the kind of activity well known for hiding facts and assassinating character.
—–
Roy Tucker says:
February 10, 2014 at 7:36 pm

John Deere Green
February 11, 2014 4:35 am

This is what happens when government employees and media trough-feeders who simply want to draw crowds destroy real science.

John Deere Green
February 11, 2014 4:39 am

Everybody knows they’re wrong. What anyone you know who is a government employee or a media alarm predator says is irrelevant.
dikranmarsupial says:
February 11, 2014 at 2:35 am

Non Nomen
February 11, 2014 6:05 am

Prof Hans von Storch has something to contribute as well:
http://www.hvonstorch.de/klima/pdf/storch_et_al_recenttrends.pdf
In brief: forget the models.

beng
February 11, 2014 6:18 am

***
garymount says:
February 10, 2014 at 5:37 pm
I realized that I had just spent the previous couple of hours watching blooper out-takes from Star trek Enterprise. How I went from hamsters to Tribbles I’ll never know. 🙂 “where they will be no Tribble at all”, still my favorite line.
***
Or another one, when Kirk & Spock went back in time thru the “Guardian”. At the episode’s end, Kirk muttered, at least for 1968, a risky “Let’s get the hell out of here”.

ferdberple
February 11, 2014 6:26 am

Paul Pierett says:
February 10, 2014 at 12:42 pm
Instead of embracing what is going on, the media and the US government are in a goose-step march right down the road to destruction with the hypothesis of Man-Made Global Warming.
==============
The Soviet Union did the same thing with their economy in the first half of the 20th century. It made no difference to the ruling elite. What’s a million peasants more or less.

ferdberple
February 11, 2014 6:50 am

richardscourtney says:
February 10, 2014 at 1:01 pm
In other words, “If the deep ocean ends up averaging 4.1 deg. C, rather than 4.0 deg. C, it won’t really matter” because it can’t.
=========
I expect Dr. Spencer was being generous. I’d also like to see a calculation, because I suspect a 0.1 C change in deep ocean temps represents something like a 10 C or more change in surface air temps over a period of hundreds or thousands of years.

ferdberple
February 11, 2014 6:58 am

michaelwonders says:
February 10, 2014 at 1:08 pm
I mention this out of wonderment and not out of being contrary. I’d love an explanation of how people are saying the temp hasn’t changed and how we shouldn’t be worried about this continuing. Thank you!
==========
Because when you average out the long-term temperature increase, it is no different from the increase over the past 300 or so years since the Little Ice Age, which points to it being due to natural causes. And it would be foolish to try to prevent natural climate change, because it is beyond our control.
The worry was that natural variability in climate was low, and thus the increase in temperature in the late 20th century must be due to humans and CO2. However, since CO2 is increasing rapidly but temperatures are not, natural variability must be much higher than previously thought, which means that there is little we can do about climate change even if we wanted to.

February 11, 2014 6:59 am

I love how ‘Science’ has been taken over by activists.
If you do not agree with the propaganda, you will be ‘punished.’
Keep up the work.
Wayne
Luvsiesous.com

ferdberple
February 11, 2014 7:00 am

NotTheAussiePhilM says:
February 10, 2014 at 1:07 pm
– IMO, publishing mindless drivel like this one
=============
I agree, your comment was mindless drivel

ferdberple
February 11, 2014 7:17 am

rgbatduke says:
February 10, 2014 at 1:32 pm
Although the remaining models would still very likely be wrong, the observed temperature trend wouldn’t be too unlikely given the models and hence it cannot yet be said that the models are probably wrong. And I promise, the adjusted for statistical sanity CMIP5 MME mean, extrapolated, would drop climate sensitivity by 2100 like a rock, to well under 2 C and possibly to as low as 1 C.
============
Dr Brown makes a very good point. Why has the IPCC not simply dropped the models that are not consistent with observation? Any reasonably competent statistician would have done this, and said those models that remain cannot yet be rejected. And from this one could then provide a more informed estimate of climate sensitivity with increased confidence.
Instead the IPCC has kept all the models, even those that are clearly biased, and can be shown to be biased. They then compute an average that is clearly biased. And from this they conclude they have even greater confidence in the result. Scientifically this is fraud. There is no statistical basis for concluding increased confidence while the divergence is increasing.
Because the issue is not the “pause”. It is the increased divergence between observation and model mean that demonstrates there is no basis for increased confidence in the hypothesis that humans are causing the warming.
If one performs the statistical analysis correctly, then one can say with increased confidence that warming is likely to be much less than projected by the model mean.
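A minimal sketch of the screening described above: keep only the runs whose 1979-2013 trend falls within the observed trend's uncertainty range, then look at what survives. The input files and the 0.003 C/yr uncertainty half-width are illustrative assumptions, not values taken from AR5 or from Dr. Spencer's figure:

```python
# Screen model runs by comparing each run's linear trend with the observed trend.
import numpy as np

years = np.arange(1979, 2014)
obs = np.loadtxt("hadcrut4_annual_1979_2013.csv")      # shape (35,)        - assumed
models = np.load("cmip5_annual_1979_2013.npy")         # shape (n_runs, 35) - assumed

obs_trend = np.polyfit(years, obs, 1)[0]
obs_uncert = 0.003                                      # deg C/yr, assumed 95% half-width

model_trends = np.array([np.polyfit(years, run, 1)[0] for run in models])
keep = np.abs(model_trends - obs_trend) <= obs_uncert

print(f"Runs retained: {keep.sum()} of {len(models)}")
if keep.any():
    print(f"Retained-ensemble mean trend: {model_trends[keep].mean():+.4f} C/yr")
```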

Non Nomen
Reply to  ferdberple
February 11, 2014 11:06 am

” Why has the IPCC not simply dropped the models that are not consistent with observation? Any reasonably competent statistician would have done this, and said those models that remain cannot yet be rejected. And from this one could then provide a more informed estimate of climate sensitivity with increased confidence.”
Then there would be no more models left and the whole system implodes. The IPCC and its bondslaves know this. Btw, it makes an enormous impression having more than one hundred models. That they have all failed, who really cares? Not the IPCC!

richardscourtney
February 11, 2014 7:18 am

ferdberple:
I concluded my explanation at February 10, 2014 at 1:01 pm saying

In other words, “If the deep ocean ends up averaging 4.1 deg. C, rather than 4.0 deg. C, it won’t really matter” because it can’t.

At February 11, 2014 at 6:50 am you reply saying

I expect Dr. Spencer was being generous. I’d also like to see a calculation because I suspect a 0.1C change in deep ocean temps represents something like a 10C or more change in surface air temps over a period of hundreds or thousands of years.

I would welcome an explanation of that, please because I do not understand how it is possible for the putative 0.1 deg C rise in deep ocean temperature to make any discernible difference to anything except polar radiative flux.
Richard

Evan Jones
Editor
February 11, 2014 7:42 am

No time to go through all the comments (yet). But I want to ask about the start point. Is that a valid one, or does it make better sense to start later, thus reducing the variance?
People will be asking me that. I would like to have a correct answer, however the chips may fall.

higley7
February 11, 2014 7:45 am

The real observations, above, may show a pause, but they are not properly adjusted for the Urban Heat Island (UHI) effect, in which case we are cooling.
It strains credulity that their adjustments for UHI always warm the non-UHI-affected sites, thus raising the temperature average, rather than lowering the UHI-affected sites and the average as they should.
But, then they would not be following their political agenda, would they?

rgbatduke
February 11, 2014 11:45 am

She has pointed to a post on hotwhopper.com (have to love that blog name! — http://blog.hotwhopper.com/2014/02/roy-spencers-latest-deceit-and-deception.html ) which indicates that the graph is wrong because it “shift[s] the CMIP5 charts up by around 0.3 degrees”.
Which charts are those? I agree that there are several troubling things about the graph above, one of them being a lack of caption or legend, another being that HADCRUT4 is missing features (like the 1997-1998 ENSO peak) that are there, smoothed or not:
http://www.woodfortrees.org/plot/hadcrut4gl/from:1983/to:2014/mean:12
It might be even more smoothed, of course, but only at the expense of the end point(s). However, it indicates 90 CMIP5 model runs as well, where only 36 distinct named models are similarly plotted in AR5. So: are these runs for a single CMIP5 model, runs from several, or the independent means of many perturbed-parameter runs for 90 different models (if so, which 90, and how many runs contributed to each)? How was the mean of the CMIP5 models computed (weighted or not, PPE runs or not, etc.)? And finally, how were they normalized/locked down in the first year on the graph (which is well back into the model reference period and hence is not really when the models were released into the wild)?
However, none of these things save CMIP5 from the point he is making, although they are all valid questions for a peer review. At the very least the graph needs to be explained much better, and of course the BEST thing to do would be to analyze each contributing model one at a time relative to e.g. HADCRUT4, and only then (after e.g. 36 such analyses) make statements that are not argumentative but definitive and based on simple statistical analysis concerning the meaning and reliability of the CMIP5 MME mean as displayed in AR5 specifically.
rgb

Janice Moore
February 11, 2014 12:05 pm

Bruce C! (re: February 11, 2014 at 1:40 am)
THANKS for that kind response. I know this site isn’t Facebook (or whatever), but, it really gets depressing to have so MANY of my posts which are specifically addressed to an individual be ignored or unacknowledged or never read.
Re: the video malfunction — it’s happened to me several times. There’s no way (that I know of) to test ahead of time which videos have some kind of YouTube code that auto-blanks them when WordPress tries to turn them into a control window here. It is always SUCH a bummer when that happens. Well, I pasted in your search term and watched that phenomenal 1978 Buddy Rich drum solo. Gene Krupa v. Buddy Rich, really can’t say! Krupa and Goodman win for music though, all the way. The Andrews Sisters are fun, but their music is not nearly as good as Goodman’s or Ellington’s or Miller’s or LOTS of other musicians’. Wasn’t that truly the Golden Age for swing and dance music — wow.
Well, lest I be accused of being under the influence of the nearly full moon… (heh, if you have read my posts for any length of time, you will realize that the moon has NOTHING to do with it, lol)
.. I’ll stop.
Gratefully,
Janice
************************************************
@ Gold Minor (er?) — Great linked comment of yours. Thanks for sharing. And, I agree.
[The “Test” thread on WUWT main page (Home page, right upper corner) can always be used if something kneads hands-on manipulation before posting. Mod]

Optimizer
February 11, 2014 12:57 pm

I appreciate the message, which can’t be repeated enough, that the models of “settled science” have been spectacular failures, but I do have some technical qualms about the graph.
(1) Where’s the big temperature spike in 1998 (from a “super El Nino”, if I have the term correct)? This temperature history does not reflect the 15-20 year “pause”, in part because that is missing. The green data shows about a 10-year pause, and the blue just shows a low slope – no pause at all, really.
(2) The vertical axis is labelled “Departure from 1979-83 Average”. That’s fine, but how come ALL the curves start at zero? Did they ALL – INCLUDING THE REAL-WORLD DATA – just HAPPEN to be equal to that average? It’s just not credible.
(3) The real-world data is in the neighborhood of a handful of model results. Are we to conclude that these 5% of models might possibly be right!?! I kind of doubt it, myself, but you can’t see how those few compare with the data (it’s too “busy”), and there’s no discussion about what is special about their models. Probably, they don’t match very well early on, but from what you can see here you’d have to wait 5 or 10 years to throw away those models, too (assuming they don’t finally get a break from Mother Earth).
I’d really like to see what this would look like if these issues were addressed.

Werner Brozek
February 11, 2014 1:25 pm

Optimizer says:
February 11, 2014 at 12:57 pm
Here are my best answers to some of your questions.
(1)Where’s the big temperature spike in 1998 (from a “super El Nino”, if I have the term correct)?
The mean of 60 months was taken so the spike got drowned out by the two La Ninas on either side of 1998 as seen below.
http://www.woodfortrees.org/plot/hadcrut4gl/from:1979/mean:60/plot/uah/from:1979/mean:60/plot/hadcrut4gl/from:1979/mean:12/plot/uah/from:1979/mean:12
(2) The vertical axis is labelled “Departure from 1979-83 Average”. That’s fine, but how come ALL the curves start at zero?
It is easy to offset all curves to start at the same point in any given year for comparison purposes, to easily see the differences from that point on. That does not affect the relative change in 20 years, but it makes it easier to see.
(3) The real-world data is in the neighborhood of a handful of model results. Are we to conclude that these 5% of models might possibly be right!?!
That could very well be the case! However if they were ever to admit it, then CAGW would cease to exist. Would they ever admit that?

Janice Moore
February 11, 2014 1:46 pm

Dear Moderator,
Thank you, so much, for taking the time to tell me (and others) about the Test thread. I’ve wanted to use that for that exact purpose (video post test) before, but, every time (about 3 times) I accessed the thread, there was no comment box (or it was there, but tiny, and would not “open”). I didn’t want to bug you with a Q about it, so, I just figured it would get fixed someday… .
Hey! I just went back and tested the Test thread and THERE was the comment box — and it opened for me, this time.
Weird. Maybe, it just takes a long time to materialize and I wasn’t patient enough before… .
Well, anyway, thanks for helping us out, here!
AND THANK YOU, SO MUCH, FOR ALL YOUR WORK FOR TRUTH IN SCIENCE!
Take care,
Janice
[Ask not for whom the Mods toil, lest they troll for you. 8<) mod]

negrum
February 11, 2014 2:26 pm

goldminor says:
February 11, 2014 at 1:20 pm
“… However, it wasn’t long before I noticed a few new warmist participants on WUWT, and sure enough my updates were knocked out again. …”
—-l
Paranoia on the net is a healthy survival trait, but in this case you might be reading too much into it. Using a different machine (if you can) would go a long way to testing your hypothesis 🙂

February 11, 2014 2:35 pm

The CAGW Modeler’s Mantra: “I reject real reality and substitute my own.”

Evan Jones
Editor
February 11, 2014 2:35 pm

Is there a version of this with margins of error?

Janice Moore
February 11, 2014 2:45 pm

Oh, Gunga Din, I sure hope you check back here… .
Steve Garcia gave you such a fine answer re: brightness of Venus and Earth on the Mars photo thread, here:
http://wattsupwiththat.com/2014/02/06/stunning-photo-earth-as-seen-from-mars/#comment-1563141
************************************
Thank you, dear Mod, for all your valiant toil on our behalf. If you troll for us…., however, (ahem) we fish are very canny and will not take the bait (smile).
#(:))

Janice Moore
February 11, 2014 2:49 pm

Re: “I am no “new theorist” in the climate field.” (V.P. again)
“Old error in new dress
is ever error, nonetheless.”
C. S. Lewis, The Screwtape Letters
Give it up, V.P. — with every post, you only confirm our suspicions about your incompetence.
Meh, on the other hand, KEEP ON POSTING — we can use a laugh.

Janice Moore
February 11, 2014 2:50 pm

In his humble opinion. LOL.

ferdberple
February 11, 2014 5:43 pm

Isn’t the fact that the models show a spaghetti result an indication of the size of natural variability? Otherwise, what explains the large variability in the model results?

ferdberple
February 11, 2014 5:59 pm

richardscourtney says:
February 11, 2014 at 7:18 am
I would welcome an explanation of that, please because I do not understand how it is possible for the putative 0.1 deg C rise in deep ocean temperature to make any discernible difference to anything except polar radiative flux
============
Hi Richard, my meaning obviously was not clear. When I said I thought Dr Spencer was being generous, I meant he was being generous in the amount of warming one would see in the deep oceans. The more likely amount would be a small fraction of 0.1C.
To raise the deep oceans 0.1C would likely take a staggering increase in surface temps over a period of centuries. My interest in seeing a calculation was to see if we could get a handle on how much it would take.
However, the process is not reversible. You cannot use 0.1C warming of the deep oceans to warm the atmosphere in excess of 0.1C. If you could, you could design all sorts of wonderful machines to extract free energy from the oceans.
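A rough order-of-magnitude version of that calculation is sketched below. The ocean mass and seawater heat capacity are round textbook figures, and nothing here says how long any real-world forcing would take to supply that much energy.

# Rough order of magnitude only: the energy needed to warm the whole ocean by 0.1 C.
# Ocean mass and specific heat are round textbook figures (assumptions).
ocean_mass_kg = 1.4e21        # approximate mass of the global ocean, kg
specific_heat = 4.0e3         # J per kg per K, roughly, for seawater
delta_T = 0.1                 # K

energy_joules = ocean_mass_kg * specific_heat * delta_T
print("Energy required: about %.1e J" % energy_joules)   # on the order of 6e23 J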

Global cooling
February 11, 2014 8:10 pm

It is important to remember that "global" warming happens only in the Nordic areas. The Southern Hemisphere and the tropics do not warm. The consequences of this are positive. Large areas in Siberia and Canada could become habitable. Maybe even Greenland would become green again. There is nothing to worry about if the weather in Northern Europe becomes similar to the weather Germany has now.

Larry Ledwick
February 11, 2014 8:42 pm

ferdberple says:
February 11, 2014 at 5:43 pm
Isn’t the fact that the models show a spaghetti result an indication of the size of natural variability? Otherwise, what explains the large variability in the model results?

You mean besides the obvious conclusion that they have no clue what they are doing, and the models are just churning out random crap that sort of indicates there might be warming in the future, sorta maybe, we think?

February 11, 2014 10:20 pm

Global cooling says:
February 11, 2014 at 8:10 pm
There is nothing to worry about if the weather in Northern Europe becomes similar to the weather Germany has now.
———————————-
This is likely to be the interlude before the next round of cooling for that region.

February 11, 2014 10:34 pm

negrum says:
February 11, 2014 at 2:26 pm
————————————–
What would possibly cause 'Cumulative security update for Internet Explorer 8' to disappear about 20 times over the last several years, plus Security Update for Microsoft .NET Framework 2.0 SP2 on Windows Server 2003 and Windows XP x86 (KB2898856), Security Update for Microsoft .NET Framework 4 on XP, Server 2003, Vista, Windows 7, Server 2008 x86 (KB2898855), Security Update for Windows XP (KB2916036), Security Update for Windows XP (KB2909210), Security Update for Microsoft .NET Framework 2.0 SP2 on Windows Server 2003 and Windows XP x86 (KB2901111), Security Update for Microsoft .NET Framework 4 on XP, Server 2003, Vista, Windows 7, Server 2008 x86 (KB2901110), plus several others? Is this normal for computers? Why did this mostly happen when visiting posts by Colorado Bob at Newsvine, until I stopped going to Newsvine? I pay attention to what happens around me. I do not believe that I am wrong in this matter, or I most certainly would not have stated my thoughts on the subject.

Dr. Strangelove
February 11, 2014 10:53 pm

rgbatduke
Other than saying the models are all wrong, can you also say greenhouse gases have had no detectable effect on climate, at least in the last 15 years? Or that whatever effect they may have is indistinguishable from natural variability?
I hold that this is true not only for the last 15 years but also for the last 133 years. NOAA temperature anomaly data from 1880-2013 show no statistically significant warming until 1998. By statistically significant I mean a deviation of more than two sigma from the mean, the usual standard for distinguishing a real effect from noise.
However, the margin of error in the data is +/- 0.09 C. Taking this into account, there is no statistically significant warming at all since 1880. All the warming (and cooling) in the data is indistinguishable from noise. There is no anthropogenic signature in the data, nor even a non-random natural signature. Either the changes are trivial or the measurements are too inaccurate to detect the small effect we are looking for.
I also hold that the global temperature trend behaves like a random walk. The observed temperature graphs can be reproduced using the simplest random walk function with just one random variable. The point is that random events can produce trend lines that appear deterministic.

Eugene WR Gallun
February 11, 2014 11:15 pm

The missing heat is being absorbed by the land not the sea. This has energized the continents and the speed of continental drift has increased dramatically. With carbon dioxide continuing to increase in the atmosphere it won’t be long before the continents are up to motor boat speed.
Eugene WR Gallun

Dr. Strangelove
February 12, 2014 1:02 am

rgbatduke
This is my random walk function:
T(n) = T(n-1) + A*X + B
where T is the temperature anomaly, n is the year, X is a random integer, and A and B are empirically derived coefficients.
With this very simple function, I can reproduce all actual temperature graphs. The results are amazing! The real graph and the random-walk graph look almost identical. Both the magnitude of the changes and the sequence of events are consistent with what is expected from random processes.
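A minimal Python sketch of that recipe is below. The values of A and B and the range of the random integer X are arbitrary placeholders, since the fitted coefficients are not given above.

import random

def random_walk(n_years, A=0.05, B=0.0, t0=0.0):
    # T(n) = T(n-1) + A*X + B, with X a bipolar random integer in {-1, 0, 1}.
    # A and B here are placeholders, not empirically derived values.
    T = [t0]
    for _ in range(n_years - 1):
        X = random.randint(-1, 1)
        T.append(T[-1] + A * X + B)
    return T

if __name__ == "__main__":
    random.seed(1)
    series = random_walk(134)       # roughly the 1880-2013 span discussed above
    print("final simulated 'anomaly':", round(series[-1], 3))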

February 12, 2014 1:04 pm

Does anyone know of a procedure that is recognized by the IPCC by which an IPCC climate model or specified group of such models can be invalidated? According to Vincent Gray (“Spinning the Climate”) the IPCC stopped claiming to have validated its models after he complained to the management that these models were insusceptible to being validated.

Visiting Physicist
February 12, 2014 6:48 pm

snip – more CRAP from banned commenter Doug Cotton

February 12, 2014 8:03 pm

Visiting Physicist,
Can we clear this up, please? Are you Doug Cotton?

negrum
February 13, 2014 12:15 am

goldminor says:
February 11, 2014 at 10:34 pm
” What would possibly cause ‘Cumulative security update for Internet Explorer 8′ to disappear about 20 times over the last several years …”
—-l
Possibly Micro$oft.
I believe you when you say that you do not believe you are wrong in this matter. I urge you to do the test I recommended, for your own peace of mind, preferably using someone else's machine. If you can demonstrate that warmists find you important enough to hack, and are doing so by rolling back your Microsoft security updates, you will have performed a major service for sceptics everywhere 🙂
If you are in the mood for experimenting, try Firefox.

February 13, 2014 9:14 am

As Dr. Spencer reports, there is a great disparity between the computed and observed global temperatures. However, governmental policies continue to be made on the basis of the computed temperatures. A partial understanding of this phenomenon can be gained by considering what an IPCC-style "evaluation" of a model actually contains. In an evaluation, response functions are computed that map time to the associated global temperatures. These response functions (the squiggly lines of Dr. Spencer's graphic) are plotted alongside one or more global temperature time series (the HadCRUT4 and UAH Lower Troposphere series in Dr. Spencer's graphic).
That's an evaluation. Notably absent from an evaluation is a decision on whether to retain or throw out any particular model. In legitimate science, a model is thrown out when falsified by the evidence and retained when statistically validated. A model is falsified when the observed relative frequencies of the outcome events fail to match the predicted relative frequencies, and validated when they do match. For the CMIP5 models, neither falsification nor validation can occur, because neither the events nor their relative frequencies are defined by the methodology of the research. The climatologists who were hired to design a scientific study of the global warming phenomenon blew their assignment!

Bernie Hutchins
February 13, 2014 9:28 am

Dr. Strangelove said in part February 12, 2014 at 1:02 am:
“. . . . .This is my random walk function:
T(n) = T(n-1) + A*X + B
where T is the temperature anomaly, n is the year, X is a random integer, and A and B are empirically derived coefficients.
With this very simple function, I can reproduce all actual temperature graphs. The results are amazing!”
What you have is a discrete-time integrator (pole at z=1) whose input is a scaled random integer (you don't say, so let's assume the integer is bipolar and zero-mean) plus a constant. For one thing, for non-zero B this anomaly will always ramp to plus or minus infinity (actual temperature to infinity or below absolute zero!). As for saying, as you do, "I can reproduce all actual temperature graphs": only if B=0, and only if you believe in enough monkeys and enough typewriters.
Did you mean to say that the results at times LOOK a lot LIKE actual temperature graphs? The integrated random number (assumed white) is red noise (sometimes called Brown noise after Brownian motion, or a random walk, etc.), which is often used as a TEST signal for comparison against actual temperature series. That is what it is good for. But it is not meant to ever represent an actual series.
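A quick numerical illustration of that point, again with arbitrary placeholder coefficients rather than anything fitted to data: with B = 0 the ensemble-average endpoint of the walk stays near zero and only the spread grows, while a non-zero B adds a deterministic ramp of n*B.

import random
import statistics

def walk_endpoint(n_steps, A=0.05, B=0.0):
    # Endpoint of the walk T(n) = T(n-1) + A*X + B, starting from zero.
    T = 0.0
    for _ in range(n_steps):
        T += A * random.randint(-1, 1) + B
    return T

if __name__ == "__main__":
    random.seed(2)
    for B in (0.0, 0.01):
        ends = [walk_endpoint(134, B=B) for _ in range(2000)]
        print("B =", B,
              "| mean endpoint:", round(statistics.mean(ends), 2),
              "| spread (std dev):", round(statistics.stdev(ends), 2))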

February 13, 2014 6:57 pm

negrum says:
February 13, 2014 at 12:15 am
goldminor says:
February 11, 2014 at 10:34 pm
—————————————-
Sorry for venting my problems, and thanks for the consideration.
I have used Firefox for several years now. Question: do I need the IE8 security updates if I do not use IE8? I realize that this will be a moot point soon, as support for Win XP ends in April. I will have to move on to Windows 7, which sounds like the best choice. Also, I will buy the Home version and not the Pro version.
May I ask a favor of you? I use this link, http://arctic.atmos.uiuc.edu/cryosphere/, to view polar data. Recently, if I click on their interactive chart, the chart that comes up no longer shows the full range of years on the right side. It cuts off at 2008. Then today it initially showed the years up to 2011, and on a refresh it only showed to 2008 once again. This started just 3 or 4 days ago. Would you check and see whether you get the full chart, with all years to 2013 on the right side?

negrum
February 14, 2014 12:16 am

goldminor says:
February 13, 2014 at 6:57 pm
—-l
The chart seems to display fine up to 2014 (yellow line). I used Opera 9.51 (all features disabled except cookies from the site and javascript) on XP SP2, which seems to give the best speed.
The only Microsoft products I would recommend you use are the operating system (slimmed, trimmed and locked) and Microsoft Office (if Open Office does not meet your needs).
I have never installed any of the Microsoft security patches or updates. It probably does no harm to install them, and it might improve your security level. I strongly recommend keeping a separate machine from your network machine for all personal data, with the autorun features of the flash drive properly disabled on both machines.

Spector
February 14, 2014 3:55 pm

According to the MODTRAN web utility, which is based on absorption-spectrum data, the raw effect of CO2 on global temperatures seems to be just shy of one degree C for each complete doubling of the amount of CO2 in the atmosphere. It is my understanding that many of the IPCC models assume a dangerous positive feedback mechanism capable of amplifying this to something like 2.2 to 3.3 degrees C per doubling. The HadCRUT4 data published by the UK Met Office seem to show that average temperatures have only gone up about 0.8 degrees C since 1920, while the level of CO2 in the atmosphere has gone up by almost a half doubling (a factor of about the square root of 2). On that basis it seems hard to justify more than 1.6 degrees per CO2 doubling, and even that assumes that the increases in black carbon, urbanization, heat produced by human industry, deforestation, and all the other anthropogenic chemicals introduced into the environment have had no measurable effect on global temperatures.
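The arithmetic behind that 1.6-degree figure, using the numbers quoted above and the stated, generous assumption that all of the observed warming is due to CO2:

import math

observed_warming = 0.8            # deg C since about 1920, per the HadCRUT4 figure above
co2_ratio = math.sqrt(2.0)        # "almost a half doubling" of CO2
doublings = math.log2(co2_ratio)  # = 0.5
print("Implied warming per CO2 doubling:",
      round(observed_warming / doublings, 1), "deg C")   # 1.6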

Editor
February 20, 2014 6:54 pm

A useful exercise would be to compare and contrast the 5% of models that hewed relatively closely to observed reality vs those that did not, and look at what was so different about them.