Day of reckoning draws nearer for IPCC

According to Dr. Clive Best, a key prediction from the 2007 IPCC WG1 report fails statistical tests.

Abstract: Global temperatures measured since 2005 are incompatible with the IPCC model predictions made in 2007 by WG1 in AR4. All subsequent temperature data from 2006 to 2011 lie between 1 and 6 standard deviations below the model predictions. The data show with a > 90% confidence level that the models have over-exaggerated global warming.

Background: In 2000 an IPCC special report proposed several future economic scenarios, each with a different CO2 emission profile. For the 2007 assessment report these scenarios were used to model predictions for future global temperatures. The results for each of the scenarios were then used to lobby governments. It would appear that, as a result of these predictions, there is one favoured scenario – namely B1, which alone is capable of limiting temperature rises to 2 degrees.

Full story here

 

Jenn Oates
February 29, 2012 10:57 pm

That’ll be on the front pages of all major media tomorrow morning, I’m sure.
/

Steptoe Fan
February 29, 2012 10:59 pm

This is terrific! I want to try and get this poster into local middle and high schools.

crosspatch
February 29, 2012 11:14 pm

“the models have over-exaggerated global warming.”
Simple exaggeration wasn’t enough, they had to go and OVER exaggerate!
Great article.

DirkH
February 29, 2012 11:32 pm

What? The IPCC was wrong. Too bad. We just dismantled our heavy industry.
http://www.spiegel.de/international/business/0,1518,816669,00.html

jones
February 29, 2012 11:52 pm

I’ve said it before and I’ll say it again: there is already a ready-made scare just waiting on the shelf to be dusted off…
Then down the memory hole it will all go…..
Doubleplusgood….

Anomaly UK
March 1, 2012 12:01 am

Um, this isn’t very good. Sure, it has been cooler than the models predict, and that’s important, but the confidence levels quoted are based on the error in measurements of global temperature only. Nobody ever claimed they could predict year-by-year temperatures with accuracy of +/- 0.05C, but that’s what’s been falsified.
I’m pretty sure if you look at the source of those A2 / A1B etc. curves there’s some error bars and they’re bigger than 0.05 degrees. Sorry I don’t have time to check.

Macro Contrarian (@JackHBarnes)
March 1, 2012 12:20 am

So does statistical significance work in both directions?

Adam Gallon
March 1, 2012 12:57 am

And it still makes no difference; the political process will continue to grind along for as long as it can be dragged out.

Mr Green Genes
March 1, 2012 1:39 am

This response on Dr Best’s website bears repeating.
So, if it walks like a duck, quacks like a duck and looks like a duck then it is definitely catastrophic global warming.
Congratulations to Boudumoon for that gem!

KNR
March 1, 2012 1:47 am

Hands up anyone who is surprised.

John Marshall
March 1, 2012 2:03 am

Even if the alarmist theories were correct, why is 2C considered safe, given that the MWP was 2-3C warmer and the RWP 5C warmer than today, while the alarmist estimate of CO2 levels back then is 280 ppmv? So with CO2 levels so low, what caused the warming? Certainly neither the Romans nor Medieval people drove SUVs.

richard verney
March 1, 2012 3:08 am

DirkH says:
February 29, 2012 at 11:32 pm
/////////////////////////////////////
It is difficult to understand what the politicians have been thinking these past 15 to 20 years.
High energy costs are simply madness, especially to an industrial based economy; it increases the costs of manufacture and the costs of distribution. For many industries, the cost of energy is the largest component in the costs of raw materials.
If it turns out that the case for CO2 emission reduction has been over-egged, whether because of incorrect assumptions as to sensitivity (or otherwise), there will be a severe backlash on this issue. It is something that will not simply disappear quietly, given the unemployment and poverty resulting from the pursuit of green policies. The public will not only be demanding answers but also baying for blood.
Merkel’s decision with respect to nuclear was very stupid, coinciding as it did with mounting evidence of the high costs, unreliability and grid problems associated with wind and solar. She has given France the real prospect of a stranglehold over Germany. I envisage that the Germans will long rue the hasty and knee-jerk decision made to close down much of their nuclear generating plants.

richard verney
March 1, 2012 3:15 am

John Marshall says:
March 1, 2012 at 2:03 am
/////////////////////////////////////////////////
For anyone who claims that 2, 3, 4, 5 deg C warming would be catastrophic, it should be a pre-requisite for them to demonstrate what catastrophic climate/weather problems beset man living during the MWP, the RWP, the Minoan Warm Period and the Holocene Optimum. As far as we know from history, the MWP, the RWP and the Minoan Warm Period were times of plenty in which man thrived.

Clive Best
March 1, 2012 3:17 am

The probability argument concerning the data is simply this :
The quoted error on a single temperature anomaly measurement is 0.05 deg.C (see here). If you measure the shortfall between the 6 anomaly measurements and the lowest of the 3 scenarios – B1 – then you find shortfalls of (1,3,4,2,2,6) standard deviations. If this were due to noise then one would expect +- 1 or 2 standard deviations at most. The probability that randomly all of them lie so far below the scenarios is naively the product of each probability. This leads to a very low probability that the scenario predictions are correct. A proper analysis (courtesy of bbbeard) using the spread of measurements (0.08 deg.C) gives a probability of 1%. So the statement > 90% confidence is correct. The reason all this is important is that these very scenario curves have been projected into long-term predictions and then used to argue for drastic curbing of carbon emissions to limit warming to 2 degrees.
The question of model-to-model uncertainties: here I think we have a different problem. Yes, you are correct: the spread in model predictions seems to be getting larger, leading to statements that the data are within the spread of models. This may be factually correct, but the fraction of model calculations still consistent with the data is just those with low feedbacks.
A healthy scientific method should be as follows:
A theoretical model is developed to describe some physical process. The model will have a number of unknown parameters which determine the result. The values for these parameters start as best guesses, and the model then makes predictions of measurable variables for experimentalists. Experiments then make the measurements and compare them to the theory. The models are then either modified with new parameters which better describe the data, or, if this is not possible, the model is rejected.
The problem with climate science, it seems to me, is that predictions of models made 22 years ago have had a massive political impact, with the consequence that these predictions have been set in stone. This is not because the science has not evolved – it has. It is mainly due to the political fallout of being wrong. I fully accept the basic physics of AGW leading to ~1 degree warming for a doubling of CO2. The feedbacks (mainly due to water) however are rather uncertain and could even be negative. The models need to be tuned to fit the data and NOT the other way round!
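For readers who want the naive calculation spelled out, here is a minimal sketch in Python of the product-of-tail-probabilities argument above; the (1,3,4,2,2,6) sigma shortfalls are quoted from the comment, and the independent-Gaussian-noise assumption is the comment’s own:

```python
# Minimal sketch of the naive probability argument above: if each annual
# anomaly were independent Gaussian noise about the B1 scenario curve, how
# likely is it that all six fall this far below it? Shortfalls are the
# (1, 3, 4, 2, 2, 6) standard deviations quoted in the comment.
from scipy.stats import norm

shortfalls_sigma = [1, 3, 4, 2, 2, 6]  # (scenario - data) in standard deviations, 2006-2011

# One-sided tail probability that noise alone produces each shortfall
tail_probs = [norm.sf(s) for s in shortfalls_sigma]  # sf(x) = 1 - cdf(x)

p_joint = 1.0
for p in tail_probs:
    p_joint *= p  # naive product, assuming independent years

print("per-year tail probabilities:", [f"{p:.1e}" for p in tail_probs])
print(f"naive joint probability of a chance shortfall: {p_joint:.1e}")
```

As the comment notes, this naive product is far more extreme than the proper analysis using the 0.08 deg.C spread of the measurements, which gives about 1%.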

Mike (One of the Many)
March 1, 2012 3:20 am

Unfortunately, “It’s the Sun, Stupid” never goes down very well with our not so friendly warmists…
Traditionally, the approach they’ve taken is to try to erase from history or marginalize both the MWP and the RWP.
The sooner people get used to the fact that we live with a bit of a variable star the better, really – well, unless they want to start claiming that the Romans invented SUVs, which their Medieval counterparts rediscovered and started driving around 😉

Peter Stroud
March 1, 2012 4:03 am

Another excellent technical paper that falsifies the IPCC surface warming predictions. It should, of course, lead to a dismantling of the most stupid effects of the Climate Act, such as wind generation and biofuels, and a complete rethink of energy policy. But it won’t. The policy makers will just ignore it, and continue to walk around with their fingers in their ears and their blindfolds on.

suissebob
March 1, 2012 4:18 am

I pointed a commenter to Gavin Schmidt’s site where a similar chart exists and stated:
“All surface temperature models are running warm compared to reality.”
So I did interpret the chart correctly, I’m not that thick after all!
He didn’t like my source:
“No I asked for scientific institution. That’s a dude on a blog.”
🙂

Bill Illis
March 1, 2012 4:25 am

Hadcrut3 is now lower than every single one of the 23 climate models used in the IPCC AR4 (and there is quite a spread in these models).
http://img99.imageshack.us/img99/5937/ipccmodelspreadvshc3jan.png

March 1, 2012 4:46 am

This very well researched piece blows apart the latest rubbish. http://bit.ly/9NDJ5

Garacka
March 1, 2012 4:53 am

Might “over-exaggerate” be considered a double negative?

1DandyTroll
March 1, 2012 5:02 am

The IPCC and the Gore-Mannian people are using the same flawed logic of the 18th century balloonist-would-be-astronauts who thought they could reach the moon if only they could pump in more hot air.

Frank K.
March 1, 2012 5:42 am

Garacka says:
March 1, 2012 at 4:53 am
“Might “over-exaggerate” be considered a double negative?”
Check it out here.
I agree with this answer:
“Exaggerate implies no degree, so it seems appropriate to indicate one sometimes: “He barely exaggerated!” or: “That was a huge exaggeration.” But I am not sure if “over exaggerate” is the same principle. It seems intended to describe a supposedly different concept, rather than a degree, but it doesn’t actually imply a different concept, because exaggerate does not mean “moderately exaggerate.””
Of course, with the IPCC, everything is an “over-exaggeration”! :^)
Meanwhile, the Earth’s temperature (according to the UAH AMSU Chan.5 daily temperature) is running quite cold…
http://discover.itsc.uah.edu/amsutemps/execute.csh?amsutemps

March 1, 2012 5:45 am

So it sounds to me as if we need to reject the null hypothesis! There is sufficient evidence to conclude that these guys are full of it!

March 1, 2012 5:47 am

Reblogged this on gottadobetterthanthis.

John Greenfraud
March 1, 2012 5:59 am

The alarmists have lost, or are losing, on each and every scientific point. It doesn’t matter; CAGW fear-mongering is not about science, it is about control. They will simply roll out more equally absurd claims, or change the name again, and continue on as if nothing is wrong. Their goal is not to get the science correct, but to make it plausible enough to sway public opinion. That being said, I don’t wish to downplay the importance of the people here exposing CAGW’s faulty methods, predictions and shoddy science; it’s invaluable. Just expect the progressive media to trot out more name-calling, demagoguery and personal attacks for the ’cause’. Thanks, keep up the good work.

DR
March 1, 2012 6:15 am

HadCRUT4 will “fix” the problem 🙂

March 1, 2012 6:55 am

The Sun is not helping either; watch out for more solar variability papers with “We propose that xyz can amplify small solar fluctuations”.
The latest SIDC SSN = 33 (for February) is on the low side.
Dr. Hathaway has already cut back his ‘prediction’.
http://www.vukcevic.talktalk.net/NFC7a.htm

March 1, 2012 7:00 am

OT
Hey, we are near solar max for SC24, and what do we have here? One sunspot and one speck. http://www.vukcevic.talktalk.net/img3.htm

RobW
March 1, 2012 7:23 am

Data between 2006-2011… Um, data? We don’t need data when we have perfectly good models…

michael hart
March 1, 2012 7:34 am

I’ve been wondering for a while about the IPCC models. At what point [for each individual model] does the statistical grim reaper appear and tap the IPCC on the shoulder?

Terry Oldberg
March 1, 2012 8:32 am

Dr. Best’s “predictions” are actually “projections” and while predictions are falsifiable, projections are not.

JJ
March 1, 2012 8:38 am

michael hart says:
I’ve been wondering for a while about the IPCC models. At what point [for each individual model] does the statistical grim reaper appear and tap the IPCC on the shoulder?

The grim reaper will first have to find IPCC. He should look progressively deeper in the ocean, as that is where they will be hiding.

JJ
March 1, 2012 9:08 am

Terry Oldberg says:
Dr. Best’s “predictions” are actually “projections” and while predictions are falsifiable, projections are not.

Then “projections” are not science, and should not be misrepresented as such. Until that fact is reflected in practice, quibbling over the distinction is just semantics.

Taphonomic
March 1, 2012 9:12 am

Testable hypotheses? We ain’t got no testable hypotheses. We don’t need no testable hypotheses. I don’t have to show you any stinking testable hypotheses.
(apologies to “Treasure of the Sierra Madre”)

Charlie A
March 1, 2012 9:22 am

The math on uncertainty and the number of standard deviations, and therefore the probabilities, is all incorrect.
See Lucia’s Blackboard for examples of the calculations done correctly. (The blog link is in the right-hand column of WUWT, under “Lukewarmers”.)

Terry Oldberg
March 1, 2012 9:36 am

JJ:
You seem to say that to make a distinction between predictions and projections is “just semantics.” Actually, to make this distinction is essential to one’s grasp of an important fact about the IPCC’s inquiry into AGW. Notwithstanding IPCC representations to the contrary, the methodology of this inquiry was neither scientific nor logical.

Jim G
March 1, 2012 9:41 am

Macro Contrarian (@JackHBarnes) says:
March 1, 2012 at 12:20 am
“So does statistical significance work in both directions?”
Yes, as the confidence intervals are + or –. However, we must remember that significance testing is typically used for sampling-error confidence only. It does not limit or measure other types of errors such as data quality, observational error (heat islands), input errors, intentional selection of data points to push a theory, non-use of other independent variables more suited to the analysis, interdependence of independent variables (multicollinearity), etc., ad nauseam.

Jack Greer
March 1, 2012 10:44 am

Take a hint from Anomaly UK => http://wattsupwiththat.com/2012/02/29/day-of-reckoning-draws-nearer-for-ipcc/#comment-909129
… and Charlie A. => http://wattsupwiththat.com/2012/02/29/day-of-reckoning-draws-nearer-for-ipcc/#comment-909481
I’m sorry, but this analysis is nonsense. Clive is taking a temperature data-point accuracy error and applying it to a temperature time series. Makes no sense whatsoever.

Septic Matthew/Matthew R Marler
March 1, 2012 11:07 am

Terry Oldberg: Dr. Best’s “predictions” are actually “projections” and while predictions are falsifiable, projections are not.
Why exactly is that? The projections are cited in Congressional testimony and written exhortations as though that is what will happen if we do not act. The conditions for which the projections were made are satisfied (except that CO2 continues to rise, and a few projections assumed non-rising CO2), and the projected temperatures have not occurred. Why does that not show that the projections have been incorrect?
If Dr. Best’s “predictions” are actually “projections”, when will the AGW promoters tell us that they are of no consequence and may be ignored?

Reply to  Septic Matthew/Matthew R Marler
March 1, 2012 12:57 pm

Septic Matthew:
A “prediction” is an inference from the state of a system at the beginning of an independent statistical event to the state of the same system at the end of the same event. The former state is a condition on the Cartesian product of the values of the independent variables of the model. The latter state is a condition on the Cartesian product of the values of the dependent variables of this model. Conventionally, the latter state is called the “outcome” of the associated event. The complete set of events is an example of a “statistical population.” When the elements of a subset of these events are observed, this subset is a “statistical sample.” In testing a model, one compares the predicted to the observed relative frequencies of the various possible outcomes. If there is not a match, the model is falsified by the evidence.
The relationship between the events and the predictions is one-to-one. Thus, a necessary condition for predictions to be made by a model is for a statistical population to be referenced by it. If you were to search AR4 for a citation to the statistical population underlying the IPCC’s conclusions, you’d draw a blank, for there is no population. It follows that: a) the IPCC models cannot be statistically tested and b) the methodology of the IPCC’s inquiry into AGW was not scientific.
In the minds of many, “projections” play the role of “predictions.” However, this cannot be so in view of the missing statistical population. The conflation, by professional climatologists and others, of the idea that is referenced by the word “projection” with the idea that is referenced by the word “prediction” has produced the ultimate disaster for the IPCC’s inquiry into AGW. This is for the inquiry to have been regarded as a scientific inquiry when it was not one.

Werner Brozek
March 1, 2012 11:31 am

The poster has the slope for CO2 wrong. Since 1997, it is about 2 ppm/year and not 1.
(slope = 1.95337 per year)
http://www.woodfortrees.org/plot/esrl-co2/from:1980/plot/esrl-co2/from:1997/trend
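The woodfortrees “trend” in the link above is an ordinary least-squares slope; here is a minimal sketch of the same computation, with stand-in values rather than the actual ESRL CO2 data:

```python
# Minimal sketch of the linear trend behind the woodfortrees link.
# The series below is a stand-in, not the actual ESRL CO2 data.
import numpy as np

years = np.arange(1997, 2012)
co2_ppm = 363.0 + 1.95 * (years - 1997)  # illustrative ~2 ppm/yr series

slope, intercept = np.polyfit(years, co2_ppm, 1)
print(f"CO2 trend since 1997: {slope:.2f} ppm/year")  # ~1.95, as quoted
```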

Reply to  Werner Brozek
March 1, 2012 1:32 pm

@Werner Brozek
Sorry for the typo – yes, it should be 2 ppm per year. It has been fixed now.

Werner Brozek
March 1, 2012 11:38 am

The January 2012 value for HadCrut3 at about 0.22 certainly does not help THE CAUSE. At 0.22, it would rank 18th hottest. And UAH for February certainly will not help either.
http://www.climate4you.com/GlobalTemperatures.htm#HadCRUT3%20TempDiagram

RaymondT
March 1, 2012 12:05 pm

The strategy now used at RealClimate and by Barry Bickmore is to state that the predictions are still within the experimental (data) errors, as discussed in Barry Bickmore’s blog on the WSJ article. The authors of this latter article argued that the overprediction of the temperature anomalies disproves the climate models. The spread in the different model predictions is so large that we may have to wait until 2030 to really test the predictive capability of the climate models. As argued by Judith Curry in her blog while discussing fig 9.7 of the FAR IPCC report, the model and experimental errors may be too large to effectively test the climate models.

kadaka (KD Knoebel)
March 1, 2012 1:05 pm

Steptoe Fan on February 29, 2012 at 10:59 pm:
this is terrific ! I want to try and get this poster into local middle and high schools.
Werner Brozek on March 1, 2012 at 11:31 am:
The poster has the slope for CO2 wrong.
Are you all talking about this here poster linked to on the originating piece? If so, it might have been nice to clarify that point.
For myself, the attempted download of the poster on dial-up has now conked out early four times straight. Seems like a hosting issue, his site uses wordpress software but it doesn’t appear to be hosted on wordpress-dot-com. While I guess it’s interesting to look at, I’m giving up for now.

SteveSadlov
March 1, 2012 1:27 pm

Heh … I’ve been betting on +0.3 deg C for a good many years … wish I’d bet real money!

Rosco
March 1, 2012 1:33 pm

Will the REAL deniers please stand up.
The really funny thing about the AGW con is that the “true believers” call opponents “deniers”.
Only a blind fool could miss the “Inconvenient Truth” that it is the true believers who are the deniers – they deny reality with their ludicrous theory which has NO substantiated evidence to support it.
They got the whole basis for their theory wrong by missing the really obvious fact that during the day the atmosphere acts to reduce the heating effect of the solar radiation – not add heat as they idiotically claim.
Is there any proof ??
You betcha – because without an atmosphere to REDUCE the heating effect of the solar radiation during the day – and after all during the day is all that matters as there is NO solar radiation at night – I thought that needed explaining to people who deny reality – the Earth would be subjected to temperatures like the Moon – about 120 degrees C.
After all, both the Earth and Moon are subject to the same intensity of solar radiation !!
So – CLEARLY – the Earth’s atmosphere actually REDUCES the heating impact.
Only a real DENIER could argue that isn’t true.
So the real “deniers” are actually the “true believers” – those who deny reality in favour of their pet “religion”.

kadaka (KD Knoebel)
March 1, 2012 2:21 pm

Ha-ha! Great comment at the original story:

Jack Greer says:
March 1, 2012 at 2:48 pm
This is exactly the type of BS that will get you cross-posted at WUWT every time. You can’t be serious, Clive.

Jackie then included a link to RealTrueClimateStories, where presumably the Climate Science™-approved method to always uphold the IPCC is revealed. Guess that means if it ain’t Team-reviewed then it ain’t science, and if it ain’t science then it is exactly the type of Gleick-ness suitable for posting at WUWT.
Ah Jackie, it’s good to know you have such a high opinion of this site. So, do you have any suggestions for improving the content on this site, that you’ll gladly and openly reveal here in Anthony’s home on the internet, right in front of Anthony’s virtual face?

Clive Best
March 1, 2012 2:22 pm

Reply: Jack Greer et al.
The basic argument is that a prediction was made in 2007 for a future temperature trend. Now in 2012 we compare how well that prediction has performed from 2006 until 2011. The conclusion is that it has significantly overestimated all temperatures for the last 6 years. These are not random errors – the data are systematically low.
Each model should strive to fit the data independently. Taking an ensemble of different models with chosen parameters, then selecting the mean and using the spread as some sort of “model error”, is meaningless. Instead, each individual model should be run multiple times, varying climate sensitivity until it best reproduces the data. Fighting turf wars makes no sense. There is nothing wrong with being wrong. If it finally turns out that climate sensitivity is smaller than feared then we should celebrate. There is no need for gnashing of teeth! The next ice age is anyway only 2000-3000 years away!
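A minimal sketch of the tuning loop Best advocates, with a deliberately toy stand-in for a model (warming = sensitivity * log2 of the CO2 ratio); the data values and the model form are illustrative assumptions, not anything from AR4:

```python
# Toy illustration of tuning one model's climate sensitivity to the data,
# instead of averaging an ensemble and calling the spread a "model error".
# The "model" is a stand-in: warming = sensitivity * log2(CO2/CO2_0).
import numpy as np
from scipy.optimize import minimize_scalar

co2_ratio = np.array([1.10, 1.12, 1.13, 1.15, 1.17, 1.18])  # illustrative CO2/CO2_0, 2006-2011
observed = np.array([0.45, 0.40, 0.43, 0.44, 0.47, 0.34])   # illustrative anomalies, deg C

def toy_model(sensitivity):
    return sensitivity * np.log2(co2_ratio)

def misfit(sensitivity):
    return np.sum((toy_model(sensitivity) - observed) ** 2)  # least squares

best = minimize_scalar(misfit, bounds=(0.0, 6.0), method="bounded")
print(f"best-fit sensitivity: {best.x:.2f} deg C per CO2 doubling")
```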

JJ
March 1, 2012 2:35 pm

Terry Oldberg says:
March 1, 2012 at 12:57 pm

You and I are on the same page. The use of the “prediction/projection” distinction is a semantic argument by warmists to avoid responsibility for the scary stories they tell. In their lexicon, a “projection” is simply a “prediction” they made that turned out to be demonstrably false.
It is worth noting the official IPCC definitions, which suborn this nonsense:
Projection: “…a projection can be regarded as any description of the future and the pathway leading to it. However, a more specific interpretation has been attached to the term “climate projection” by the IPCC when referring to model-derived estimates of future climate.”
Forecast/Prediction: “When a projection is branded “most likely” it becomes a forecast or prediction. A forecast is often obtained using deterministic models, possibly a set of these, outputs of which can enable some level of confidence to be attached to projections.”
There is no formal distinction between an IPCC “climate projection” and a “forecast or prediction”; there is only their arbitrarily generated and ambiguously quantified “confidence” that a projection is “most likely”.
They are responsible for this mess … and for their predictions, whether they want to call them that or not.

Reply to  JJ
March 1, 2012 3:10 pm

JJ:
I’m not sure we’re on the same page. I’m saying that, as predictions are one-to-one with the events in a statistical population and as there is no population, there are no predictions. Thus, the use of the word “prediction” by Dr. Best, the IPCC and many others is false and misleading. While in the absence of a statistical population, a model cannot make predictions, it can make projections. Climatologists have muddied the waters by failing to draw a distinction between a “prediction” and a “projection” leading the naive to the false conclusion that the IPCC’s models can be statistically tested when they cannot be statistically tested. Do we agree?

MarkW
March 1, 2012 3:22 pm

On another blog, someone made a claim about global warming. I responded with actual data and a few links.
His response was something along the lines of:
Baseball has its umpires.
Football has its referees.
For science, we have the National Academy of Sciences.
Since the Academy has spoken, the issue is now settled.
He wouldn’t even debate the facts I presented. The NAS has spoken and that was it.

Jack Greer
March 1, 2012 4:36 pm

@Clive Best says: March 1, 2012 at 2:22 pm
Clive, please explain your logic for using a temperature data-point accuracy error value to estimate the statistical significance of how far adrift model projections are vs. real measurements for a temperature time series. Seriously, I’m curious.

Reply to  Jack Greer
March 2, 2012 12:44 am

Greer.
My logic is the following: Quantum Chromodynamics predicts the cross-section for gluon production in quark-quark scattering. The calculation is difficult but eventually makes precise predictions about 3-jet events in a particle accelerator. Physicists work for several years to build an experiment to measure the cross-section for 3-jet production. QCD is compared to the results, and they agree within measurement errors.
Global warming: At one instant in time – 2007 – some climate models which have been tuned to describe past temperature changes up to 2005 (or time series, if you prefer) are used to calculate forward in time to project/predict future temperature rises. These models include CO2 forcing trends and various assumptions for feedbacks, aerosols, albedo change etc.
Now in 2012 we can take the 6 new measurements since then and see how well the models performed. The answer in this case is: not very well. All the points range from 1 to 6 standard deviations below the prediction.
What I think you want me to say is something like: over a 50-year period the chi-squared agreement between climate models and the data is reasonably good, so the models are doing just fine. However, this is based on “hindcasting”, which I think is not quite the same thing. If I know the answer then it is easier to get the model to agree with the data. It is a bit like those mechanical models used to predict planetary orbits before Newton. By adding more cogs they got closer to an accurate description, yet the underlying physics is very simple – gravity.
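For readers wanting to see the shape of such a chi-squared comparison, here is a minimal sketch; every number in it is an illustrative placeholder, not an actual HadCRUT3 anomaly or AR4 projection:

```python
# Minimal sketch of a chi-squared test of model projections against the six
# post-2005 anomalies, given a per-point error. All values are illustrative.
import numpy as np
from scipy.stats import chi2

observed = np.array([0.45, 0.40, 0.43, 0.44, 0.47, 0.34])  # illustrative anomalies, deg C
projected = np.array([0.52, 0.55, 0.58, 0.61, 0.63, 0.66]) # illustrative scenario curve
sigma = 0.05                                               # quoted per-point error, deg C

chi_sq = np.sum(((observed - projected) / sigma) ** 2)
p_value = chi2.sf(chi_sq, df=len(observed))  # chance of a worse fit if the model is right

print(f"chi-squared = {chi_sq:.1f} with {len(observed)} points, p = {p_value:.2g}")
```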

March 1, 2012 4:52 pm


“The NAS has spoken and that was it.”
======================
That is a rather nebulous claim… Which NAS reports did he refer to? Or did he just cite the opinions/endorsements of some NAS admin staff? Because unless you are referencing a NAS study, all you are doing is citing the opinion of one or a few individuals who shuffle the paperwork for the NAS. I don’t think that point is well understood in some circles.

March 1, 2012 5:33 pm

The “day of reckoning” is already here because basic physics debunks AGW without having to wait for climate to confirm their errors.
Nothing less than this type of “thinking critically” is going to lead to valid conclusions in the climate debate. Not even the AGW proponents have been able to think critically enough like this example which, if you do think critically, will demonstrate why the greenhouse conjecture is false ….
Consider a metal plate enclosed on one side with a “perfect” insulator. It is dangling out of a satellite and collecting the full blast of the Sun on its uninsulated side. Say it is between the Sun and Earth and its plane is perpendicular to the line between Sun and Earth. Assume it is not affected by radiation from the satellite.
Say the Sun warms it to 330K at equilibrium. Now pull it into the shade of the satellite and turn it 90 degrees so it faces space and not the Earth or satellite. It will radiate virtually all the amount represented by the area under its Planck curve, like a blackbody. So it “wants” to do this. You time how long it takes to cool to, say, 280K and then to 200K.
Now repeat the experiment, but when it is in the shade of the satellite face it towards the Earth.
We know the radiation from the Earth has the same flux (close enough) as that from the Sun which it receives. So will it stay at 330K because of this? I suggest the IPCC’s energy diagram concepts might “prove” that it would in fact do so, because they imply that the temperature would be a function of the number of photons received.
No. It will cool more slowly than before, but how does it “know” to do so? It still “wants” to cool faster. The slower cooling cannot be caused by a transfer of thermal energy from the cooler Earth system. The only other way is for radiation from the Earth to interfere with the plate’s radiation, and this can only happen with a standing wave.
It will take longer to cool down to 280K and it will stop cooling when in equilibrium with the Earth.
But only some of the Earth’s radiation will have any effect. Its rate of cooling will be slowed down and gradually come to a halt at whatever is the weighted mean temperature of the Earth and atmosphere – perhaps around 255K.
So at equilibrium the Earth’s radiation is split into two “sections” – some which is below its mean temperature and thus has no effect because it forms standing waves with the plate at 255K, and some above that 255K “cut-off” which is maintaining the plate’s temperature also at 255K.
It was the UV, visible and maybe some near-IR in the Solar radiation that was able to heat it to 330K.
So heat is not automatically transferred wherever “photons” roam. Heat cannot be transferred by radiation from a cooler atmosphere to a warmer surface. All that happens is that standing waves of radiation send a message to the warmer surface to slow down its rate of cooling each evening, so our evenings are more pleasant for our night life.
But each molecule of carbon dioxide cannot play a greater role in producing standing waves than each molecule of water vapour which outnumbers it by at least 25:1 when water vapour is 1% of the atmosphere. So doubling carbon dioxide is like increasing water vapour from 1.00% to 1.04% of the atmosphere. Given that it can already get up around 4% without too much of a problem, let’s sleep comfortably in the not-much-warmer nights. Actually, carbon dioxide cools us more anyway by sending some of the Sun’s IR back to space.

GregO
March 1, 2012 6:37 pm

“Charlie A says:
March 1, 2012 at 9:22 am
The math on uncertainty and number of standard deviations, and therefore the probabilities are all incorrect.
See Lucia’s Blackboard for examples of the calculations done correctly. (The blog link is in the right-hand column of WUWT, under “Lukewarmers”.)”
Where? Can you provide a link? I am only vaguely familiar with Lucia’s blog. Love to have a look and explore in more detail what you are alluding to.
“Jack Greer says:
March 1, 2012 at 4:36 pm
@Clive Best says: March 1, 2012 at 2:22 pm
Clive, Please explain your logic for using a temperature data-point accuracy error value to estimate the statistical significance of how far adrift model projections are v. real measurement for a temperature time series. Seriously, I’m curious.”
Jack, can you elaborate on your question to Clive in a bit more detail – I’m interested in following the reasoning. Is it that the models themselves have multiply ambiguous initial-condition initialization issues, and hence (perhaps obviously) are not forecasts or predictions of a specific temperature (GATA; whatever…), and what Clive has done is treat them as if the models were a kind of “weather prediction”? Is that what is at issue?
If that is the case (or something like that), then what are we to make of, not necessarily cooling, but a distinct lack of current empirical evidence of warming as shown in multiple temperature measurements, sea levels, and ice extent? I do not mean this question as a challenge, but as probing how to look for future evidence to quantify climate variability.
Thanks in advance.

wermet
March 1, 2012 9:52 pm

suissebob says: March 1, 2012 at 4:18 am

I pointed a commenter to Gavin Schmidt’s site where a similar chart exists and stated:
“All surface temperature models are running warm compared to reality.”
So I did interpret the chart correctly, I’m not that thick after all!
He didn’t like my source:
“No I asked for scientific institution. That’s a dude on a blog.”

I had similar thoughts when I first saw this comment. However, then I reflected for a moment. I came to the conclusion that, given his opposition to new data, new analysis or simply other points of view, Gavin Schmidt does not really embody the traits required to be a true scientist. Most of the stuff I’ve seen from Schmidt reads more like religious diatribe than scientific argument. Therefore, he might as well be just “a dude on a blog.”
It’s sad that society is beginning to view all scientists with such a degree of distrust. The scientific thought process is becoming increasingly harder to find in the average person. I fear that we are entering a new age of increased superstition and belief in all manner of magic and nonsense.

kadaka (KD Knoebel)
March 1, 2012 9:59 pm

From GregO on March 1, 2012 at 6:37 pm:

“Charlie A says:
March 1, 2012 at 9:22 am
The math on uncertainty and number of standard deviations, and therefore the probabilities are all incorrect.
See Lucia’s Blackboard for examples of the calculations done correctly. (The blog link is in the right-hand column of WUWT, under “Lukewarmers”.)”
Where? Can you provide a link? I am only vaguely familiar with Lucia’s blog. Love to have a look and explore in more detail what you are alluding to.

Link to the Blackboard.
Here are some important things to know in the climate debates. Besides the still-brainwashed unquestioning people, there are four main groups.
1. Those who never believed in (C)AGW, as in the proposed mechanisms, physics, math, with the “tipping points” that will generate the catastrophic part. This includes geologists and other intelligent educated people who knew CAGW was highly improbable if not impossible. Some level of AGW may be happening, sure, but not CAGW.
2. Intelligent educated people who had previously accepted (C)AGW since it was “settled science,” and are upset that they were lied to, upset they let themselves be fooled without examining it more critically, who refuse to be fooled again, and examine anything from the (C)AGW-pushers hyper-critically with loads of derision when anything smells off. Anthony Watts is in this group. Again, might be AGW but not CAGW.
These two groups are the vast majority of those commenting on skeptic sites like WUWT, #2 gets rather vocal.
Then there are the “too smart” groups:
3. Know there’s something wrong, things don’t always add up, but they are certain they themselves are too smart to ever have been fooled by a con game. There must be truth in there, it’s just a matter of looking at the science correctly, doing all the math properly, and then one can see there’s truth in (C)AGW, even though the top scientists themselves don’t always get it perfectly correct, even though it doesn’t look as catastrophic as advertised. These are “lukewarmers” like Steven Mosher. They’re like people who bought into a “Get rich quick in real estate” training scheme, it’s not working for them but they keep coming back for another seminar, one more book, some paid coaching from a program “counselor”, since all they have to do is get enough knowledge and learn how to do it right and it’ll all make sense and they’ll be successful too.
4. They are too smart to ever be fooled, they are smarter than practically everyone else on the planet, therefore (C)AGW must be true because they are scientific and have accepted the science, and if you think it’s not then it’s obviously because you’re not as smart as them and can’t accept the scientific truth. That’s the “Pro AGW views” section of the blogroll in the right column, includes numerous “trolls” on WUWT, and certain malodorous individuals like Gleick and associates.
3 and 4 are at Lucia’s. Be wary of going there to learn how to “do the math properly”, as there you’ll find those trying to figure out better ways to do the math so it adds up to (C)AGW more conclusively, and those who insist the math has always perfectly added up to (C)AGW which you would realize if you were as smart as them.

wermet
March 1, 2012 10:09 pm

Terry Oldberg says: March 1, 2012 at 8:32 am

Dr. Best’s “predictions” are actually “projections” and while predictions are falsifiable, projections are not.

At some point we need to be able to judge whether the projections were useful or not. After we have compared the projections to the measured trajectory of the climate, we need to judge the correctness of these projections.
At this time, I believe that all the original Hansen (1988) and IPCC AR1 projections can be viewed as incorrect and henceforth abandoned. All have greatly overestimated the change in global temperature.
As an engineer, if I project that a process should proceed in one manner and another is observed, I must abandon my projection and find a better explanation. This is how engineering (and science) works.

Reply to  wermet
March 2, 2012 8:12 am

wermet:
I’m glad to see that you’ve made a distinction between “predictions” (which were made by none of the general circulation models referenced by AR4) and “projections” (which were made by all of them). In statistically testing a model, one compares the relative frequencies of the outcomes that were predicted (not projected) to the relative frequencies of the outcomes that were observed in the corresponding, specified statistical events. If there is not a match, the model is statistically falsified. If there is a match, the model is statistically validated. (There are some complications resulting from the phenomenon of sampling error that I’ll gloss over for the sake of brevity.)
To statistically test a model, one needs: a) observed statistical events and b) predictions of the outcomes of these events. Neither of these two ingredients is available with respect to any of the IPCC models. Thus, to statistically test the IPCC models would be impossible. Nonetheless, it is entirely possible to compare projected global average surface air temperatures to a selected global average surface air temperature time series. The IPCC calls such a comparison an “evaluation.” It is a widely misunderstood fact that in evaluating a model one neither statistically falsifies nor statistically validates it.
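To make the testing procedure Oldberg describes concrete, here is a minimal sketch in Python; the events, predictions and the above/below-median outcome scheme are invented for illustration and come from no actual model:

```python
# Minimal sketch of statistical testing as described above: compare the
# predicted relative frequencies of outcomes with the observed ones.
# Events and predictions below are invented for illustration.
from collections import Counter

# (predicted outcome, observed outcome) for each observed statistical event
events = [
    ("above-median", "above-median"),
    ("above-median", "below-median"),
    ("below-median", "below-median"),
    ("above-median", "above-median"),
    ("below-median", "above-median"),
]

n = len(events)
predicted = Counter(p for p, _ in events)
observed = Counter(o for _, o in events)

for outcome in sorted(set(predicted) | set(observed)):
    print(f"{outcome}: predicted {predicted[outcome] / n:.0%}, observed {observed[outcome] / n:.0%}")
# A mismatch in these relative frequencies is what would falsify the model.
```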

Editor
March 2, 2012 5:46 am

Terry Oldberg says:
March 1, 2012 at 12:57 pm

A “prediction” is an inference from the state of a system at the beginning of an independent statistical event to the state of the same system at the end of the same event.

I was going to take you to task for ignoring a “teachable moment” to explain the difference between prediction and projection. You partially redeemed yourself with this muddy definition of prediction.
Please address the following:
What do you consider a projection to be and do the AR4 projections fit your definition?
When the IPCC refers to a projection, is it as JJ describes at http://wattsupwiththat.com/2012/02/29/day-of-reckoning-draws-nearer-for-ipcc/#comment-909749 ? (JJ presents “the official IPCC definitions”.)
Group discussion can continue wrt how many taxes and regulations governments should commit to in response to climate projections. Note Terry has mentioned “Dr. Best’s ‘predictions’ are actually ‘projections’ and while predictions are falsifiable, projections are not.”

Reply to  Ric Werme
March 2, 2012 4:58 pm

Ric Werme:
Thanks for taking the time to reply and for giving me the opportunity to clarify. The idea of a “projection” is best developed in the context of the idea of a “prediction.” Thus, I’ll begin my response by developing the latter idea.
The variables of a predictive model are divided between dependent and independent variables. Each independent variable takes on one of its values at time t; I’ll call this the “start-time” for the associated event. Each dependent variable takes on one of its values at time t + delta t; I’ll call this the “end-time” of the associated event.
The start-time and end-time define the time-interval of an event. For the statistical independence of the various events, their time-intervals must not overlap. As the complete set of these events covers the time-line, the set of time-intervals must be a partition of the time-line.
Let X1 and X2 designate variables. The “Cartesian product” of the set of values that are taken on by X1 and the set of values that are taken on by X2 is the class of sets in which each set pairs a value that is taken on by X1 with a value that is taken on by X2. In mathematical jargon, each such pair is an example of a “tuple.” The Cartesian product is the complete set of tuples. By extension of this idea, one can form the Cartesian product of any number of variables.
An “Outcome” of an event is a condition on the Cartesian product of the values that are taken on by the associated model’s dependent variables; the complete set of Outcomes is a partition of this Cartesian product. A “Condition” of an event is a condition on the Cartesian product of the values that are taken on by the associated model’s independent variables; the complete set of Conditions is a partition of this Cartesian product. For a weather forecasting event, an example of a Condition is “cloudy.” An example of an Outcome is “rain in the next 24 hours.”
A Condition is a state of a system. Similarly, the Outcome is a state of a system. In climatology, such a system is called a “climate system.”
The point in time at which the Condition of a system is determined is the start-time for a statistical event. The point in time at which the Outcome is determined for the same system is the end-time for the same event. At the start-time, the Condition can be observed but the Outcome cannot have been observed. At the end-time, the Condition and Outcome can both have been observed. In the circumstance that the Condition and Outcome have both been observed, the associated event is said to have been “observed.” A set of observed events is an example of a statistical sample.
A “prediction” is an extrapolation from an observed Condition to an unobserved Outcome. For example, it is an extrapolation from the observed Condition “cloudy” to the unobserved Outcome “rain in the next 24 hours.” A predictive model is a procedure for making a conditional prediction or “predictive inference.” In this kind of inference, the observed Condition is a premise and the unobserved Outcome is the conclusion. Given the observed Condition, the unobserved Outcome is generally uncertain.
Usually, when an inference is to be made, there are many candidate inferences. Logic contains the rules by which the one inference that is correct may be discriminated from the many that are incorrect. Usually, it is found that information that would be needed for a deductive conclusion is missing. In this circumstance, it is necessary to replace the rules of the deductive logic by a generalization of them. Usually, this is accomplished by replacing the rule that every proposition has a “truth-value” by the rule that every proposition has a probability of being true. The resulting logic is called the “probabilistic logic.” The probabilistic logic includes the inductive as well as the deductive logic. The inductive logic differs from the deductive logic in the respect that information for a deductive conclusion is missing in the former branch of logic but not the latter.
In defining the various possible Outcomes for global climatology, one alternative is for each Outcome to be a different value of the GASAT. However, this alternative is not supported by the probabilistic logic, for propositions are generated for which there are no observed events, making the model non-falsifiable and thus unscientific. A generalization from the probabilistic logic provides partial support for this alternative, but I won’t adopt the alternative as the logic is complicated.
In the alternative that I shall adopt, the logic is probabilistic and the set of all possible Outcomes contains two Outcomes. One is that the numerical value of the GASAT exceeds its long-term median. The other is that the numerical value of the GASAT does not exceed its long-term median.
By the definition of “climatology,” a dependent variable of a climatological model is an average of the instantaneous values of a time series over a specified time-period. The canonical time-period is 30 years. For the sake of illustration, I shall adopt this time-period in defining my statistical population. In particular, the question of which of my two Outcomes has been realized by an observed event will be answered by 1) averaging the GASAT over the preceding 30 years and 2) determining whether this average exceeds or does not exceed the GASAT’s long-term median.
Let the “duration” of an event, designate the end-time less the start-time. The duration can be no less than the time-period of the averaging. For illustration, I’ll adopt the assumption that the duration is identical to the averaging period, that is, 30 years.
By various assumptions, I’ve pinned down the duration of an event but not the start-time or end-time. Specifying the start-time of a single event pins down the start-time and end-time in every event in the complete set of them. For illustration, I shall identify this start-time as the beginning of the year 1850; this choice is convenient for it marks the start of a GASAT time series that has been cited by the IPCC in arguing for CAGW.
It follows from the above that only 5 independent events can have been observed since the start of this time series. By the definition of a Condition, the minimum number of them is 2. Taking the number to be at this minimum, the number of condition-outcome pairs is 4. Each condition-outcome pair describes a different kind of event. Thus, 4 kinds of event are defined by my assumption. Each kind of event has its own relative frequency of occurrence. The relative frequency in the limit of observed events of infinite number is called the “limiting relative frequency.”
A model is tested by comparison of the model-predicted to the observed estimates of the limiting relative frequencies of the various kinds of events. My example holds events of 4 different kinds, each corresponding to a different condition-outcome pair. The five observed events are spread far too thinly among the four kinds of event for the construction of a statistically validated model that predicts with statistical significance. In practice, more than 100 observed events are required for this purpose.
A “projection” is formed by operating on the computed values of the GASAT that are emitted by a general circulation model. In this operation, values that are adjacent in time are connected by straight lines. This operation yields a function that maps times to projected GASAT values.
A similar operation on a GASAT time series yields a function that maps times to GASAT values, a minor fraction of which were observed. In an IPCC-style “evaluation,” a function of the latter type plus functions of the former type are plotted on X-Y coordinates. While this exercise may serve one or more purposes, among them is not the falsification or validation of the associated models.
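A minimal sketch, in Python, of the counting argument above, under the assumptions Oldberg states (non-overlapping 30-year events starting in 1850, two Conditions and two Outcomes); the numbers are his, the code is illustrative:

```python
# Minimal sketch of the event-counting argument above: non-overlapping
# 30-year events since 1850, and a 2 x 2 grid of condition-outcome pairs.
start_year, now, duration = 1850, 2012, 30

n_events = (now - start_year) // duration  # complete events observed so far
n_kinds = 2 * 2                            # (Condition, Outcome) pairs

print(f"observed independent events since {start_year}: {n_events}")   # 5
print(f"kinds of event to estimate frequencies for: {n_kinds}")        # 4
print(f"events per kind: {n_events / n_kinds:.2f}  (vs. >100 needed)")
```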

GregO
March 2, 2012 6:04 am

“kadaka (KD Knoebel) says:
March 1, 2012 at 9:59 pm ”
Thanks! I appreciate the education – I’ve only been at this since really, 2010 and am not an earth scientist or meteorologist, just an engineer and I find it interesting to do my best to follow the arguments.

Jack Greer
March 2, 2012 6:37 am

Clive Best says:
March 2, 2012 at 12:44 am
Greer.
My logic is the following: Quantum Chromodynamics predicts the cross-section for gluon production in quark-quark scattering. The calculation is difficult but eventually makes precise predictions about 3-jet events in a particle accelerator. Physicists work for several years to build an experiment to measure the cross-section for 3-jet production. QCD is compared to the results, and they agree within measurement errors.

Exactly. Shame on you, Dr. What do you intend to do about making a “clarifying statement”?
The models are largely not designed to project the timing of variability, often spurred by many natural oscillations, but rather to project a fit to longer-term trends on a multi-decade scale (about 30 years) – but then you already know this. The calculated confidence intervals are driven by the characteristics of the data, not a data-point measurement error. Sorry if the models don’t meet your granularity expectation, but anyone can manufacture outrage by starting from an unreasonable premise and then supporting it with false calculations.
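The statistical point in dispute can be sketched directly: the noise estimate for a short time series should come from the series’ own scatter, not from the per-point instrument error. A minimal illustration with made-up numbers:

```python
# Minimal sketch of the disputed point: the relevant noise for a six-point
# time series is its own residual scatter, not the 0.05 deg C per-point
# instrument error. The anomaly values below are illustrative.
import numpy as np

years = np.arange(2006, 2012)
anomalies = np.array([0.45, 0.40, 0.43, 0.44, 0.47, 0.34])  # illustrative series

slope, intercept = np.polyfit(years, anomalies, 1)
residuals = anomalies - (slope * years + intercept)
scatter = residuals.std(ddof=2)  # ddof=2 for the two fitted parameters

print("quoted per-point measurement error: 0.050 deg C")
print(f"interannual scatter about the trend: {scatter:.3f} deg C")
# If the scatter exceeds the instrument error, quoting shortfalls in units
# of 0.05 deg C overstates how many "standard deviations" the models miss by.
```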

Jim G
March 2, 2012 9:51 am

Very interesting discussion of statistical significance! As I tried to say before, confidence intervals and statistical significance are of little relevance in this discussion at any rate, as they are based upon numbers of observations relative to a population base. The myriad other types of possible errors, some of which I noted previously, cannot be taken into account in a confidence interval. So why even debate the issue? Bottom line: predictions or projections which are inaccurate, as pointed out by many here, are of little use, and those “models”, their assumptions and calculations should be discarded.

March 2, 2012 2:57 pm

MarkW says:
On another blog, someone made a claim about global warming. I responded with actual data and a few links.
His response was something along the lines of:
Baseball has its umpires.
Football has its referees.
For science, we have the National Academy of Sciences.
Since the Academy has spoken, the issue is now settled.
He wouldn’t even debate the facts I presented. The NAS has spoken and that was it.

Hmm, that would make the NAS the Vatican of the CAGW crowd. I was wondering when they’d get around to setting up a primary see.

Clive Best
March 3, 2012 7:27 am

@ Jack Greer:
“The models are largely not designed to project the timing of variability, often spurred by many natural oscillations, rather to project fit to longer term trends on a multiple decade scale (about 30 years)”.
Jack: We are making progress. The basic problem is that in the public’s mind there is a strong impression of a direct (linear) relationship between CO2 emissions and temperature. This impression has been put there by scientists who should know better, and by various pressure groups. Unfortunately nature doesn’t quite work that way. It seems more likely that there are regular natural climate variations which are superimposed on an underlying AGW trend. This probably produced the rapid warming seen in the 90s, implying to some a large climate sensitivity, because they were also convinced of a single driver for climate – CO2. The models then project/predict steep warming trends. Now, however, the data show that we seem to have entered a natural cooling trend which may last another 10 years. In this case it will become ever harder to convince the public of the need for costly carbon emission cuts. It would be far better to come clean now and admit that natural processes apart from CO2 are also important in determining climate. Note also that 30 years is about the length of one scientific career, which seems a bit too fortuitous!
Fitting the temperature data to a log dependency and a 60-year temperature oscillation leads to an underlying AGW trend of 2.5 Ln(C/C0), giving a total rise by the end of this century of about 0.6 deg C above current values for scenario B1. This is about half the values from AR4 models.
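A minimal sketch of the fit Best describes, anomaly = a*ln(C/C0) plus a 60-year oscillation, using synthetic stand-ins for the real temperature and CO2 series; the functional form follows the comment, everything else is assumed for illustration:

```python
# Minimal sketch of the fit described above: a log-CO2 term plus a 60-year
# oscillation. The CO2 curve and "observations" are synthetic stand-ins.
import numpy as np
from scipy.optimize import curve_fit

years = np.arange(1900, 2012)
co2 = 295.0 + 0.008 * (years - 1900) ** 2  # illustrative CO2 curve, ppm
c0 = co2[0]

def model(xdata, a, b, phase):
    t, c = xdata
    return a * np.log(c / c0) + b * np.sin(2 * np.pi * (t - phase) / 60.0)

rng = np.random.default_rng(0)
truth = model((years, co2), 2.5, 0.1, 1940.0)    # the comment's 2.5 Ln(C/C0) trend
obs = truth + rng.normal(0.0, 0.08, years.size)  # add illustrative noise

params, _ = curve_fit(model, (years, co2), obs, p0=(1.0, 0.1, 1940.0))
print(f"fitted log coefficient: {params[0]:.2f} (comment quotes 2.5)")
```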

Reply to  Clive Best
March 3, 2012 9:21 am

Clive Best (March 3, 2012):
With the availability of only 5 thirty-year observed independent events going back to the start of HadCRUT3, it would be impossible to link the CO2 level or any other independent variable to the global average surface air temperature. People who establish such a link do so by engaging in the human proclivity for interpreting noise as signal.
To separate the signal from the noise, we need at least 100 observed events, and 10,000 would be a lot better, but only 5 thirty-year observed events are available from HadCRUT3. The oldest temperature time series, the Central England, provides only 11 observed events and does not extend over the globe. Thus, to have any hope of constructing a model that is predictive of global average surface air temperatures over 30-year spans, we are forced to delve into the paleo data despite the numerous shortcomings of these data.

Editor
March 3, 2012 10:14 pm

Terry Oldberg says:
March 2, 2012 at 4:58 pm

The idea of a “projection” is best developed in the context of the idea of a “prediction.” Thus, I’ll begin my response by developing the latter idea.

I wasn’t quite expecting such a theoretical response! Between that and your web site there’s a lot to absorb, and I won’t have too much time for that in the next couple of days, as I’m putting together a post about 50-year-old conditions that led to a three-day-long coastal storm outcome. 🙂
Two things:
1) What is GASAT? The only thing that seems to fit would take a lot of understanding.
2) Is this how the IPCC defines “projection”?
-Ric

Terry Oldberg
March 4, 2012 9:12 am

Ric Werme (March 3, 2012 at 10:14 pm):
Thanks for taking the time to reply. While you’re boning up, you might also try the series of three articles that I’ve published at Climate Etc. under the title “The Principles of Reasoning.” There are three parts. Part I is at http://judithcurry.com/2010/11/22/principles-of-reasoning-part-i-abstraction/, Part II is at http://judithcurry.com/2010/11/25/the-principles-of-reasoning-part-ii-solving-the-problem-of-induction/ and Part III is at http://judithcurry.com/2011/02/15/the-principles-of-reasoning-part-iii-logic-and-climatology/ . Part III covers the topic we are discussing in detail.
Answers to your questions follow:
1) GASAT is the acronym for the global average surface air temperature.
2) Sometimes, the IPCC defines “projection” as I’ve defined it. In “Spinning the Climate,” Vincent Gray reports that the IPCC established a policy of distinguishing between the idea referenced by “prediction” and the idea referenced by “projection” but failed to enforce this policy uniformly; a consequence was for the two ideas to be conflated in IPCC documents. In a paper published circa 2007, Kesten Green and Scott Armstrong report that they polled a number of professional climatologists, most of whom were IPCC authors or reviewers. They found that most conflated the two ideas. These data suggest that the two ideas are conflated in the minds of most professional climatologists.
When distinct ideas are conflated, a consequence is for one of Aristotle’s three laws of thought to be negated; this is the law of non-contradiction. By using the negated law as a premise to an argument, one can provide a specious proof of a conclusion that is false or unproved. In arguing for CAGW, the IPCC employs this kind of specious argument. Anthony Watts is evidently unaware of the importance of distinguishing the idea referenced by “projection” and the idea referenced by “prediction,” for he frequently and persistently publishes articles that fail to make the distinction in his blog.

March 4, 2012 7:43 pm

This is what kills me about the CAGW warmunists..
The IPCC makes up these inane scenarios with huge temp differentials, and when actual temperature data is much lower than the scenario parameters that come closest to reflecting reality, CAGW warmunists point to lower temp scenarios (which are STILL way off) that don’t come even close to matching the scenario parameters and say, “Look! we’re only off by 2 standard deviations!!!!.”
That’s like shooting a bunch of arrows all over a very large barn wall, randomly painting a circle somewhere on the side of the barn, laying down the paintbrush and screaming, “BULLSEYE!!!!”
The IPCC and warmunists are getting to be so tedious.

March 19, 2012 9:04 pm

Excellent. First, I must ask: Can we see the code?
Second: is the solid black line in figure 1 the instrumental record, and if so, what dataset?
Third: This is CMIP3. Do you have any information, or even intuition, about corresponding results for CMIP5? I’m particularly interested because (AIUI) CMIP5 runs out past 2100.
Fourth: I would prefer a more continuous representation, although I don’t have any particular suggestions about how to produce one compactly. These figures show me distributions for each SRES scenario at the 2C/3C/4C milestones, but if I’m interested in 1.5C or 2.5C, I’m stymied. I guess a variation on Fig 1 could have coloured bands, but you’d need some way to distinguish the scenarios. Maybe a horizontal offset?
Fifth: in figure 2, the coloured bands seem to run all the way to the right. Shouldn’t they stop at the point at which the last model run reaches that temperature (e.g. the 2C band for A2 should stop at, I’m guessing, 2065)?
Sixth: in figure 2, A2 looks pretty linear (2C to 3C looks like about the same duration as 3C to 4C). Is that so?