
Guest post by Pat Frank
A summary of results from phase 5 of the Coupled Model Intercomparison Project (CMIP5) has come out. [1] CMIP5 evaluates the state-of-the-art general circulation climate models (GCMs) that will be used in the IPCC’s forthcoming Fifth Assessment Report (AR5) to project climate futures. The fidelity of these models tells us how far we may believe their projected climate futures. This essay presents a small step in evaluating the CMIP5 GCMs, and the reliability of the IPCC conclusions based on them.
I also wanted to see how the new GCMs did compared to the CMIP3 models described in 2003. [2] The CMIP3 models were used in IPCC AR4. Those models produced an estimated 10.1% error in cloudiness, which translated into an uncertainty in cloud feedback of (+/-)2.8 Watts/m^2. [3] That uncertainty is equivalent to (+/-)100% of the excess forcing due to all the GHGs humans have released into the atmosphere since 1900. Cloudiness uncertainty, all by itself, is as large as the entire effect the IPCC is trying to resolve.
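A back-of-envelope check of those numbers, sketched in Python. This is my inference of the arithmetic, not the author’s stated derivation: I assume the fractional cloudiness error scales a net cloud radiative forcing magnitude of roughly 27.5 W/m^2 (cf. Stephens [3]).

```python
# Hedged check (my inference, not the author's stated derivation): scale an
# assumed net cloud radiative forcing magnitude (~27.5 W/m^2) by the 10.1%
# fractional cloudiness error.
net_cloud_forcing = 27.5  # W/m^2, assumed magnitude (cf. Stephens [3])
cloudiness_error = 0.101  # the 10.1% fractional cloudiness error

print(round(cloudiness_error * net_cloud_forcing, 1))  # 2.8 -> the quoted (+/-)2.8 W/m^2
# ~2.8 W/m^2 is also roughly the excess GHG forcing since 1900, hence the
# "(+/-)100%" comparison in the text.
```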
If we’re to know how reliable model projections are, the energetic uncertainty, e.g., due to cloudiness error, must be propagated into GCM calculations of future climate. From there, the uncertainty should appear as error bars that condition the projected future air temperature. However, inclusion of true physical error bars never seems to happen.
Climate modelers invariably publish projections of temperature futures without any sort of physical uncertainty bars at all. Here is a very standard example, showing several GCM projections of future arctic surface air temperatures. There’s not an uncertainty bar among them.
The IPCC does the same thing, brought to you here courtesy of an uncritical US EPA. The shaded regions in the IPCC SRES graphic refer to the numerical variability of individual GCM model runs. They have nothing to do with physical uncertainty or with the reliability of the projections.
Figure S1 in Jiang, et al.’s Auxiliary Material [1] summarized how accurately the CMIP5 GCMs hindcasted global cloudiness. Here it is:
Here’s what Jiang, et al., say about their results, “Figure S1 shows the multi-year global, tropical, mid-latitude and high-latitude mean TCFs from CMIP3 and CMIP5 models, and from the MODIS and ISCCP observations. The differences between the MODIS and ISCCP are within 3%, while the spread among the models is as large as ~15%.” “TCF” is Total Cloud Fraction.
Including the CMIP3 model results with the new CMIP5 outputs conveniently allows a check for improvement over the last 9 years. It also allows a comparison of the official CMIP3 cloudiness error with the 10.1% average cloudiness error I estimated earlier from the outputs of equivalent GCMs.
First, here are tables of the CMIP3 and CMIP5 cloud predictions and their associated errors. Observed cloudiness was taken as the average of the ISCCP and MODIS Aqua satellite measurements (the last two bar groupings in Jiang Figure S1), which was 67.7% cloud cover. GCM abbreviations follow the references below.
Table 1: CMIP3 GCM Global Fractional Cloudiness Predictions and Error
| Model Source | CMIP3 GCM | Global Average Cloudiness (%) | Fractional Global Cloudiness Error |
|---|---|---|---|
| NCC | bcm2 | 67.7 | 0.00 |
| CCCMA | cgcm3.1 | 60.7 | -0.10 |
| CNRM | cm3 | 73.8 | 0.09 |
| CSIRO | mk3 | 65.8 | -0.03 |
| GFDL | cm2 | 66.3 | -0.02 |
| GISS | e-h | 57.9 | -0.14 |
| GISS | e-r | 59.8 | -0.12 |
| INM | cm3 | 67.3 | -0.01 |
| IPSL | cm4 | 62.6 | -0.08 |
| MIROC | miroc3.2 | 54.2 | -0.20 |
| NCAR | ccsm3 | 55.6 | -0.18 |
| UKMO | hadgem1 | 54.2 | -0.20 |
| Avg. | | 62.1 | R.M.S. Avg. ±12.1% |
Table 2: CMIP5 GCM Global Fractional Cloudiness Predictions and Error
| Model Source | CMIP5 GCM | Global Average Cloudiness (%) | Fractional Global Cloudiness Error |
|---|---|---|---|
| NCC | noresm | 54.2 | -0.20 |
| CCCMA | canesm2 | 61.6 | -0.09 |
| CNRM | cm5 | 57.9 | -0.14 |
| CSIRO | mk3.6 | 69.1 | 0.02 |
| GFDL | cm3 | 71.9 | 0.06 |
| GISS | e2-h | 61.2 | -0.10 |
| GISS | e2-r | 61.6 | -0.09 |
| INM | cm4 | 64.0 | -0.06 |
| IPSL | cm5a | 57.9 | -0.14 |
| MIROC | miroc5 | 57.0 | -0.16 |
| NCAR | cam5 | 63.5 | -0.06 |
| UKMO | hadgem2-a | 54.2 | -0.20 |
| Avg. | | 61.2 | R.M.S. Avg. ±12.4% |
Looking at these results, some models improved between 2003 and 2012, some did not, and some apparently became less accurate. The error fractions are one-dimensional numbers condensed from errors in three dimensions. They do not mean that a given GCM predicts a constant fraction of cloudiness above or below the observed cloudiness. GCM cloud predictions weave about the observed cloudiness in all three dimensions, predicting more cloudiness here and less there. [4, 5] The GCM fractional errors are the positive and negative cloudiness errors integrated over the entire globe, compressed into single numbers. The total average error of all the GCMs is represented as the root-mean-square (R.M.S.) average of the individual GCM fractional errors.
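A quick check of the R.M.S. averages, using the cloud fractions tabulated above (a minimal Python sketch; it reproduces the quoted figures to within rounding of the table entries):

```python
# R.M.S. average of the per-model fractional errors from Tables 1 and 2.
import math

obs = 67.7  # observed cloudiness: average of ISCCP and MODIS Aqua (%)
cmip3 = [67.7, 60.7, 73.8, 65.8, 66.3, 57.9, 59.8, 67.3, 62.6, 54.2, 55.6, 54.2]
cmip5 = [54.2, 61.6, 57.9, 69.1, 71.9, 61.2, 61.6, 64.0, 57.9, 57.0, 63.5, 54.2]

def rms_fractional_error(cloud_fractions, observed):
    """Root-mean-square of the per-model fractional errors (model - obs)/obs."""
    errors = [(c - observed) / observed for c in cloud_fractions]
    return math.sqrt(sum(e * e for e in errors) / len(errors))

print(f"CMIP3: +/-{rms_fractional_error(cmip3, obs):.1%}")  # ~12.0%
print(f"CMIP5: +/-{rms_fractional_error(cmip5, obs):.1%}")  # ~12.3%
```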
CMIP5 cloud cover was averaged over 25 model years (1980-2004), while the CMIP3 averages represent 20 model years (1980-1999). GCM average cloudiness was calculated from “monthly mean grid-box averages.” So a 20-year global average comprised 12×20 = 240 monthly global cloudiness realizations, reducing GCM random calculational error by a factor of at least √240 ≈ 15.
It’s safe to surmise, therefore, that the residual errors recorded in Tables 1 and 2 represent systematic GCM cloudiness error, similar to the cloudiness error found previously. The average GCM cloudiness systematic error has not diminished between 2003 and 2012. The CMIP3 models averaged 12.1% error, which is not significantly different from my estimate of 10.1% cloudiness error for GCMs of similar sophistication.
We can now proceed to propagate GCM cloudiness error into a global air temperature projection, to see how uncertainty grows over GCM projection time. GCMs typically calculate future climate in a step-wise fashion, month-by-month to year-by-year. In each step, the climate variables calculated in the previous time period are extrapolated into the next time period. Any errors residing in those calculated variables must propagate forward with them. When cloudiness is extrapolated from year to year in a climate projection, the 12.4% average cloudiness error in CMIP5 GCMs represents an uncertainty of (+/-)3.4 Watts/m^2 in climate energy. [3] That (+/-)3.4 Watts/m^2 of energetic uncertainty must propagate forward in any calculation of future climate.
Climate models represent bounded systems. Bounded variables are constrained to remain within certain limits set by the physics of the system. However, systematic uncertainty is not bounded. In a step-wise calculation, any systematic error or uncertainty in input variables must propagate into output variables as the root-sum-square, (+/-)sqrt[sum of (per-step error)^2]. The accumulated uncertainty grows with each new step. This condition is summarized in the next Figure.
The left picture illustrates the way a GCM projects future climate in a step-wise fashion. [6] The white rectangles represent a climate propagated forward through a series of time steps. Each prior climate includes the variables that calculationally evolve into the climate of the next step. Errors in the prior conditions, uncertainties in parameterized quantities, and limitations of theory all produce errors in projected climates. The errors in each preceding step of the evolving climate calculation propagate into the following step.
This propagation produces the growth of error shown in the right side of the Figure, illustrated with temperature. The initial conditions produce the first temperature. But errors in the calculation produce high and low uncertainty bounds (e_T1, etc.) on the first projected temperature. The next calculational step produces its own errors, which add in and expand the high and low uncertainty bounds around the next temperature.
When the error is systematic, prior uncertainties do not diminish like random error. Instead, uncertainty propagates forward, producing a widening spread of uncertainty around time-stepped climate calculations. The wedge on the far right summarizes the way propagated systematic error increases as a step-wise calculation is projected across time.
When the uncertainty due to the growth of systematic error becomes larger than the physical bound, the calculated variable no longer has any physical meaning. In the Figure above, the projected temperature would become no more meaningful than a random guess.
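A minimal numeric sketch of that root-sum-square growth against a physical bound. The bound value here is illustrative only (roughly the magnitude of net cloud forcing), not a number taken from the text:

```python
# Step-wise propagation of a constant per-step systematic uncertainty.
import math

def propagated(per_step_errors):
    """Total uncertainty after a step-wise run: +/-sqrt(sum of squared per-step errors)."""
    return math.sqrt(sum(e * e for e in per_step_errors))

sigma = 3.4          # W/m^2 per step: the CMIP5 average cloudiness-error equivalent
physical_bound = 28  # W/m^2, illustrative bound on the physical variable
for n in (1, 10, 25, 70):
    total = propagated([sigma] * n)
    note = "  <-- exceeds the bound; no physical meaning left" if total > physical_bound else ""
    print(f"{n:3d} steps: +/-{total:4.1f} W/m^2{note}")
```

Unlike the bounded physical variable, the propagated uncertainty grows without limit as the steps accumulate.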
With that in mind, the CMIP5 average error in global cloudiness feedback can be propagated into a time-wise temperature projection. This will show the average uncertainty in projected future air temperature due to the errors GCMs make in cloud feedback.
Here, then, is the CMIP5 12.4% average systematic cloudiness error propagated into everyone’s favorite global temperature projection: Jim Hansen’s famous doomsday Figure; the one he presented in his 1988 Congressional testimony. The error bars were calculated using the insight provided by my high-fidelity GCM temperature projection emulator, as previously described in detail (892 kB pdf download).
The GCM emulator showed that, within GCMs, global surface air temperature is merely a linear function of net GHG W/m^2. The uncertainty in calculated air temperature is then just a linear function of the uncertainty in W/m^2. The average GCM cloudiness error was propagated as:
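The propagation equation itself appeared as a graphic in the original post. Judging from the description just below, it has the standard root-sum-square form. A plain-text reconstruction, in my own notation rather than necessarily the author’s exact expression:

uncertainty in projected air temperature at year n = (+/-)sqrt{ sum[i = 1 to n] of [ (air temperature)_i x (3.4 W/m^2)/(total forcing)_i ]^2 }

That is, each projection year contributes a temperature uncertainty equal to the emulated (linear) air temperature scaled by that year’s fractional forcing uncertainty, and the annual contributions add in quadrature.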
“Air temperature” is the surface air temperature in Celsius. Air temperature increments annually with increasing GHG forcing. “Total forcing” is the forcing in W/m^2 produced by CO2, nitrous oxide, and methane in each of the “i” projection years. Forcing increments annually with the levels of GHGs. The equation is the standard propagation of systematic errors (very nice lecture notes here (197 kB pdf)). And now, the doomsday prediction:
The Figure is self-explanatory. Lines A, B, and C in part “a” are the projections of future temperature as Jim Hansen presented them in 1988. Part “b” shows the identical lines, but now with uncertainty bars propagated from the CMIP5 12.4% average systematic global cloudiness error. The uncertainty increases with each annual step. It is safely assumed here that Jim Hansen’s 1988 GISS Model II was not up to CMIP5 standards. So, the uncertainties produced by CMIP5 model inaccuracies are a true minimum estimate of the uncertainty bars around Jim Hansen’s 1988 temperature projections.
The large uncertainty by year 2020, about (+/-)25 C, has compressed the three scenarios so that they all appear to nearly run along the baseline. The uncertainty of (+/-)25 C does not mean I’m suggesting that the model predicts air temperatures could warm or cool by 25 C between 1958 and 2020. Instead, the uncertainty represents the pixel size resolvable by a CMIP5 GCM.
Each year, the pixel gets larger, meaning the resolution of the GCM gets worse. After only a few years, the view of a GCM is so pixelated that nothing important can be resolved. That’s the meaning of the final (+/-)25 C uncertainty bars. They mean that, at a distance of 62 years, the CMIP5 GCMs cannot resolve any air temperature change smaller than about 25 C (on average).
Under Jim Hansen’s forcing scenario, after one year the (+/-)3.4 Watts/m^2 cloudiness error already produces a single-step pixel size of (+/-)3.4 C. In short, CMIP5-level GCMs are completely unable to resolve any change in global surface air temperature that might occur at any time, due to the presence of human-produced carbon dioxide.
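A hedged check of those “pixel sizes”: assuming (as the single-step figure above implies) that each annual step contributes (+/-)3.4 C, accumulated in quadrature per the propagation rule, the quoted numbers come out to order of magnitude:

```python
# Quadrature accumulation of an assumed constant +/-3.4 C annual uncertainty.
import math

per_step_C = 3.4
print(f"After  1 year:  +/-{per_step_C * math.sqrt(1):.1f} C")   # +/-3.4 C, as quoted
print(f"After 62 years: +/-{per_step_C * math.sqrt(62):.1f} C")  # ~ +/-26.8 C, i.e. "about 25 C"
```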
Likewise, Jim Hansen’s 1988 GISS Model II was unable to predict anything about future climate. Neither can his 2012 version. The huge uncertainty bars make his 1988 projections physically meaningless. Arguing about which of them tracks the global anomaly trend is like arguing about angels on the heads of pins. Neither argument has any factual resolution.
Similar physical uncertainty bars should surround any GCM ensemble projection of global temperature. However, the likelihood of seeing them in the AR5 (or in any peer-reviewed climate journal) is vanishingly small.
Either way, CMIP5 climate models are unable to predict global surface air temperature even one year in advance. Their projections have no particular physical meaning and are entirely unreliable.
Following on from the title of this piece: there will be no reason to believe any – repeat, any – CMIP5-level future climate projection in the AR5.
Climate projections that may appear in the IPCC AR5 will be completely vacant of physical credibility. Attribution studies focusing on CO2 will necessarily be meaningless. This failure is at the center of AGW Climate Promotions, Inc.
Anyone, in short, who claims that human-produced CO2 has caused the climate to warm since 1900 (or 1950, or 1976) is implicitly asserting the physical reliability of climate models. Anyone seriously asserting the physical reliability of climate models literally does not know what they are talking about. Recent pronouncements about human culpability should be seen in that light.
The verdict against the IPCC is identical: they do not know what they’re talking about.
Does anyone think that will stop them talking?
References:
1. Jiang, J.H., et al., Evaluation of cloud and water vapor simulations in CMIP5 climate models using NASA “A-Train” satellite observations. J. Geophys. Res., 2012. 117, D14105.
2. Covey, C., et al., An overview of results from the Coupled Model Intercomparison Project. Global Planet. Change, 2003. 37, 103-133.
3. Stephens, G.L., Cloud Feedbacks in the Climate System: A Critical Review. J. Climate, 2005. 18, 237-273.
4. AchutaRao, K., et al., Report UCRL-TR-202550. An Appraisal of Coupled Climate Model Simulations, D. Bader, Editor. 2004, Lawrence Livermore National Laboratory: Livermore.
5. Gates, W.L., et al., An Overview of the Results of the Atmospheric Model Intercomparison Project (AMIP I). Bull. Amer. Met. Soc., 1999. 80(1), 29-55.
6. Saitoh, T.S. and S. Wakashima, An efficient time-space numerical solver for global warming, in Energy Conversion Engineering Conference and Exhibit (IECEC) 35th Intersociety. 2000, IECEC: Las Vegas. p. 1026-1031.
Climate Model Abbreviations
BCC: Beijing Climate Center, China.
CCCMA: Canadian Centre for Climate Modeling and Analysis, Canada.
CNRM: Centre National de Recherches Météorologiques, France.
CSIRO: Commonwealth Scientific and Industrial Research Organization, Queensland Australia.
GFDL: Geophysical Fluid Dynamics Laboratory, USA.
GISS: Goddard Institute for Space Studies, USA.
INM: Institute for Numerical Mathematics, Russia.
IPSL: Institut Pierre Simon Laplace, France.
MIROC: U. Tokyo/Nat. Ins. Env. Std./Japan Agency for Marine-Earth Sci.&Tech., Japan.
NCAR: National Center for Atmospheric Research, USA.
NCC: Norwegian Climate Centre, Norway.
UKMO: Met Office Hadley Centre, UK.
Very good general points. In researching the climate chapter of the forthcoming Arts of Truth, I dug into cloud trends from ICOADS and ISCCP versus what is modeled in CMIP3. Observationally, TCF has increased slightly since 1965 (a baseline chosen, per NASA, to include ‘primarily’ AGW). There has been no statistically significant change in the proportions of low-, medium-, or high-altitude clouds at equatorial or mid latitudes. (High latitudes were undersampled prior to satellites, so those trends are less certain.) ISCCP also shows optical depth unchanging.
By contrast, 12 of 13 GCMs examined have TCF decreasing. All CMIP3 GCMs have clouds as a net positive feedback (the cooling effect diminishes with temperature increase). On average, low cloudiness is underestimated by 25%, and medium-altitude cloudiness is misestimated by up to 50%. They also underestimate precipitation, which suggests they underestimate optical depth, and the negative feedback from it, by a factor of up to four. Each of these findings is from peer-reviewed literature post-AR4. It will be interesting to see how much of this makes it into AR5 consideration. Probably very little.
All references, plus data charts and more details, are in the book.
For those arguing semantically whether the models are intended to provide predictions or projections, consider the somewhat archaic definition of a “projector” from dictionary.com.
“a person who forms projects or plans; schemer”
john says:
August 24, 2012 at 3:29 am
JP Morgan also ran the Oil for Food Program.
___________________________
Maurice Strong of the UN Earth Summits fame was implicated in the “oil for food” scandal so we are really talking about a rather small number of people involved in all these scams. A bit of poking around the internet turns up another player.
So who is BNP bank? “….Europe’s leading provider of banking and financial services, has four domestic retail banking markets in Europe, namely in Belgium, France, Italy and Luxembourg.”
Since when did banks get voted overseers of the human race?
The President of BNP Paribas is Michel Pébereau, a very powerful man it would seem. So I guess that does make him our “better”.
A specialist in Science Fiction??? So now we know how they are coming up with all this stuff!
When you read this Annual Report, remember commodities futures trading is linked to the food riots in 2008, and derivative products are linked to the economic crash and Foreclosuregate in the USA.
Do not forget ENRON was neck deep in the Cap ‘n Trade Scam. From the National Review no less Al Gore’s Inconvenient Enron
And Pat Frank, thanks for bringing us up to date on the latest iteration of the climate models.
As far as I am concerned they are way way ahead of themselves. They have not even identified all the various possible parameters yet. Sort of like a blind man building a model of an elephant who hasn’t gotten past investigating the tail and thinks it looks like a snake.
Robert Brown says:
August 24, 2012 at 6:58 am
“However, the basic message of this is sound enough — there should be error estimates on the “projections” of GCMs — at the very least a Monte Carlo of the spread of outcomes given noise on the parameters.”
I want to add a word or two for emphasis. There must be some standards that can be applied to models. Maybe the Monte Carlo is sufficient for now. However, the more important problem that you see is the following:
“…the things that cause instability on this scale do not appear to be any of the parameters included in the GCMs anyway because we don’t really know what they are or how they work.”
I agree fully. The fundamental problem is a lack of understanding of the physical facts.
Yet some very accomplished scientists who have turned to modeling seem to believe that if the model is complicated enough, then it must contain a physical theory, and that physical theory must comprehend the facts. I think that this kind of mystical reasoning is worthy to be named a fallacy. I propose that it be called the Congressman Akin fallacy, because it parallels his reasoning to the conclusion that the female anatomy can distinguish among various kinds of sexual assault.
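For readers unfamiliar with the parameter-noise Monte Carlo idea quoted above, a minimal sketch in Python. Everything here is invented for illustration; a real GCM run would stand in for toy_model, and the 12.4% is just the cloudiness error reused as example noise:

```python
# Perturb an uncertain parameter, rerun the model many times, report the spread.
import random
import statistics

def toy_model(cloud_param):
    # Stand-in for a model run: maps an uncertain input to a projected anomaly.
    return 2.0 * cloud_param

runs = [toy_model(random.gauss(1.0, 0.124)) for _ in range(10000)]
print(statistics.mean(runs), statistics.stdev(runs))  # the spread is the uncertainty estimate
```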
Terry Oldberg,
I think you are beating the prediction vs projections drum to death. I really don’t care if the IPCC et al. call what they put out predictions, projections, forecasts, or tea leaves. The point is that what they said in the past was going to happen didn’t, and what they say now is going to happen probably won’t. In the court of public opinion the semantics of prediction vs projection are immaterial; the only thing that matters is “did they get it right or wrong”. They got it wrong. Repeatedly. Beat that drum. Loudly and repeatedly.
“I think you are beating the prediction vs projections drum to death”
Seconded, and such petty nitpicking misses the point entirely – what the climate models are trying to do, in the way that they are trying to do it, is mathematically impossible.
rgbatduke says:
August 24, 2012 at 8:08 am
“Sorry satellites don’t cut the mustard……”
Well, I do get your point, Professor Bob, but just what is it that satellites, either geo-stationary or mobile, are measuring?
Common sense tells me that any satellite is receiving some species of electromagnetic radiation that is proceeding upward/outward from the earth (above the clouds).
Absolutely none of that radiation can have any effect on the earth or its climate or weather.
What we would like to know is what the sum total of all species of EM radiation proceeding downward / earthward underneath the clouds is; and of course where it is, so we can predict its fate.
If the satellite network imitates an integrating sphere, then the problem is solved as regards total planet emission, and subtracting from TSI we have total planet absorption. But just exactly where all that energy went or is, would seem to be still largely unknown. If not, then I will withdraw my objection.
davidmhoffer says:
Terry Oldberg,
I think you are beating the prediction vs projections drum to death.
Don’t blame Terry. It is the chicken-feces pseudoscientists of the IPCC that are constantly equivocating on those terms. Terry is just pointing that out. It is relevant to this discussion, as every time someone brings up a point such as Pat’s, the snotty reply from on-high at Team headquarters is generally some version of “those aren’t predictions, those are projections.”
Terry gives us the appropriate response – “Then they aren’t science, so go cluck yourself.”
Pardon the fowl language.
Regarding the “assumption” in GCMs that water feedback (clouds) is “positive”, which the general chatter suggests is the case: well, I’ve never looked at a GCM; I don’t have a terrafloppy computer to run one on.
Why the hell is ANYONE constructing ANY model of ANYTHING making any assumption, a priori, about the consequence of some circuit connections?
If you assemble some physics model representing real physical phenomena that can be described by some well-established mathematical equations, you do not presume that this or that “feedback” or “forcing” or “driver” or “power source” or “whatever” is this or that polarity.
You simply wire up the components of the system to replicate the real connections of the actual real physical world system, and then run it.
The system model itself will decide whether some connection is positive or negative; NOT you !
In the case of some system connection that DOES actually form a positive feedback loop, well the system can overshoot, or even oscillate, or operate with increased “gain”, depending on the parameters, but the running system model will decide if such and such is a positive feedback or not, or even if it is a feedback.
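A toy version of that point (all numbers invented for illustration): wire the feedback term into a simple relaxation model and let the run reveal its sign, rather than asserting it.

```python
# The feedback's net effect is not assumed; it falls out of the converged response.
def run(feedback_gain, forcing=1.0, steps=500):
    T = 0.0
    for _ in range(steps):
        # Relax toward balance between forcing, feedback, and restoring response.
        T += 0.1 * (forcing + feedback_gain * T - T)
    return T

baseline = run(0.0)   # no feedback wired in
coupled = run(0.3)    # feedback wired in, sign unspecified a priori
print("net feedback is", "positive" if coupled > baseline else "negative")
```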
If some modeller has to be told just what some piece of the kit is, and where it assembles to in the finished model, then they clearly shouldn’t be building models.
I suggest they buy themselves a Mr Potato Head, and go sit in a corner and play with it.
If they put pieces upside down or in the wrong holes, at least they won’t be dooming millions of people to poverty.
JJ
Don’t blame Terry. It is the chicken-feces pseudoscientists of the IPCC that are constantly equivocating on those terms. Terry is just pointing that out.
>>>>>>>>>>>>>>>>>>>>>>>>>>
The average person in the street doesn’t understand the point being made, nor do they care.
Me: The IPCC made multiple predictions about climate change that have not come true and are demonstrably false.
Climate Scientists: Well those were projections, not predictions…..
Me: Fine. The IPCC made multiple projections about climate change that have not come true and are demonstrably false.
Get it? It is a nuance that has no value to discuss further.
Gail, I really appreciate your comment and it helped more than you think.
I have been looking into this for quite a while and…
http://www.eco-business.com/news/upc-renewables-eyes-two-more-wind-projects/
Robert Austin says:
August 24, 2012 at 9:24 am
For those arguing semantically whether the models are intended to provide predictions or projections,
It’s not a semantic distinction.
You can reduce science (the scientific process) to 3 elements, all of which are necessary for an activity to be called science. The first of these elements is deriving (quantified) predictions.
The reason the modellers steadfastly maintain the model outputs aren’t predictions is because predictions can be compared to observations (measurements) and the source of the predictions (the models) falsified.
Which is why I say the climate models aren’t science.
BTW, the way to make the climate models scientific is to progressively break them apart identifying the source of errors (prediction-observation discrepancies).
The reason this isn’t done, is the modellers know as well as I do that it will be a very long time before the components produce sufficiently accurate predictions for the models to be reassembled.
Terry Oldberg says: August 24, 2012 at 8:22 am
Allan MacRae
It sounds as though you are under the misimpression that IPCC climate models make “predictions.” As the climatologist Kevin Trenberth emphatically stated in the blog of the journal Nature in 2007, they make only “projections.” Though predictions are falsifiable, projections are not. The models are not wrong but rather are not scientific.
_________
Actually, I think I understand your points quite well Terry, although I’m not sure I would rely on Trenberth for enlightenment on this subject.
The warming alarmists represented their models as providing credible predictions when they wanted to support their Cause to “fight global warming” and more recently have had to recognize that their models have no predictive skill.
Hence the semantic games of “prediction” versus “projection”, but it is all just CAGW-BS.
In conclusion:
The climate models produce nonsense, and yet our society continues to squander trillions of dollars of scarce global resources, based on this nonsense.
All,
People don’t read reports, they just look at the pictures. Here’s the picture from AR4 WG1
http://www.ipcc.ch/publications_and_data/ar4/wg1/en/figure-10-26.html
I really don’t care if you call the different scenarios predictions, projections, forecasts, estimates, WAG’s, tea leaf readings or tarot cards, reality is that we’re seeing more CO2 emissions and less temperature change and all these pictures are WRONG.
By debating predictions versus projection we’re just playing the perception management game by their rules. Play it by our rules. Here folks is what they said would happen, it isn’t happening, they’ve over estimated CO2’s effects and their models are useless as shown by comparing the results to what they said was going to happen.
davidmhoffer:
In seeking to disambiguate the terms in which a debate is conducted, one’s purpose is not to manage perceptions but rather to ensure that a false or unproved conclusion will not appear to be true. Suppose a policy of disambiguation is applied to the terms “prediction” and “projection”, with a “prediction” defined as an extrapolation from an observed condition to the unobserved but observable outcome of a statistical event, and a “projection” defined as it is in the field of ensemble forecasting. Then, given the absence in IPCC climatology of any identification of the statistical population underlying the IPCC climate models, a conclusion of high importance emerges: the projections made by these models are not falsifiable. Predictions are falsifiable, but they are not made by these models. Do you not agree, then, that a policy of disambiguation should be followed by debaters in this particular thread?
Pat Frank says
Climate modelers invariably publish projections of temperature futures without any sort of physical uncertainty bars at all. Here is a very standard example, showing several GCM projections of future arctic surface air temperatures. There’s not an uncertainty bar among them.
———
That is a completely clueless thing to claim.
The models are calculations not physical measurements. You are supposed to infer their uncertainties differently, by:
1. studying the spread of the different simulations
2. comparing the simulations with actual measurements
The next calculational step produces its own errors, which add in and expand the high and low uncertainty bounds around the next temperature.
———-
Utter nonsense. Whether this happens depends on the character of the numerical simulations involved.
Terry Oldberg,
Do you not agree, then, that a policy of disambiguation should be followed by debaters in this particular thread?
>>>>>>>>>>>>>>>>>
Do you understand that the vast majority of the population understands completely what I said and hasn’t got a clue what you said?
davidmhoffer:
As I have used the term “statistical population” it is a set of events and not, as you seem to have assumed, a set of humans. Each such event can be described by a pair of states. One such state is a condition on a model’s independent variables and is conventionally called a “condition.” The other state is a condition on the same model’s dependent variables and is conventionally called an “outcome.” A “prediction” is an extrapolation from the observed condition of an event to the unobserved but observable outcome of the same event.
Suppose, for example, that the condition is “cloudy” and the outcome is “rain in the next 24 hours.” Given that “cloudy” was observed, an example of a prediction is “rain in the next 24 hours.”
This discussion needs clarification.
How about this:
When a global warming alarmist makes a “prediction” that proves false, he calls it a “projection”.
I note that global warming alarmists make an awful lot of projections these days, and no predictions.
This observation is entirely consistent with my “Law of Warmist BS”
http://wattsupwiththat.com/2012/02/28/the-gleick-tragedy/#more-57881
“You can save yourselves a lot of time, and generally be correct, by simply assuming that EVERY SCARY PREDICTION the global warming alarmists express is FALSE.”
This Law has stood the test of time, and has yet to be falsified.
NEW SCIENTIFIC REVELATION – “Axiom 1 of the Law of Warmist BS”
“Global warming alarmists don’t make predictions anymore – they just make projections.”
🙂
I knew a guy. A real card. He used to go to the track with a kit. It was a box with a handle on the top. He’d go sit down at a table in the clubhouse and with great ceremony put on eye shades, open up his racing form, and then reach into his kit box and pull out a spinner card.
Look!
8 predictions, Zero projections!
http://wattsupwiththat.com/2012/07/25/lindzen-at-sandia-national-labs-climate-models-are-flawed/#comment-1046529
Sallie Baliunas, Tim Patterson and I published an article in the PEGG in 2002:
Here is what we predicted a decade ago:
Our eight-point Summary* includes a number of predictions that have all materialized in those countries in Western Europe that have adopted the full measure of global warming mania. My country, Canada, was foolish enough to sign the Kyoto Protocol, but then wise enough to ignore it.
Summary*
Full article at
http://www.apegga.org/Members/Publications/peggs/WEB11_02/kyoto_pt.htm
Kyoto has many fatal flaws, any one of which should cause this treaty to be scrapped.
1. Climate science does not support the theory of catastrophic human-made global warming – the alleged warming crisis does not exist.
2. Kyoto focuses primarily on reducing CO2, a relatively harmless gas, and does nothing to control real air pollution like NOx, SO2, and particulates, or serious pollutants in water and soil.
3. Kyoto wastes enormous resources that are urgently needed to solve real environmental and social problems that exist today. For example, the money spent on Kyoto in one year would provide clean drinking water and sanitation for all the people of the developing world in perpetuity.
4. Kyoto will destroy hundreds of thousands of jobs and damage the Canadian economy – the U.S., Canada’s biggest trading partner, will not ratify Kyoto, and developing countries are exempt.
5. Kyoto will actually hurt the global environment – it will cause energy-intensive industries to move to exempted developing countries that do not control even the worst forms of pollution.
6. Kyoto’s CO2 credit trading scheme punishes the most energy efficient countries and rewards the most wasteful. Due to the strange rules of Kyoto, Canada will pay the former Soviet Union billions of dollars per year for CO2 credits.
7. Kyoto will be ineffective – even assuming the overstated pro-Kyoto science is correct, Kyoto will reduce projected warming insignificantly, and it would take as many as 40 such treaties to stop alleged global warming.
8. The ultimate agenda of pro-Kyoto advocates is to eliminate fossil fuels, but this would result in a catastrophic shortfall in global energy supply – the wasteful, inefficient energy solutions proposed by Kyoto advocates simply cannot replace fossil fuels.
[end of excerpt]
______
P.S.:
In a separate article in the Calgary Herald, also published in 2002, I (we) predicted imminent global cooling, starting by 2020 to 2030. This prediction is still looking good, since there has been no net global warming for about a decade, and solar activity has crashed. If this cooling proves to be severe, humanity will be woefully unprepared and starvation could result. This possibility (probability) concerns me.
P.P.S.
If I’m wrong about my global cooling prediction, I’ll just call it a projection, and get a job with the IPCC. 🙂
Allan MacRae says:
“I note that global warming alarmists make an awful lot of projections these days, and no predictions…
“Axiom 1 of the Law of Warmist BS…
“Global warming alarmists don’t make predictions anymore – they just make projections.”
•••
Exactly right. Every ‘prediction’ has been debunked and falsified. When their feet are held to the fire of the scientific method, their conjecture fails. It is based on pseudo-science; pure anti-science.
If I am wrong, all it would take to deconstruct the skeptical position would be to provide scientific evidence showing a clear connection between human CO2 emissions and global warming.
But there is no such evidence. None. They are winging it.
Thanks Smokey – valid comments – but I thought you were in the penalty box.
I guess you got off with a minor for roughing.
Mann gets a double-major and game misconduct for high-sticking.
Allan,
Only in the penalty box for roughing Perlwitz. But that was on a different thread. BTW, here is another ‘wheel’ to go with the article. ☺
Terry Oldberg;
As I have used the term “statistical population” it is a set of events and not, as you seem to have assumed, a set of humans.
>>>>>>>>>>>>>>>>>>>>
I made no such assumption and you have completely missed the point of my comment.
davidmhoffer:
If you were to more fully explain what the point of your comment was, perhaps I could address it.
Robert Brown, August 24, 2012 at 6:58 am said:
“…..the climate doesn’t seem enormously unstable toward hothouse Earth or snowball Earth even on a geological time scale,…….”
The climate has been varying for billions of years without triggering any “tipping points”, extreme or irreversible changes such as those described in Jimmy Hansen’s “Fairy Tales”; those tales are less related to reality than the ones written by Hans Christian Andersen.
The planet experienced “Ice Ages” when CO2 concentrations were >7,000 ppm. Thus one should take any model that “predicts” temperature rises of 1.4 to 5.8 Kelvin by 2100 based on a rise in CO2 concentration to 650-970 parts per million (AR4) with a pinch of salt. Far from recognising the absurdity of the AR4 “Predictions”, AR5 seems to be heading further into cloud cuckoo land. Thankfully physicists like “rgb” are helping us separate “Science” from “Politics”.