Redefining the Scientific Method–Because Climate Change Science Is Special

by Indur M. Goklany

Phil Jones famously said:

"Kevin and I will keep them out somehow – even if we have to redefine what the peer-review literature is!" – Phil Jones, 8/7/2004

Today, we have an example:

“[T]his is also the way science works: someone makes a scientific claim and others test it. If it holds up to scrutiny, it become part of the scientific literature and knowledge, safe until someone can put forward a more compelling theory that satisfies all of the observations, agrees with physical theory, and fits the models.” – Peter Gleick at Forbes; emphasis added. 9/2/2011

Last time I checked, it was necessary and sufficient to fit the observations, but "fits the models"!?!?

So let’s ponder a few questions.

  1. Do any AOGCMs satisfy all the observations? Are all, or even any, able to reproduce El Niños and La Niñas, or PDOs and AMOs? How about the spatial and temporal distribution of precipitation for any given year? In fact, according to both the IPCC and the US Climate Change Science Program, they don't. Consider, for example, the following excerpts:

 

“Nevertheless, models still show significant errors. Although these are generally greater at smaller scales, important large scale problems also remain. For example, deficiencies remain in the simulation of tropical precipitation, the El Niño-Southern Oscillation and the Madden-Julian Oscillation (an observed variation in tropical winds and rainfall with a time scale of 30 to 90 days).” (IPCC, AR4WG1: 601; emphasis added).

 

"Climate model simulation of precipitation has improved over time but is still problematic. Correlation between models and observations is 50 to 60% for seasonal means on scales of a few hundred kilometers." (CCSP 2008: 3).

 

“In summary, modern AOGCMs generally simulate continental and larger-scale mean surface temperature and precipitation with considerable accuracy, but the models often are not reliable for smaller regions, particularly for precipitation.” (CCSP 2008: 52).

This, of course, raises the question: Are AOGCMs, to quote Gleick, "part of the scientific literature and knowledge"? Should they be? (For what the CCSP's model-observation correlation figure means in practice, see the sketch following these questions.)

 

  2. What if one model's results don't fit the results of another? And they don't: if they did, why use more than one model, and why are over 20 models used in the AR4? Which models should be retained and which thrown out? On what basis?

 

  3. What if a model fits other models but not observations (see Item 1)? Should we retain those models?

I offer these rhetorical questions to start a discussion, but since I’m on the move these holidays, I’ll be unable to participate actively.
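
As a footnote to the CCSP excerpt above: the "50 to 60%" figure is a pattern correlation between modeled and observed fields. For readers who want the metric made concrete, here is a minimal sketch in Python with synthetic data (the grid, the fields, and the numbers are all invented; no real model output or observations are used):

```python
import numpy as np

def pattern_correlation(model, obs):
    """Centered spatial correlation between two gridded fields
    (area weighting by grid-cell latitude is omitted for brevity)."""
    m = model - model.mean()
    o = obs - obs.mean()
    return (m * o).sum() / np.sqrt((m ** 2).sum() * (o ** 2).sum())

# Synthetic "observed" seasonal-mean precipitation on a 10 x 20 grid,
# and a "model" field that shares part of the pattern plus noise.
rng = np.random.default_rng(0)
obs = rng.gamma(shape=2.0, scale=1.5, size=(10, 20))                 # mm/day
model = 0.6 * obs + 0.4 * rng.gamma(shape=2.0, scale=1.5, size=(10, 20))

print(f"pattern correlation: {pattern_correlation(model, obs):.2f}")
```

Note that a correlation of 0.5 to 0.6 means the model explains only about 25 to 36 percent of the observed spatial variance.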

Reference:

CCSP (2008). Climate Models: An Assessment of Strengths and Limitations. A Report by the U.S. Climate Change Science Program and the Subcommittee on Global Change Research [Bader D.C., C. Covey, W.J. Gutowski Jr., I.M. Held, K.E. Kunkel, R.L. Miller, R.T. Tokmakian and M.H. Zhang (Authors)]. Department of Energy, Office of Biological and Environmental Research, Washington, D.C., USA, 124 pp.



168 Comments
Jeff Alberts
September 3, 2011 9:37 am

Has any GCM ever consistently predicted temperature, precipitation, cloud cover, humidity, etc., even one year out?

Reed Coray
September 3, 2011 9:39 am

Peter Wilson says:
September 3, 2011 at 1:47 am

I think you have summarized the issue very well, and I don't think you're missing anything. But before agreeing with you completely, I need to determine what my model says.

Reed Coray
September 3, 2011 9:46 am

If NASA has any input to the models, I sure hope they don’t mix English units with metric units.

John Campbell
September 3, 2011 9:53 am

When I was deep into economic modeling in the late '60s and early '70s, our computer models were certainly useful in helping to understand the dynamics of what were then called "complex non-deterministic systems". But if anyone back then had asked whether the theory fitted the model output, they'd have been quickly referred to the college psychiatric department. Models were used to better understand the theory, with a view to polishing up the theory. That is, the models were nothing more than an extension of the theory, aimed at enabling us to run simulations of the theory so that the model output could be checked against reality. I don't see any changes in the past 40 years that would make switching models from the theory side to the reality side anything other than insanity, plain and simple.

thelastdemocrat
September 3, 2011 10:10 am

Classic science. Adam is totally on track. Look up "The map is not the territory."
Models are over-simplifications that help us understand things and phenomena, and help us predict how some natural thing or process will act.
When a very strong, accurate model is developed, it gives the impression that we have discovered a "law," and that we now know how the physical universe operates. Sure, we can be very close. But never exact.
Atomic theory is good enough to predict an awful lot of stuff to a pretty good degree. The drug companies take a recognized molecule that has some effect, then strive to develop molecules with similar structure, then trial these to see if they can develop an even better drug. That is impressive. But it is still trial and error, because the models of the molecule, down to the atomic level, and the models of the body, down to the receptor level, are not totally accurate. Close enough to develop effective drugs, with trial and error, but not "exact."
Some phenomena really can only be conceptualized by complex models. Climate is an example. When you get a model that fits, you are tempted to be impressed by your model, and to believe you have it.
Even if Mann 98 were properly done and properly predictive, that model, like all other climate models, would eventually be shown to be limited in some way.
There is a fundamental challenge in a climate model: either it is developed to include limiting factors that keep the output within some physical bound, preventing it from running away far beyond what might happen naturally, or some factor of the model, given the right input, will eventually run away to extreme values.
An example: if a climate model can predict runaway temperatures, with no calculation to bound that parameter, then the model could be run far enough into the future that the predicted temperature would be physically impossible, a temperature at which the Earth would be hotter than the sun, for instance.
If the model is bounded, i.e., "hey, we had better put in some feedback loop, some rate-limiting loop, so that a set of parameters does not ultimately run to the bottom of the Kelvin scale or up to the temperature of the sun," then the boundary of the model can be one of three things: spot-on, too high, or too low. Spot-on is so incredibly unlikely that even an awesome model, given current knowledge, could only run a limited distance into the future and still yield valuable output, i.e., a fair estimate of global temperatures 100 years from now, given a couple of scenarios (e.g., CO2 steady, rising, or falling).
So take the most awesome temperature model and ask the originator: does it have a rate limiter or not? Could it mathematically yield temperatures higher than the sun's if run far enough into the future?
That obvious factor shows that a model is just a model. The physical universe does not zig and then zag because our models changed or got updated. The tail does not wag the dog.
The model is always at risk of being tossed aside. It is always just a model. Scientific theories or observations are not supported because they fit some model. Models must always serve the data. Of course, this is a sticky wicket, since data depend upon measurement and resolution, etc. This is why it takes a lot of training to become a scientist, and why it takes a lot of work to develop a model and test it against real-world measurements (which are also inherently limited, but that is for another post).
The bottom-line issue is: can we develop predictive models well enough, figure out all of the variables well enough, and measure them well enough, to predict a future catastrophe that we could then avoid?
My lowly opinion is that we can predict the paths of asteroids and space junk well enough to know pretty clearly whether a piece would hit Earth, and so well enough to decide whether to send a nuke into space to blow up an asteroid so that it veers away or breaks into pieces small enough to burn up on hitting the atmosphere.
Can we send a rocket to the moon? Obviously. But course corrections have been needed on all moon shots; no one ever just launched, the way a catapult throws a stone, and sat back to watch the rocket approach the moon. Ridiculous.
Sure, I want to know whether exhaust in the atmosphere will lead to a disastrous greenhouse effect. A model is one way to test this hypothesis. There are others. I myself believe that the confidence in models is overblown, and that it is circular: for the true believers, the models seem to have taken on great authority and power over Mother Nature. They have a hard time listening to us skeptics and entertaining other lines of investigation (soot, other temperature proxies, gamma rays, etc.).
There is nothing sacred about a model. Models do not drive nature. A model is always suspect. The science is never "settled." That is 100% science for ya.
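
A minimal numerical sketch of the commenter's runaway-versus-bounded point, with invented coefficients (a toy difference equation, not any actual GCM): with pure positive feedback the anomaly grows without bound, while adding a damping term caps it.

```python
def run(steps, forcing=0.1, feedback_gain=0.05, damping=0.0):
    """Toy anomaly model: T[n+1] = T[n] + forcing + (gain - damping) * T[n]."""
    T = 0.0
    for _ in range(steps):
        T = T + forcing + (feedback_gain - damping) * T
    return T

print(run(500))                # unbounded: runs away to an absurd anomaly
print(run(500, damping=0.15))  # bounded: settles near forcing / (damping - gain) = 1.0
```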

September 3, 2011 10:18 am

LazyTeenager says:
September 3, 2011 at 6:32 am
In fact as a way of capturing understanding of climate they are much better than the enumerate hand waving that is popular here. Ordinary language is poor at describing the complexities of climate. In short if you can’t calculate something chances are you don’t properly understand it.
—————
O Teenager of apparently eternal youth (and all the self-assured certainty of others' stupidity that this entails): In fisheries science in the late 1970s a new method for calculating the size of commercial fish populations was developed, a kind of hind-casting called Virtual Population Analysis. By fitting models to past catches and estimates of the size of fish populations, fisheries biologists then felt confident to predict future populations and catches. Some Canadian fisheries managers (the ones in charge of the Northwest Atlantic fisheries) felt so confident about these models that they saw no need to develop robust, independent data sets to monitor the state of the fish stocks. Rather, they relied on the data provided by corporate fishing boats (which were, incidentally, using sonar to detect the remaining fish concentrations). They were so confident of these models that they cast as 'deniers' those independent fishermen who were actually out in boats (unlike most of the scientists) and who saw abundant evidence that the groundfish stocks were in serious decline. Guess who had egg on their face when the fish stocks collapsed in 1992?
Unfortunately, an entire society had to lose its way of life because the models were wrong. The people of Newfoundland deserved better.
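
For the curious, the backward "hindcast" step of the Virtual Population Analysis mentioned above fits in a few lines (this is Pope's cohort approximation; the catches, mortality, and terminal abundance below are invented, and the whole reconstruction hinges on those assumptions):

```python
import math

def cohort_hindcast(catch_at_age, terminal_abundance, M=0.2):
    """Pope's approximation: N[a] = N[a+1] * exp(M) + C[a] * exp(M / 2).
    Reconstructs a cohort's numbers-at-age backward from its catches."""
    N = [terminal_abundance]
    for C in reversed(catch_at_age):
        N.append(N[-1] * math.exp(M) + C * math.exp(M / 2))
    return list(reversed(N))  # N[0] .. N[len(catch_at_age)]

# Invented catch-at-age (thousands of fish) and assumed survivors at the end.
print(cohort_hindcast([120.0, 90.0, 60.0, 30.0], terminal_abundance=50.0))
```

Change the assumed natural mortality M or the terminal abundance and the "virtual population" changes with them, which is exactly where the confidence outran the data.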

otter17
September 3, 2011 10:22 am

Spencer uses a simple model, albeit poorly. People in fields from biology to engineering use models all the time to create a means to test "what if" scenarios and do design work. One can mathematically describe elements of a system and how they interact, and set the model to work on a complex problem that can't be hand-calculated. Sure, the IPCC admits that on small scales, the models won't be able to predict what day it is going to rain at your house, but there is strong support that models can do well at the continental and global scale. If they can hindcast and predict effects in advance, they seem to be on the right track for predicting the magnitude of this century's temperature rise based on varying emissions scenarios.
http://www.skepticalscience.com/climate-models.htm
Here lies the rub, though. The models predict that rising CO2 emissions will also contribute to raise the temperature, a human impact that scares some people due to the threat of the regulatory boogey man (let’s be honest, folks that is where the denial comes from). The paleoclimate evidence provides an even more compelling estimate of Earth’s eventual temperature rise due to the geologically instantaneous jump in greenhouse gas levels.
So yeah, that’s the rub. Any takers on creating a model that can perform better and show that CO2 is not one of the primary drivers of climate? Get peer review approval from the Journal of Geophysical Research? If not, step aside and let the people brave enough to address the issue using the best evidence we have figure out what to do.

Jeremy
September 3, 2011 10:43 am

Heads up to all, from the Guardian:

Next week, Prof Andrew Dessler of the department of atmospheric sciences at Texas A&M University, is due to publish a paper in the journal Geophysical Research Letters offering a detailed peer-reviewed rebuttal of Spencer’s paper.

Should be an interesting read.

Louis
September 3, 2011 10:55 am

“…safe until someone can put forward a more compelling theory that satisfies all of the observations”
Even that part is not true. There is no requirement in science to put forward a more compelling theory before you’re allowed to debunk an existing theory. Providing solid evidence that a theory is incorrect is enough to remove the theory from its “safe” status. Debunking a faulty theory is a valid service to science whether you can come up with a valid replacement theory or not.

Jim
September 3, 2011 10:58 am

All models are wrong, a few are useful.

September 3, 2011 10:58 am

I would like to offer a partial defense of good models vs bad data.
In the late '80s and early '90s I worked for about a decade for a consulting company that provided highly sophisticated, non-linear, fundamental-science and kinetic-chemistry-based computerized models to our world-wide clientele. The experience obtained there is, I believe, instructive because it is analogous to the present-day climate modeling issue.
In the early days of our simulation efforts, we had a serious problem getting the model to match the client's data. When there was a mismatch, we had to determine whether the data was wrong or the model was wrong. That's a tough question; it was finally wrestled down, and it was actually an iterative process. We would question the data, the measurements, instrument calibration, laboratory analysis, and fundamental errors in measurements (is the flow accurate to plus-or-minus 3 percent, or 10 percent, or something better?). We also had issues with fundamentals such as conservation of mass and conservation of energy. The data for our purposes had several input streams and even more output streams. Everyone agreed that the total mass input had to match the total mass output, because there was no inventory change in the system. The same was true for the energy balance: several heat inputs had to be summed, then had to match the heat flows out plus heats of reaction. Once the data was finally hammered on and straightened out, we turned our attention to the model, to tune it to match the data. Sometimes we found errors in the model and corrected them. Finally, we had what we considered a robust, accurate, and useful simulation. I'd like to note that we had one model, not a dozen or more.
Then another client came along with a slightly different situation, and our model had to be modified to match not only the first client's data but the second one's also. That, too, was finally accomplished, again by ferreting out errors in the new client's data and tuning the model to match once the data was properly vetted.
After several years of this, we had confidence that our model was indeed accurate and robust. When new clients came along, they were understandably proud of their data and would at first argue with us that the discrepancies between model runs and their actual data were due to a fault in the model. We then explained how the model had been improved over the years and suggested they review their data. Invariably, the client found problems with their data and would collect a new set after instrument calibration and laboratory fine-tuning.
The point of this rather long narrative is that this method of model development and data acquisition is not new, is not unique, and has occurred in many applications for decades.
Where the problem in climate modeling lies is in the premature declaration that the models are accurate, are valid, have been vetted, and that therefore any new data that does not match must be discarded. Quite simply, the state of this art is nowhere close to that point. As I mentioned, we had one truly robust and battle-tested model. It was accepted as state-of-the-art by the majority of the world's approximately 2,000 industrial sites with that process. Climate science has multiple models, some say 13 and I've heard as many as 20. At their meetings, sessions are referred to as "spaghetti charts" due to all the lines on a slide showing each model's results. It is in no way correct to say that the models are accurate when there are multiple, inconsistent models.
There may indeed be errors in the new climate data sets that are collected, and those should of course be carefully evaluated and vetted so that the data is as accurate as possible. Only then can the models be improved.
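
The data-vetting step described above can be sketched in a few lines: before touching the model, check that the measured streams close the mass balance within instrument tolerance (the stream values and the 3 percent tolerance are invented for illustration):

```python
def mass_balance_closes(inputs_kg_h, outputs_kg_h, rel_tol=0.03):
    """With no inventory change, total mass in must equal total mass out
    to within the combined instrument tolerance; otherwise vet the data."""
    total_in, total_out = sum(inputs_kg_h), sum(outputs_kg_h)
    imbalance = abs(total_in - total_out) / total_in
    return imbalance <= rel_tol, imbalance

ok, err = mass_balance_closes([1200.0, 340.0], [980.0, 410.0, 60.0])
print(ok, f"{err:.1%}")  # ~5.8% imbalance: question the data before the model
```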

Mark T
September 3, 2011 11:04 am

Here lies the rub, though. The models predict that rising CO2 emissions will also contribute to raise the temperature, a human impact that scares some people due to the threat of the regulatory boogey man (let’s be honest, folks that is where the denial comes from).

Nonsense. As with any politicized movement, there will be some basing their positions purely on outcomes (desired or otherwise), but the "denial" has nothing to do with the outcome in general. In fact, very few actually deny anything other than the claims of accuracy and magnitude. Perhaps you get all of your talking points from RC? For many, the threat of the regulatory boogey man is a concern whether there is warming or not, and whether it is human-caused or not. They are unrelated. So, let's be honest: you really don't understand anything about those you so vehemently oppose.

The paleoclimate evidence provides an even more compelling estimate of Earth’s eventual temperature rise due to the geologically instantaneous jump in greenhouse gas levels.

Uh, you’re letting your ignorance show here. The paleo record – the last 800k years – indicates the opposite sign, i.e., CO2 changes as a result of temperature changes. Over longer terms, the record indicates that there is no connection between CO2 and temperature. Get your facts straight.

Any takers on creating a model that can perform better and show that CO2 is not one of the primary drivers of climate?

Your assumption is a logical fallacy. You are presupposing that a) current models actually perform well (in any measurable context, they do not at all) and b) it is possible for a model (any model) to perform well. Sorry, but epic fail.

If not, step aside and let the people brave enough to address the issue using the best evidence we have figure out what to do.

Brave enough? You have got to be joking…
Mark

SteveW
September 3, 2011 11:04 am

I've not read all of the comments yet, but surely a more pertinent point regarding the scientific method versus the statement
"it become part of the scientific literature and knowledge, safe until someone can put forward a more compelling theory that satisfies all of the observations, agrees with physical theory, and fits the models."
would be that we do not need a new theory to dismiss the existing state of the art; we merely need to find some observation or other which disagrees with the current theory, at which point the theory becomes defunct.

Greg, Spokane WA
September 3, 2011 11:13 am

"agrees with physical theory, and fits the models." – Peter Gleick at Forbes; emphasis added. 9/2/2011
Seems to me that no more science need be done. A set of observations is taken, a theory is created, a model based on that theory is created, and we're done. Since the models and the theory then agree, the subject is closed and all climate funding can be dropped. It's done. Scientists losing their jobs as a result can move into some other government-funded scientific field.
There's no more need to do science, since either the new observations/data will agree with the models and are therefore pointless (since they agree), or the observations/data will disagree and are therefore wrong.
The only remaining discussion is why the models don't agree with each other, which is odd, since they're perfect and must, therefore, be correct. The discussion can be adequately carried out in the blogs, including why Hansen's Scenario C seems to be the closest of the models to what's observed in the real world.
/end_snark

September 3, 2011 11:23 am

It’s similar to the BBC redefining Balance, to mean what we used to call bias.
http://www.bbc.co.uk/bbctrust/our_work/other/science_impartiality.shtml
As the academic elite hijack and redefine the language, to fit their sociological models and reinforce their cosy view of the world.

September 3, 2011 11:24 am

“and fits the models”
Hmmm, that’s model bias at its best. So, if the new work satisfies everything else and does not fit the models, it’s bogus, right? Just checking.

jorgekafkazar
September 3, 2011 11:33 am

First, for their post/comments/links, kudos to: Indur M. Goklany, Truthseeker, Steve Keohane, Josualdo Silva, and Jim Cripwell. Too many more to list.
Next, special thanks to Lazy Teenager for revealing his ignorance of the meaning of enumerate, giving a certain specialness to his comment, which provided many easy shots and much amusement here.
Like Peter Gleick’s statement, a model is a good way to bring your ignorance to the forefront. Models, no matter how complex and expensive, are only tools. In themselves, they are no more Science than is a ruler or a magnifying glass. As one who has attempted to encapsulate behavior of complex systems within mathematical constructs, I can state that AGW’s faith in models is dangerous, hubristic pseudo-science.

September 3, 2011 12:09 pm

It is time to DROP THE CHARADE and admit that TRUE progress in "science and technology" DOES NOT COME FROM "PEER REVIEWED RESULTS".
Now, this is a bold statement. Let me give some researchable examples, in contrast to the so-called climate "scientists" and their modus operandi, which will illustrate my point quite completely.
I am a fan of Dr. Kwabena Boahen (http://www.stanford.edu/group/brainsinsilicon/). I would HIGHLY encourage all of us "skeptics" to look over his work and his group. Several points come to mind when evaluating his work and his graduate students' work.
Number 1: THEY HAVE NO PEERS!!! There are, right now, no major groups taking their approach. SO HOW DO THEY GET A PEER REVIEW?
Number 2: ALL THEIR WORK IS COMPLETELY PUBLISHED AND AVAILABLE. If you want to duplicate their circuits, you download their programs for designing them. If you want to understand the technology, you look at the 10+ years of postings for the "intro" course Dr. Boahen has put up. If you have a question, you email them. (They are pretty good at answering questions, even from dilettantes like myself.)
Number 3: They were running entirely on Stanford department funding until 4 years ago, when they got a $5 million grant from the NIH. Dr. Boahen thinks they may have a "brain in a shoe box" in about 3 years. They have already duplicated cochlea and retina functionality.
SO the key question is: WHY DO THEY NOT WITHHOLD information, and WHY don't they have "critics" (or need for "peer review")? The answer is: they are doing REAL work, with REAL results, which are all testable. The second answer is that Dr. Boahen comes from Ghana and he really believes in the "common good".
My only complaint is that if he "played the game", maybe he'd have 50 million or 500 million. But then, like the money squandered by the Human Genome Project (10 billion in 10 years) compared to the mere 300,000,000 in 3 years it took the privately funded Celera Corp. to crack the genome, it might be a detriment to get TOO MUCH MONEY.
Again, another example of "real science" and proper exposition of work, with few peers to review it: Dr. Stefan Hell (http://www.mpibpc.mpg.de/groups/hell/). Dr. Hell NOW has some peers doing similar work; in 1998 he was almost ALONE in doing work to break the "Abbe limit" of microscopic resolution. His papers are amazing, in that the SPECIMEN preparation for samples shown using their various methods is usually COMPLETELY OUTLINED in an appendix at the end of the paper (particularly in the last 5 to 7 years). This is because people doing parallel work CANNOT SET UP TO DUPLICATE it WELL unless they know how to prepare the specimens. Dr. Hell WANTS his work to be duplicated and to be clear. NOTHING IS WITHHELD!!!
Real SCIENCE, REAL RESULTS, totally transparent. It's time we pointed out that the POINTY HEADS doing the "CLIMATE SCIENCE" are for the most part "speculators" and "snake oil salesmen". They deserve NO GLORY, SCANT ATTENTION and general denigration for their behavior, arrogance and bluster.
Maybe, some day, we'll get a group that starts with the premise: DATA FIRST, ANALYSIS SECOND, PREDICTION THIRD, OBSERVED DATA AGAIN, COMPARISON WITH PREDICTION, AND "YES OR NO" ON THE DATA (OBSERVATIONS) FITTING THE PREDICTION, with the humble attitude not to bury an admission of failed theory and prediction in an obscure journal, but to POST IT ON THE INTERNET PUBLICLY and indicate what lesson was learned from the failure.
That's a LOT of "humility" to have. Judging by what I've seen of the HOLIER-THAN-THOU, STIPEND-POSITIONED, ACADEMIC SNOBS in the "climate science" realm, I doubt that will happen any time soon.
Meantime, when your Brains in Silicon robot is tending to you in old age, or your custom drug is being made after the elucidation of your bio-problem using advanced molecular imaging, all developed by REAL scientists with NO PEERS, just give a "kiss your robot" thanks to the REAL SCIENTISTS making progress, who have no peers.

Ken Harvey
September 3, 2011 12:36 pm

Did any of you in your youth ever come across one of those sure-fire promotions for a horse race betting system? They may still be around for all I know. The main selling point was a very long list of past winners, with prolific detail of names, courses, dates, starting prices, etc., which (after payment) would be demonstrated to have been predicted by this wonder system, together with a calculation of how much you would have won if only you had been privy to the system. Too often the gullible parted with their money, while the more skeptical gave the matter some thought and came to appreciate that, given a list of determined events, one can come up with umpteen different methods (models) that will absolutely predict those outcomes in hind-cast. A similar scam aimed at a different market did the same with stock prices. Almost any fool can come up with a system which will hind-cast the outcome of past events, but foreseeing the future is a different proposition. Ask Mr. Greenspan or Mr. Bernanke.
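
The commenter's point has a one-screen demonstration: generate enough random "betting systems" and the best of them will hindcast past races impressively while remaining pure chance on new ones (all numbers invented):

```python
import numpy as np

rng = np.random.default_rng(42)
results = rng.integers(0, 2, 100)                  # coin-flip "race results"
systems = rng.integers(0, 2, size=(10_000, 100))   # 10,000 random betting systems

hindcast = (systems[:, :50] == results[:50]).mean(axis=1)  # fit to the past
best = hindcast.argmax()                                   # pick the "wonder system"
forecast = (systems[best, 50:] == results[50:]).mean()     # same system, new races

print(f"best system hindcasts {hindcast[best]:.0%} of past races")
print(f"the same system on the next 50 races: {forecast:.0%}")  # back near 50%
```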

DirkH
September 3, 2011 12:38 pm

otter17 says:
September 3, 2011 at 10:22 am
"Sure, the IPCC admits that on small scales, the models won't be able to predict what day it is going to rain at your house, but there is strong support that models can do well at the continental and global scale."
Models don’t do well at all; please refute me by providing a link to a paper that SHOWS that a climate model has been proven to have predictive skill; a link to “skepticalscience” won’t do. Models fail to get ENSO right (they’re at a complete loss there), they fail to get cloudiness by latitude right (that’s on a continental or global scale), they fail to get large convective fronts right (they’re too big to be described by statistical properties, and the models are incapable of simulating the physical processes).
Or does “skepticalscience” mean with “can do well” that they get it right once in a blue moon? Yeah, that’s surely worth billions. ;-P

Roy UK
September 3, 2011 12:56 pm

LazyTeenager says:
September 3, 2011 at 6:32 am
“In fact as a way of capturing understanding of climate they are much better than the enumerate hand waving that is popular here.”
Oscar Wilde once wrote, “In America, the young are always ready to give to those who are older than themselves the full benefits of their inexperience.”
Never a truer word.
So Lazy Teenager, what you have done is direct your initial comment at the readership of this Blog, not at the issue under discussion.
So, to rephrase your comment and to ask a question: "In fact as a way of capturing understanding of climate, models are much better than using observational data." Is this what you really think?
Please give us the benefit of all of your experience.

S Basinger
September 3, 2011 12:57 pm

I posted this elsewhere prior to reading this priceless little tidbit from Gleick:
“I find it hilarious that it’s even suggested that modelers should be consulted about real world measurements that their models will be judged upon. It’d be like proponents of String Theory being given an editorial veto over what CERN can publish so they don’t hurt their feelings.
It’s up to the modellers to reconcile where their theory went sideways when compared with reality…”

Jeff Mitchell
September 3, 2011 1:10 pm

Well, I think models are great. If they don't work, then you clearly don't understand something, and you can fix it. The problem stems from how things get fixed. If they fix it by simply tweaking it in some way, that is not going to work except by accident. Any tweak needs to be based on a principle that wasn't previously understood and that accounts for increased understanding of the whole system. You can't simply "adjust" the data. The model is your friend if you aren't trying to cheat. It tells you when your ideas don't work so you can go back and figure out what it is you don't yet understand.
I think debates would be good so that people can see why skeptics are skeptics. It isn’t hard to pull a few important variables out that affect climate which are not accounted for in the models, like the cosmic radiation cloud seeding, or Willis’ thunderstorm thermostat theory. When you have major variables and forcings that are not accounted for, I think it is a huge mistake to base policy on incomplete theories.
I think the real problem is that climate is being used for political purposes, which is very bad for science. Government likes to control people. It looks for any reason to do so. If an area such as climate is useful, government will use it. And you have the apparatchiks like Schmidt, Mann, Hansen who claim the mantle of science to do it. Their power depends on coming up with a particular outcome in the debate, not on truth. I believe that explains an awful lot of the behavior we see out of the Team.

NetDr
September 3, 2011 1:19 pm

I haven't read all of the above entries, but hasn't anyone pointed out that observations do NOT mirror the models' predictions? Several of the posts I have read take as a given that the GCMs correctly predicted the last 13 years of non-warming, when they did not!
The AR4 models predicted that there should have been around 0.3 °C of warming from 2000 to the present.
There hasn't been any measurable warming at all in that time period.
http://www.cgd.ucar.edu/ccr/strandwg/CCSM3_AR4_Experiments.html
Hansen's model has long since jumped the shark.
As of the present we are way below Scenario "C", which assumed stringent CO2 reductions that never took place. [In effect it is the control.]
http://sppiblog.org/news/the-hansen-model-another-very-simple-disproof-of-anthropogenic-global-warming
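
Claims like this are easy to check once a temperature series is in hand: fit a least-squares trend over 2000 onward and compare it with the projected rate. A sketch with a synthetic, flat-with-noise series standing in for real anomaly data (substitute HadCRUT or GISS values to do it properly):

```python
import numpy as np

years = np.arange(2000, 2012)
# Synthetic anomalies in deg C: flat at 0.4 with measurement-like noise.
anoms = 0.4 + np.random.default_rng(1).normal(0.0, 0.08, years.size)

slope, _ = np.polyfit(years, anoms, 1)  # least-squares trend, deg C per year
print(f"observed trend: {slope * 10:+.2f} C/decade (AR4 projected about +0.2)")
```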

tom T
September 3, 2011 1:30 pm

otter17 says:
September 3, 2011: "Here lies the rub, though. The models predict that rising CO2 emissions will also contribute to raise the temperature, a human impact that scares some people due to the threat of the regulatory boogey man (let's be honest, folks that is where the denial comes from).
"The paleoclimate evidence provides an even more compelling estimate of Earth's eventual temperature rise due to the geologically instantaneous jump in greenhouse gas levels."
—————————————————————————————————————————
Hogwash. I got interested in this in 1972, when I was in elementary school and my teachers said we were all going to die because pollution was going to cause an ice age. Then by 1980 it was global warming. I wanted to know what had happened to the ice age. I didn't give a fig about regulations. I have followed it ever since. If there ever appears any data that shows I am wrong, I will change my mind; so far I haven't seen any. Which gets me to my question:
Where are the peer-reviewed studies that support this statement: "The paleoclimate evidence provides an even more compelling estimate of Earth's eventual temperature rise due to the geologically instantaneous jump in greenhouse gas levels."?
