This post made me think of this poem, The Arrow and the Song. The arrows are the forecasts, and the song is the IPCC report – Anthony
I shot an arrow into the air,
It fell to earth, I knew not where;
For, so swiftly it flew, the sight
Could not follow it in its flight.
I breathed a song into the air,
It fell to earth, I knew not where;
For who has sight so keen and strong,
That it can follow the flight of song?
– Henry Wadsworth Longfellow
Guest Post by Ira Glickstein.
The animated graphic is based on Figure 1-4 from the recently leaked IPCC AR5 draft document. This one chart is all we need to prove, without a doubt, that IPCC analysis methodology and computer models are seriously flawed. They have way over-estimated the extent of Global Warming since the IPCC first started issuing Assessment Reports in 1990, and continuing through the fourth report issued in 2007.
When actual observations over a period of up to 22 years substantially contradict predictions based on a given climate theory, that theory must be greatly modified or completely discarded.

IPCC SHOT FOUR “ARROWS” – ALL HIT WAY TOO HIGH FOR 2012
The animation shows arrows representing the central estimates of how much the IPCC officially predicted the Earth surface temperature “anomaly” would increase from 1990 to 2012. The estimates are from the First Assessment Report (FAR-1990), the Second (SAR-1996), the Third (TAR-2001), and the Fourth (AR4-2007). Each arrow is aimed at the center of its corresponding colored “whisker” at the right edge of the base figure.
The circle at the tail of each arrow indicates the Global temperature in the year the given assessment report was issued. The first arrowhead on each arrow represents the central IPCC prediction for 2012. They all overpredict warming from 1990 to 2012 by a factor of two or more. The dashed line and second arrowhead represent the central IPCC predictions for 2015.
Actual Global Warming, from 1990 to 2012 (indicated by black bars in the base graphic), varies from year to year. However, net warming between 1990 and 2012 is in the range of 0.12 to 0.16˚C (indicated by the black arrow in the animation). The central predictions from the four reports (indicated by the colored arrows in the animation) range from 0.3˚C to 0.5˚C, roughly two to four times the actual measured net warming.
The colored bands in the base IPCC graphic indicate the 90% range of uncertainty above and below the central predictions calculated by the IPCC when they issued the assessment reports. 90% certainty means there is only one chance in ten the actual observations will fall outside the colored bands.
The IPCC has issued four reports, so, if each report's 90% band were an independent bet, there should be only one chance in 10,000 (ten times ten times ten times ten) that they got it wrong four times in a row. But they did! Please note that the colored bands, wide as they are, do not go low enough to contain the actual observations for Global Temperature reported by the IPCC for 2012.
Thus, the IPCC predictions for 2012 are high by multiples of the actual warming! Although the analysts and modelers claimed their predictions were 90% certain, it is now clear they fell far short of that mark with each and every prediction.
IPCC PREDICTIONS FOR 2015 – AND IRA’S
The colored bands extend to 2015 as do the central prediction arrows in the animation. The arrow heads at the ends of the dashed portion indicate IPCC central predictions for the Global temperature “anomaly” for 2015. My black arrow, from the actual 1990 Global temperature “anomaly” to the actual 2012 temperature “anomaly” also extends out to 2015, and let that be my prediction for 2015:
- IPCC FAR Prediction for 2015: 0.88˚C (range 0.56 to 1.2)
- IPCC SAR Prediction for 2015: 0.64˚C (range 0.52 to 0.75)
- IPCC TAR Prediction for 2015: 0.77˚C (range 0.55 to 0.98)
- IPCC AR4 Prediction for 2015: 0.79˚C (range 0.61 to 0.96)
- Ira Glickstein’s Central Prediction for 2015: 0.46˚C
Please note that the temperature “anomaly” for 1990 is 0.28˚C, so that amount must be subtracted from the above estimates to calculate the amount of warming predicted for the period from 1990 to 2015.
IF THEORY DIFFERS FROM OBSERVATIONS, THE THEORY IS WRONG
As Feynman famously pointed out, when actual observations over a period of time contradict predictions based on a given theory, that theory is wrong!
Global temperature observations over the more than two decades since the First IPCC Assessment Report demonstrate that the IPCC climate theory, and models based on that theory, are wrong. Therefore, they must be greatly modified or completely discarded. Looking at the scattershot “arrows” in the graphic, the IPCC has learned little from its misguided theories and flawed models, and has not improved them over the past two decades, so I cannot hold out much hope for the final version of Assessment Report #5 (AR5).
Keep in mind that the final AR5 is scheduled to be issued in 2013. It is uncertain whether Figure 1-4, the most honest IPCC effort of which I am aware, will survive the final cut. We shall see.
Ira Glickstein
Dr Glickstein (and mikerossander): many thanks and (referring to your December 20, 7:38 pm comment) I can no longer fault your analysis. In particular, I agree that there is likely to be a (probably self-serving) bias to the models, which to me also suggests “the dice are loaded”, leading to a higher probability of error than the original 1/10 for any single run – so the chance of four misses in a row is not as small as the 1/10,000 figure implied for 4 truly independent events.
Apropos of nothing, what actually first came to my mind (as a lawyer here in the UK for many years) was the notorious Sally Clark case here in the UK. She was wrongly convicted of the murder of two of her sons, both of whom died suddenly within a few weeks of birth. A paediatrician gave evidence that in his opinion the chance of two children from her well-off background suffering sudden infant death syndrome was 1 in 73 million, obtained by squaring 1/8,500 (his estimate of the likelihood of a single cot death occurring in similar circumstances). The jury convicted on that evidence, despite the judge giving a warning of the possible “prosecutor’s fallacy”. She was imprisoned for life and had served 4 years when it emerged that the pathologist had failed to disclose microbiological reports implying the possibility that at least one of her sons had died of (unlinked) natural causes. Her convictions were then overturned but (having lost two sons, then having been wrongly convicted of their murders) she never recovered – she died just 4 years later, aged 42. A very sad case.
The title of the post,
“An animated analysis of the IPCC AR5 graph shows ‘IPCC analysis methodology and computer models are seriously flawed’”
raises a question: Flawed for what purpose?
Obviously, current global temperatures are below what the models would have led us to believe. But the models can’t predict specific ENSO events in advance, or long-term solar output trends, at all. People who work with them, or are used to examining their output, know this, and can allow for the fact that unexpected ENSO events or solar forcings will give a real-life result that the models didn’t predict. But when the model results are presented to non-specialists, it’s hard to avoid this point being lost.
Foster and Rahmstorf have taken a stab at adjusting the temperature history for the ENSO/solar/volcanic history, with the aim of isolating the CO2 effects. They used a multivariate regression analysis, so the accuracy of their results will depend on whether the factors they examined (CO2, ENSO, solar output, and aerosols) leave out any significant contributors to temperature, and the extent to which those effects can, for the metrics they chose, be thrown together as linear, independent influences on temperature.
Models do include ENSO events at random, and it would be interesting to see what predictions came out when selecting runs with a strong El Nino bias in the late 1990s, and a strong La Nina bias recently. What I’d really like to see would be some models run using the known ENSO history and solar influences, for hindcasting. That would give a better idea of how well the models work, and what we might expect under various scenarios for future ENSO and solar influences.
unha says: December 20, 2012 at 7:39 pm:
Thank you for raising the exponential rate of error accumulation in GCM time step integrations.
When I could not understand how climatologists thought they could get sensible data from GCMs, I did some checking and found that the models use low-pass filters between integration steps to preserve conservation of energy, mass, and momentum, and to maintain “stability”. Even worse, they use pause/reset/restart techniques when physical laws are violated or the “climate trajectory” breaches boundary conditions.
All of this tells me that what they are trying to do is mathematically impossible.
davidmhoffer says: December 20, 2012 at 8:56 pm
************
“While there is high agreement that the initialization consistently improves several aspects of climate (like North Atlantic SST, with more than 75% of the models agreeing on the improvement signal), there is also high agreement that it can consistently degrade others (like the equatorial Pacific temperatures).”
************
How much more obvious could it be?
=========================================
Exactly. A frank admission of the inadequacy of the models. Tinker here, and – oops! – something degrades there. It contradicts the citation from Chapter 1 given above by Dr. Glickstein. The crack of Doom? Or a case of indiscipline?
An analysis of past climate history shows that during the period 1870 to 1910, global air temperatures and global ocean surface temperatures both declined as the sunspot number declined. From 1910 to 1940 all three again moved up together. From 1940 to the 1970s, global ocean surface temperatures declined as they entered their cool mode and wiped out the rise in global surface temperatures from the continuing sunspot increase. From 1980 to 2000 all three variables again moved up in unison. During the last decade, 2000–2010, all three climate variables are again going down as global cooling gets underway. This declining pattern is likely to continue until at least 2030. It would appear that a decadal average yearly sunspot number of about 30–45 is the tipping point: any level below this figure causes global cooling, and any level above it causes global warming, unless ocean cycles happen to be out of sync and override the warming (as in the 1950s–1970s). Most recently we have been running at an average yearly sunspot number of 29.2 over the last 10 years. This low figure clearly explains why there has been no warming for the last 16 years and why instead we are starting to see global cooling like that of 1880–1910. Not enough solar energy is being put into the planet to cause any warming.
The average yearly sunspot numbers during the Dalton Minimum decades (1790 to 1837), a period of much colder temperatures like 1880–1910, were 27.5, 16.5, 19.3 and 39. So there is some convincing evidence that low solar sunspot numbers and declining global temperatures are directly linked, and that we are already in a cooling phase like the one before.
Dave Hoffer: Always good to “see” you here with comments on my Topics. I had not read Chapter 11 of the IPCC leaked AR5 and I share your opinion of the statement you quoted. The IPCC climate models are biased towards human caused Global Warming and are therefore incapable of predicting Global Cooling or even low levels of warming.
Your comment led me to look at Chapter 11 and I found this amazing statement that confirms the IPCC bias towards blaming warming PRIMARILY on humans:
In other words, the IPCC, despite their “low … understanding of the impacts of solar activity”, continues to support the view that human (“anthropogenic”) activities are the PRIMARY cause of Global Warming. This flies in the face of the recent slowing of warming despite record increases in atmospheric CO2.
Ira
IRA
IPCC has been completely wrong about their winter climate predictions.
UNITED STATES
The winter temperature for the contiguous United States has been dropping since 1990 at -0.26 F per decade (per NCDC)
The annual temperature for the contiguous United States has been dropping since 1998 at -0.80 F per decade (per NCDC)
Basically, US winter temperatures have been flat, with no warming, for 20 years
CANADA
The annual temperature departure from 1961-1990 averages has been flat since 1998
The winter temperature anomaly has been rising mostly due to the warming of the far north and Atlantic coast only
8 of the 11 climate regions in other parts of Canada showed declining winter temperature departures since 1998
During the 2011/2012 winter the Canadian Arctic showed declining winter temperature departures
Yet the IPCC assessment for North America was:
All of North America is very likely to warm during this century, and the annual mean warming is likely to exceed the global mean warming in most areas. In northern regions, warming is likely to be largest in winter, and in the southwest USA largest in summer. The lowest winter temperatures are likely to increase more than the average winter temperature in northern North America
EUROPE
The winter temperature departures from 1961-1990 mean normals for land and sea regions of Europe have been flat, or even slightly dropping, for 20 years, i.e., since 1990
Yet the IPCC assessments of projected climate change for Europe was:
Annual mean temperatures in Europe are likely to increase more than the global mean. The warming in northern Europe is likely to be largest in winter and that in the Mediterranean area largest in summer. The lowest winter temperatures are likely to increase more than average winter temperature in northern Europe
It is not happening
Ira,
I’m in general a sceptic, and I find the graph you pulled from the report highly confusing. I think the poor quality of the graph has led you to totally misread it and, worse, to misapply it.
For AR4 as an example, the starting point for the hindcast / forecast is clearly 1990.
If you want to eliminate the hindcast portion of the AR4 fan, then you need to start your AR4 line from the middle of the AR4 hindcast for 2007, then connect that line to the center of the 2012 forecast. The slope of that line would be totally different from the slope of the line you got by mixing a 1990 starting point, a 2007 actual temperature, and a 2012 hindcast/forecast value.
As it currently is, I think your entire blog post should be withdrawn as simply being a misinterpretation of a really poorly done graph.
[GregF, you are entitled to your opinion. However, I find it somewhat ridiculous that the middle of the AR4 prediction fan (the brown and rust-colored band) is so far above the actual observations for the year before AR4 was issued as well as for the year AR4 was issued. It seems to me that a prediction should start with a known situation and predict the future from that point. Nevertheless, thanks for your input. – Ira]
JazzyT wrote:
It seems to me that we "non-specialists" who are not invested in the meme of human-caused Global Warming are more attuned to the abject failures of the IPCC models.
Please have a look at Figure 1-4 from the AR5 draft (above). The black vertical bars represent actual temperature observations as reported by the IPCC. Note that, IMMEDIATELY AFTER the IPCC released their FAR, SAR, and AR4, all predicting major temperature increases, the observations show strong temperature decreases! Indeed, there are four years worth of lower temperatures following the release of the FAR!
The IPCC cannot even predict ONE YEAR in advance and yet they have been somewhat successful in convincing the media and governments that drastic action must be taken to prevent runaway Global Warming.
Therefore, it is pretty clear (at least to me, an admitted "non-specialist") that common sense indicates a very strong bias on the part of the IPCC researchers to predict high rates of warming, even though three out of four of their predictions were followed by cooling in the very next year.
JazzyT asks about the IPCC models I claim are flawed: “Flawed for what purpose?” IMHO, for the purpose of keeping their funding by governments that are politically motivated to increase their control over the global economy, even if, as a result, our economy is wrecked.
Ira
Ira Glickstein;
Your comment led me to look at Chapter 11 and I found this amazing statement
>>>>>>>>>>>>>>
I’m only part way through it, but there are a few more beauts in there. One is that they predict 0.4 to 1.0 degrees of warming for 2016-2035 compared to 1986-2005, and they expect to be at the low end of that range. For starters, we are right now today at +0.2 compared to 1986-2005, so they only need +0.2 by 2016-2035 to hit their projection range. But they then hedge their bets further by stating that this is all based on the assumption that there will be rapid decreases in aerosol emissions over the next few years. No justification for the assumption that I can find, and it makes little sense given the rapidly industrializing economies of China, India and Brazil, which will ramp up emissions far faster than the western world can reduce them. Talk about a get out of jail free card! Nor can I find (so far anyway) how much of the warming they project is due to the decrease in aerosols that they project, so how much is actually left to attribute to CO2 is currently a mystery to me.
But here’s one that got the expletives going big time:
“It is virtually certain that globally-averaged surface and upper ocean (top 700m) temperatures averaged over 2016–2035 will be warmer than those averaged over 1986–2005”
Well duh! Since CURRENT temps are ALREADY 0.2 degrees above 1986-2005, we’d have to see a COOLING of 0.2 degrees by 2016-2035 for this to NOT be true!
And you have just got to love this one on surface ozone:
“There is high confidence that baseline surface ozone (O3) will change over the 21st century, although projections across the RCP, SRES, and alternative scenarios for different regions range from –4 to +5 ppb by 2030 and –14 to +15 ppb by 2100.”
Are they kidding? They are highly confident that it will be either higher, or lower, or about the same, but not exactly the same?
The more of it I read, the sadder it gets.
{ davidmhoffer says:
December 21, 2012 at 9:10 am }
RE:
“–4 to +5 ppb by 2030 and –14 to +15 ppb by 2100.”
LOL…Nice catch. But the real question is……(drumroll)
{ There is high confidence }
The best they can do is “high confidence”. I think it should be “very highly likely”, or maybe “almost certainly”, or even – dare I go there – “irrefutably robust”.
;<)
Ira Glickstein, PhD says:
December 21, 2012 at 7:57 am
Well first, the bit about “non-specialists” was not intended as a jab at anyone, and I regret it if that’s how it came through.
But I’ll try to clarify what I meant. Suppose a model prediction persistently fails to match reality within a stated tolerance. (I say “persistently” because one excursion could be a statistical fluke.) Now, if the model diverges from reality because processes that were modeled gave incorrect answers, then the model is not working. However, if reality does not match prediction solely because of processes that were not modeled, then it’s not the model that’s failed, although the prediction has failed.
Is this what has been happening? I don’t know. ENSO processes can’t be predicted, so they are modeled randomly. The real-life events of a super El Nino in 1998 and double-dip La Ninas recently tend to flatten out temperatures. These won’t match the mean of model runs using random ENSO processes, some of which would raise the trend and others lower or flatten it. Weak solar output over this cycle and the last contributes more to temperature flattening. How would the temperature curve have looked without these? Would it have matched the model predictions?
There’s been one statistical attempt to deal with all these processes that could not have been included in model predictions (because they’re unpredictable). But that gives the best fit to the data, which is not necessarily the most physically plausible interpretation. That’s why I’d like to see some model runs that actually include the ENSO and solar events of the last 15 years, as they actually happened. That would have a lot to say about how well the model is working in general.
Now, the climate modelers understand these issues very well. They may be exposed to the risk of confusing models with reality, but they do know what’s in the models and what isn’t. When I see a peer-reviewed article about models, the language seems appropriately cautious, trying to state simplifying assumptions and areas of uncertainty. When it gets into the IPCC scientific summary, it gets compressed and these caveats lose detail. In the summary for policymakers, the technical details are likely to be left out. By the time it has been digested by the mass media, possibly several times, it reaches people who have no reason to worry about how the models work. At that point, they see the prediction, but none of the caveats.
So, the divergence of models from reality is clearly due, partly, to things that just weren’t modeled. But the predictions, as communicated to the public, didn’t include that as a possibility. So, if you want to define a model at each stage–modelers, two (or three) layers of IPCC, and one or more runs through the mass media–well, the end prediction could be called a model too. And, the predictions that came out at the end certainly didn’t work. And that’s a problem. How much of it was in the code and how much in the communication–that’s what I’d like to find out.
JazzyT, The communication ignores more than just the possibility of divergence because of things that weren’t modeled, like volcanoes, ENSO, and a change in solar activity. It also generally ignores the diagnostic literature documenting problems in the things that were modeled. Models may not seem that far wrong when consideration is given to the things that could not be modeled in advance, but they can achieve that by just following the trend linearly for a while. They diverge from that in longer-range projections, and are not credible when we know they have “matched” the climate incorrectly. They have documented correlated errors larger than the phenomenon of interest.
JazzyT: As Yogi Berra famously said, “It’s tough to make predictions, especially about the future.”
Those of us who come up with scientific theories and make predictions about the future know that no model can capture the total reality, because, if it did, it would BE the reality. We develop scientific theories and use them to make models and then compare the results of those models to observations of the real world for two purposes: (1) to better prepare ourselves and our society for future developments that the scientific theory and models based on that theory predict are likely to occur, and (2) to test the underlying scientific theory against actual observations that may possibly strengthen our acceptance of the theory, or may disprove the theory.
With that in mind, let me address the points in your latest comment:
In other words, “the operation was a success but the patient died.” :^)
I agree that we cannot blame the model, per se, if something extraordinary happens that was not considered when the model was constructed.
However, when, as in the case of the IPCC Climate Models, the abject failure to match reality happens four times in a row, over a period of 22 years, always in the direction of unreasonably high Global Warming predictions, and with each updated model adjusted and tuned against five or six years of additional observations, I think it is pretty clear that something is very wrong with the underlying Climate Theory.
I guess it is possible that four prediction failures in a row are just due to bad luck and that the IPCC Climate Theoreticians and “climate modelers understand these issues very well.” Perhaps they are all competent climate scientists and nice guys who have chanced into a string of bad luck. Perhaps their theory that human civilization is the PRIMARY cause of the Global Warming experienced in the latter half of the 20th century is correct. Perhaps rising atmospheric CO2 and land use that affects the Earth’s albedo and can be blamed on human actions is actually a stronger force than natural cycles of the Earth and Sun.
Perhaps they were justified when they used their over-stated predictions of these flawed climate models and hyped them in the media to strike fear of a Global Warming “tipping point” that would destroy human civilization. Is it really their fault that the fear they generated led to hysterical international and national government actions, such as subsidizing ethanol and other “green” energy schemes that have, in part, wrecked the economies of the US and Europe?
They may have had fine motives to start with, and, like Chicken Little, may have really thought they were saving humanity from itself. However, assuming good motives to start with, don’t you think they should have moderated their hype and fear campaign by now? After four failures of their Climate Theory?
It seems to me that the powers that be in the IPCC (and the US Goddard Institute, UK Climategate Research Unit, and other members of the Official Climate “hockey” Team) have a not so well hidden agenda. They want to continue to rake in the government funding they need to continue to earn a living and publish papers in prestigious peer-reviewed journals and appear on TV programs as “experts”. They cling to their underlying Climate Theory and cannot get themselves to be honest and admit that natural cycles of the Sun and Earth, not under human control or influence, are the actual PRIMARY cause of Global Warming (and Cooling). To admit that human actions are a SECONDARY cause would put them out of business.
Bottom line: As Feynman says in the video I linked in the main Topic above,
Ira
Ira Glickstein, PhD says:
December 22, 2012 at 8:08 am
This happens sometimes. But if the patient died in a traffic accident as their spouse was driving them home from the hospital, it would take a rather brazen lawyer to sue the surgeon for malpractice. :->
But we’re on the same page as far as what’s in and out of the models.
I couldn’t help noticing something else, and I’m surprised I didn’t see it come up in the thread. With the arrow metaphor, of course, we score a hit when the arrow hits the target. The target, in this case, could be the actual temperature…or, you could say that the temperature was the bulls-eye, and the scoring rings extend to the edge of the error bars. But 2012 has no error bars, and when viewing the animation, the eye naturally goes to the last year with error bars, 2011. Two of the arrows, SAR and AR4, actually hit 2011, not in the bulls-eye, by any means, but still in scoring range. It’s the same for 2010. The arrows would probably not hit the error bars for 2012 once those are available, but insisting on using 2012 and disregarding the two previous years would invite a charge of cherry-picking.
Others have covered things like picking the starting point, how to get the slope, etc. I’ll only add that I’m old enough to have learned how to do a linear fit to the data by eye (and, in fact, they still have students do this at least once or twice in a college physics lab, to make the students interact with their data). When I do that, I get a slope that is, by eye, slightly lower than that of FAR, higher than SAR, lower than TAR, and distinctly lower than AR4.
But it seems strange to compare the slope for the entire series with the slopes for each model. Why would each model’s predictions for the future be tested against the past? It seems that you’d want four slopes for measured data, each starting at the time of the corresponding model’s predictions. But then, AR4’s slope would be completely impractical due to the short time interval, and TAR’s could be dodgy as well.
If you want to do this again when 2012 data are complete, well, those are the issues I noticed, which others would surely notice if this is released to a wider audience. Now they’re in the same pile as everyone else’s comments; some stuff from that pile will probably be useful for the next version.
Wouldn’t it be a hoot if that’s what actually happens! (I suspect the Pranksters on Olympus are thinking the same way.)
As per my comment upthread:
Ira
“Those of us who come up with scientific theories and make predictions about the future know that no model can capture the total reality, because, if it did, it would BE the reality.”
In my opinion, there is nothing wrong with scientists doing model work to understand the climate. Personally, I think one is trying to model something with too many variables that cannot be predicted or modeled completely. Where I have a more serious concern is when unproven, purely experimental models are portrayed as solid science and thrust into the public domain to shape public policy. This is very expensive, wasteful, and burdensome on society. These models should remain experimental until there is sufficient evidence of a high level of success. In my judgment, we are decades away from that point when it comes to climate.
There used to be a rule of thumb in engineering work that one should make all changes and alternate-options studies during the conceptual design stage, because if you make major changes as you progress from concept to detail design to procurement and finally construction, the costs go up progressively and can be 100- to 1,000-fold higher than during the concept stage. Yet when it comes to climate science we are doing exactly the opposite. We are into the implementation and construction stage of energy changes, environmental actions, and public policy while the models are still at the concept and unproven stage. So the whole planet is now like a big experiment where these scientists are allowed to play around with public resources, energy options, and taxpayers’ money based only on questionable science and unproven models, most of which have been seriously wrong predicting just the first few years ahead. Successful hindcasting does not prove a model, as it is too easy to feed in fudge factors and tweak the model to give a known answer. Successfully predicting decades into the future is the only true test, in my opinion.
herkimer says: December 23, 2012 at 6:52 am
There used to be a rule of thumb in engineering work that one should make all changes and alternate-options studies during the conceptual design stage, because if you make major changes as you progress from concept to detail design to procurement and finally construction, the costs go up progressively and can be 100- to 1,000-fold higher than during the concept stage…
>>>>>>>>>>>>>>>>>>>>>>>>>
And any company that has its head on straight gathers all of its technical personnel together to have a go at ripping the design to shreds while in the pilot stage, BEFORE it gets expensive.
This is what the most successful company I worked for did, with very good results. Sadly it is not common, because of the delicate sensibilities of the scientists/engineers who head projects and who cannot stand criticism. It takes a brave soul to present his ‘baby’ to the critiquing wolves.
JazzyT: You seem bound and determined to ignore the role of the IPCC Climate Theorists in the failure of the Climate Theory that underlies the Climate Models.
We agree that the Climate Modellers, following the lead of the Theorists, did not allow for some natural variations due to cycles of the Earth and Sun that are not under the control or influence of us humans. They also modelled CO2 Climate Sensitivity way too high, again following the lead of the Climate Theorists who run the IPCC and the Official Climate “hockey” Team. Atmospheric CO2 continues to rise at a rapid rate, yet Global Warming has slowed or stalled, because the high-CO2-Sensitivity THEORY is wrong.
You wrote in an earlier comment: “… if reality does not match prediction solely because of processes that were not modeled, then it’s not the model that’s failed, although the prediction has failed.” and I replied “In other words, ‘the operation was a success but the patient died.’ :^)”
In your most recent comment, you say:
I agree that if the doctors DIAGNOSED the patient correctly and the surgeon gave him the proper operation and he received adequate treatment, and he died as a result of a traffic accident, that could not be blamed on the Medical Establishment.
However, do you agree that if the patient was MISDIAGNOSED (say with heart disease when his actual problem was heartburn) and the patient was therefore subjected to an unnecessary operation that, as far as it went, was successful, but he happened to die as a result of a hospital infection, that should be blamed on the Medical Establishment? What if there was a pattern of MISDIAGNOSIS and evidence the hospital had done that four times in a row? What if there was reason to suspect the MISDIAGNOSIS was not an accident but rather a way to increase the income of the hospital or to help the surgeon make a payment on his yacht?
***************
In your latest comment you also note that, looking at 2010 temperature observations, two of the four IPCC “arrows” graze the outer scoring ring of the target, and, when the final 2012 data comes in, they may in fact be determined to have hit the outer edge of the target. OK, I think they should get partial credit for that. What grade would you give them for totally missing the target twice and hitting the outer ring twice? By the way, I teach an online grad course in System Engineering at the University of Maryland, and, even with that type of partial credit, they would not pass the course :^)
Ira
Herkimer wrote:
THANKS, Herkimer, and I agree 100%
I considered how we ran projects during my long career as a System Engineer as I read JazzyT’s statement that: “… if reality does not match prediction solely because of processes that were not modeled, then it’s not the model that’s failed, although the prediction has failed.”
In System Engineering we did BOTH System Verification and System Validation:
1) Verification compared the system design and implementation with the Specifications. If the Specifications said “A, B, and C …” did the designers and implementers actually provide “A, B, and C …”? In other words, “Did we build the system right?” I would say that the Climate Modellers passed that threshold. They coded what they were given by the Climate Theorists.
2) Validation compares what the system actually does with what the users actually need. Does the system perform the mission effectively? In other words, “Did we build the right system?” I would say the Climate Theorists gave the modellers the wrong specifications, and therefore the wrong system was built!
And, of course I agree we should never go into production with the wrong system. That does not satisfy the mission, wastes money, and serves none of the stakeholders well.
As you point out, blindly accepting the catastrophic predictions of climate models based on flawed Climate Theory has wasted taxpayer money. IMHO, public funding of harebrained “green” energy schemes has benefitted no one but the Official Climate Establishment and politically-connected industries. Theories must be VALIDATED before predictions based on them are implemented on any large scale.
Ira
Ira Glickstein, PhD says:
December 23, 2012 at 10:51 am
….As you point out, blindly accepting the catastrophic predictions of climate models based on flawed Climate Theory has wasted taxpayer money. IMHO, public funding of harebrained “green” energy schemes has benefited no one but the Official Climate Establishment and politically-connected industries. Theories must be VALIDATED before predictions based on them are implemented on any large scale.
>>>>>>>>>>>>>>>>>>>>>>>>>
Too bad the run-of-the-mill taxpayer who is being scammed cannot see that. One wonders just how bad the backlash will be when realization hits. Given the acceptance of the Banker bailout fiasco by those who were conned, it looks like everyone will take it lying down – or maybe not….
I think a friend’s four year old had the right idea when she said she wanted to grow-up to be a government. (She now works in DC)
This entire exercise, I think, has been made considerably worse by having the scientific and political mandates together at the UN/IPCC, where the political objective of collecting money and redistributing wealth dictates the scientific mandate and clouds scientific objectivity. Things are being rushed where there is no reason to rush, as we now see that the warming will not be anywhere near the rate predicted. We have the time to do things right, with the right science.
I’m assuming that the data points in the graphic only go as far as 2011 (?)
How was the increase in temperature of between 0.12 and 0.16 degrees C, between 1990 and 2011, calculated in the animated graphic? It appears to me that this was done using only the first and last data points in the chart (1990 and 2011). If so, then I don’t think this is the best method for estimating the increase in temperature. I think linear regression would be better, as it uses all of the data points and thus reduces the influence of year-to-year variability.
Using annual global combined (land and ocean) surface temperature anomaly data from 3 data sets (GISS, HadCrut4, NOAA/NCDC) I calculated the slope of the regression line between 1990 and 2011, and estimated the increase in temperature in degrees C between 1990 and 2011 to be 0.33 for HadCrut4, 0.33 for NOAA/NCDC, and 0.37 for GISS.
Admittedly, the estimates obtained above are most likely too high, as the slope of the regression line would be steepened by Mount Pinatubo erupting in 1991, so I did 2 very simple alternative analyses to adjust for this:
Firstly, I re-calculated the temperature anomalies for 1991, 1992, 1993 and 1994 as the average of the anomalies for 1990 and 1995. When I did this, the increase in temperature in degrees C between 1990 and 2011 was estimated to be 0.23 for HadCrut4, 0.23 for NOAA/NCDC, and 0.25 for GISS.
Secondly, I re-calculated the temperature anomalies for 1991, 1992, 1993 and 1994 using simple linear interpolation (from the temperature anomalies for 1990 and 1995). This gave identical results to 2 decimal places (i.e. 0.23 degrees C for HadCrut4, 0.23 for NOAA/NCDC, and 0.25 for GISS).
Therefore, unless I’m missing something, or unless I’ve made a mistake in my calculations, the graphic’s suggestion that the actual increase in global surface temperature from 1990 to 2011 was between 0.12 and 0.16 degrees C seems misleading to me.