Model Charged with Excessive Use of Forcing

Guest Post by Willis Eschenbach

The GISS Model E is the workhorse of NASA’s climate models. I got interested in the GISSE hindcasts of the 20th century because of an interesting post by Lucia over at the Blackboard. She built a simple model (which she calls “Lumpy”) that does a pretty good job of emulating the GISS model results using only the forcings and a time lag. Steven Mosher points out how to access the NASA data here (with a good discussion), so I went to the NASA site he indicated and got the GISSE results he points to. I plotted them against the GISS version of the global surface air temperature record in Figure 1.

Figure 1. GISSE Global Circulation Model (GCM or “global climate model”) hindcast 1880-2003, and GISS Global Temperature (GISSTemp) Data. Photo shows the new NASA 15,000-processor “Discover” supercomputer. Top speed is 160 trillion floating point operations per second (a unit known by the lovely name of “teraflops”). What it does in a day would take my desktop computer seventeen years.

Now, that all looks impressive. The model hindcast temperatures are a reasonable match to the observed temperatures, both by eyeball and mathematically (R^2 = 0.60). True, it misses the early 20th century warming (1920-1940) entirely, but overall it’s a pretty close fit. And the supercomputer does 160 teraflops. So what could go wrong?

To try to understand the GISSE model, I got the forcings used for the GISSE simulation. The forcings were yearly averages, so I compared their total to the yearly results of the GISSE model. Figure 2 shows a comparison of the GISSE model hindcast temperatures and a linear regression of those temperatures on the total forcings.

Figure 2. A comparison of the GISSE annual model results with a linear regression of those results on the total forcing. (A “linear regression” estimates the best fit of the forcings to the model results). Total forcing is the sum of all forcings used by the GISSE model, including volcanos, solar, GHGs, aerosols, and the like. Deep drops in the forcings (and in the model results) are the result of stratospheric aerosols from volcanic eruptions.

Now to my untutored eye, Fig. 2 has all the hallmarks of a linear model with a missing constant trend of unknown origin. (The hallmarks are the obvious similarity in shape combined with differing trends and a low R^2.) To see if that was the case, I redid my analysis, this time including a constant trend. As is my custom, I simply included the year of each observation in the analysis to get that trend. That gave me Figure 3.

Figure 3. A comparison of the GISSE annual model results with a regression of those results on the total forcing, including a constant annual trend. Note the very large increase in R^2 compared to Fig. 2, and the near-perfect match of the two datasets.
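As a concrete illustration of the two-predictor fit just described (total forcing plus a constant annual trend), here is a minimal sketch in Python. The data are synthetic stand-ins, not the real series (those are in the worksheet linked at the end of the post), and the coefficients 0.13 and 0.25 °C/century are simply seeded to mimic the fitted values quoted below.

```python
import numpy as np

# Synthetic stand-ins for the GISSE series (NOT the real data):
# a noisy upward-trending total forcing, and "model temperatures"
# built as 0.13 * forcing + 0.25 C/century + noise.
rng = np.random.default_rng(0)
t = np.arange(124)  # years since 1880, through 2003
forcing = 0.02 * t + 0.3 * rng.standard_normal(t.size)                    # W/m2
temps = 0.13 * forcing + 0.0025 * t + 0.05 * rng.standard_normal(t.size)  # deg C

# Regress the model temperatures on [forcing, year, intercept]
X = np.column_stack([forcing, t, np.ones(t.size)])
coef, _, _, _ = np.linalg.lstsq(X, temps, rcond=None)
fitted = X @ coef

# Goodness of fit
r2 = 1 - np.sum((temps - fitted) ** 2) / np.sum((temps - temps.mean()) ** 2)
print(f"{coef[0]:.2f} C per W/m2, {coef[1] * 100:.2f} C/century, R^2 = {r2:.2f}")
```

The regression recovers the sensitivity and the per-century trend as two separate coefficients, which is exactly what lets the trend be reported independently of the forcing response.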

There are several surprising things in Figure 3, and I’m not sure I see all of the implications of those things yet. The first surprise was how close the model results are to a bozo simple linear response to the forcings plus the passage of time (R^2 = 0.91, average error less than a tenth of a degree). Foolish me, I had the idea that somehow the models were producing some kind of more sophisticated, complex, lagged, non-linear response to the forcings than that.

This almost completely linear response of the GISSE model makes it trivially easy to create IPCC style “scenarios” of the next hundred years of the climate. We just use our magic GISSE formula, that future temperature change is equal to 0.13 times the forcing change plus a quarter of a degree per century, and we can forecast the temperature change corresponding to any combination of projected future forcings …
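That “magic formula” is short enough to write down directly. A sketch, using the fitted coefficients from this analysis (these are the regression values described above, not official NASA parameters):

```python
def gisse_emulator(delta_forcing_wm2, years_elapsed):
    """Approximate GISSE temperature change (deg C): 0.13 C per W/m2
    of forcing change, plus the 0.25 C/century built-in trend."""
    return 0.13 * delta_forcing_wm2 + 0.25 * (years_elapsed / 100.0)

# e.g. a 3.7 W/m2 forcing increase (roughly one CO2 doubling) over a century:
print(round(gisse_emulator(3.7, 100), 2))  # 0.73
```

Any projected forcing scenario can be run through this one-liner, which is the point: the scenario machinery reduces to two numbers.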

Second, this analysis strongly suggests that in the absence of any change in forcing, the GISSE model still warms. This is in agreement with the results of the control runs of the GISSE and other models that I discussed at the end of my post here. The GISSE control runs also showed warming when there was no change in forcing. This is a most unsettling result, particularly since other models showed similar (and in some cases larger) warming in the control runs.

Third, the climate sensitivity shown by the analysis is only 0.13°C per W/m2 (0.5°C per doubling of CO2). This is far below the official NASA estimate of the response of the GISSE model to the forcings. They put the climate sensitivity from the GISSE model at about 0.7°C per W/m2 (2.7°C per doubling of CO2). I do not know why their official number is so different.

I thought the difference in calculated sensitivities might be because they have not taken account of the underlying warming trend of the model itself. However, when the analysis is done leaving out the warming trend of the model (Fig. 2), I get a sensitivity of 0.34°C per W/m2 (1.3°C per doubling). So that doesn’t solve the puzzle either. Unless I’ve made a foolish mathematical mistake (always a possibility for anyone, check my work), the sensitivity calculated from the GISSE results is half a degree of warming per doubling of CO2 …
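For reference, the per-doubling numbers here come from converting a sensitivity in °C per W/m2 using the radiative forcing of one CO2 doubling. The post does not state the factor it used; the conventional value is about 3.7 W/m2 per doubling, which is assumed below and reproduces the quoted figures to within rounding:

```python
F_2XCO2 = 3.7  # W/m2 per doubling of CO2 (conventional value; an assumption here)

def per_doubling(sens_c_per_wm2):
    """Convert a sensitivity in C per W/m2 to C per CO2 doubling."""
    return sens_c_per_wm2 * F_2XCO2

print(round(per_doubling(0.13), 1))  # 0.5 -- the fit with the trend term
print(round(per_doubling(0.34), 1))  # 1.3 -- the fit without the trend term
print(round(per_doubling(0.70), 1))  # 2.6 -- close to NASA's quoted 2.7
```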

Troubled by that analysis, I looked further. The total forcing is close to the model results, but not exact. Since I was using the sum of the forcings, obviously some forcings make more difference in their model than others. So I decided to remove the volcano forcing, to get a better idea of what else was in the forcing mix. The volcanos are the only forcing that makes such large changes on a short timescale (months). Removing the volcanos allowed me to regress the model results (volcanos removed) on each of the other forcings, to see how well each one did. Figure 4 shows that result:

Figure 4. GISSE hindcast temperature results (volcano effect removed) regressed on each of the other forcings. Forcing abbreviations (used in original dataset): W-M_GHGs = Well Mixed Greenhouse Gases; O3 = Ozone; StratH2O = Stratospheric Water Vapor; Solar = Energy From The Sun; LandUse = Changes in Land Use and Land Cover; SnowAlb = Albedo from Changes in Snow Cover; StratAer = Stratospheric Aerosols from volcanos; BC = Black Carbon; ReflAer = Reflective Aerosols; AIE = Aerosol Indirect Effect. Numbers in parentheses show how well the various forcings explain the remaining model results, with 1.0 being a perfect score. (The number is called R squared, usually written R^2.) Photo Source
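One way to carry out the procedure just described (fit and subtract the volcanic signal, then score each remaining forcing against what is left) is sketched below. The series are synthetic stand-ins for the real data, and only one “remaining” forcing is shown for brevity:

```python
import numpy as np

def r_squared(x, y):
    """R^2 of a simple linear regression of y on x."""
    X = np.column_stack([x, np.ones(len(x))])
    coef, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return 1 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)

# Synthetic stand-ins (NOT the GISSE data): sporadic volcanic dips,
# a smooth GHG-like ramp, and "model temperatures" driven by both.
rng = np.random.default_rng(1)
t = np.arange(124)
volcano = -np.where(rng.random(t.size) < 0.05, 2 * rng.random(t.size), 0.0)
ghg = 0.02 * t
temps = 0.13 * (volcano + ghg) + 0.03 * rng.standard_normal(t.size)

# Fit and subtract the volcanic signal ...
X = np.column_stack([volcano, np.ones(t.size)])
coef, _, _, _ = np.linalg.lstsq(X, temps, rcond=None)
residual = temps - X @ coef

# ... then score a remaining forcing against the residual.
print(f"R^2 of the GHG-like forcing: {r_squared(ghg, residual):.2f}")
```

In the real exercise each of the non-volcanic forcings would be scored this way in turn, which is what the parenthesized numbers in Figure 4 report.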

Now, this is again interesting. Once the effect of the volcanos is removed, there is very little difference in how well the other forcings explain the remainder. With the obvious exception of solar, the R^2 values of most of the forcings are quite similar. The only two that outperform a simple straight line are stratospheric water vapor and GHGs, and then only by 0.01.

I wanted to look at the shape of the forcings to see if I could understand this better. Figure 5 has NASA GISS’s view of the forcings, shown at their actual sizes:

Figure 5: The radiative forcings used by the GISSE model as shown by GISS. SOURCE

Well, that didn’t tell me a lot (not GISS’s fault, just the wrong chart for my purpose), so I took the forcing data, standardized it, and looked at the forcings in a form in which they could all be seen. I found that the reason they all fit so well lies in their shape. All of them change slowly (whether in the negative or positive direction) until 1950. After that, they change more quickly. To see these shapes, it is necessary to standardize the forcings so that they are all the same size. Figure 6 shows what the forcings used by the model look like after standardization:

Figure 6. Forcings for the GISSE model hindcast 1880-2003. Forcings have been “standardized” (set to a standard deviation of 1.0) and set to start at zero as in Figure 4.
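For clarity, “standardized” here means what the caption says: each forcing series is scaled to a standard deviation of 1.0 and shifted to start at zero. A minimal sketch of that transformation:

```python
import numpy as np

def standardize(series):
    """Scale a series to unit standard deviation, then shift it to start at zero."""
    scaled = np.asarray(series, dtype=float) / np.std(series)
    return scaled - scaled[0]

z = standardize([0.0, 0.1, 0.15, 0.4, 0.9])  # made-up forcing values, W/m2
print(z[0], round(float(np.std(z)), 2))      # 0.0 1.0
```

This preserves each forcing’s shape while putting them all on a common scale, which is what makes the similarity of their shapes visible in Figure 6.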

There are several oddities about their forcings. First, I had assumed that the forcings used were based at least loosely on reality. To make this true, I need to radically redefine “loosely”. You’ll note that by some strange coincidence, many of the forcings go flat from 1990 onwards … loose. Does anyone believe that all those forcings (O3, Landuse, Aerosol Indirect, Aerosol Reflective, Snow Albedo, Black Carbon) really stopped changing in 1990? (It is possible that this is a typographical or other error in the dataset. This idea is supported by the slight post-1990 divergence of the model results from the forcings as seen in Fig. 3)

Next, take a look at the curves for snow albedo and black carbon. It’s hard to see the snow albedo curve, because it is behind the black carbon curve. Why should the shapes of those two curves be nearly identical? … loose.

Next, in many cases the “curves” for the forcings are made up of a few straight lines. Whatever the forcings might or might not be, they are not straight lines.

Next, with the exception of solar and volcanoes, the shape of all of the remaining forcings is very similar. They are all highly correlated, and none of them (including CO2) is much different from a straight line.

Where did these very strange forcings come from? The answer is neatly encompassed in “Twentieth century climate model response and climate sensitivity”, Kiehl, GRL 2007 (emphasis mine):

A large number of climate modeling groups have carried out simulations of the 20th century. These simulations employed a number of forcing agents in the simulations. Although there are established data for the time evolution of well-mixed greenhouse gases [and solar and volcanos although Kiehl doesn’t mention them], there are no established standard datasets for ozone, aerosols or natural forcing factors.

Lest you think that there is at least some factual basis to the GISSE forcings, let’s look again at the black carbon and snow albedo forcings. Black carbon is known to melt snow, and this is an issue in the Arctic, so there is a plausible mechanism to connect the two. This is likely why the shapes of the two are similar in the GISSE forcings. But what about that shape, increasing over the period of analysis? Here’s one of the few actual records of black carbon in the 20th century, from “20th-Century Industrial Black Carbon Emissions Altered Arctic Climate Forcing”, Science Magazine (paywalled):

Figure 7. An ice core record from the Greenland ice cap showing the amount of black carbon trapped in the ice, year by year. The summer spikes are from large forest fires.

Note that rather than increasing over the century as GISSE claims, the observed black carbon levels peaked in about 1910-1920, and have been generally decreasing since then.

So in addition to the dozens of parameters that they can tune in the climate models, the GISS folks and the other modelers got to make up some of their own forcings out of whole cloth … and then they get to tell us proudly that their model hindcasts do well at fitting the historical record.

To close, Figure 8 shows the best part, the final part of the game:

Figure 8. ORIGINAL IPCC CAPTION (emphasis mine). A climate model can be used to simulate the temperature changes that occur from both natural and anthropogenic causes. The simulations in a) were done with only natural forcings: solar variation and volcanic activity. In b) only anthropogenic forcings are included: greenhouse gases and sulfate aerosols. In c) both natural and anthropogenic forcings are included. The best match is obtained when both forcings are combined, as in c). Natural forcing alone cannot explain the global warming over the last 50 years. Source

Here is the sting in the tale. They have designed the perfect forcings, and adjusted the model parameters carefully, to match the historical observations. Having done so, the modelers then claim that the fact that their model no longer matches historical observations when you take out some of their forcings means that “natural forcing alone cannot explain” recent warming … what, what?

You mean that if you tune a model with certain inputs, then remove one or more of the inputs used in the tuning, your results are not as good as with all of the inputs included? I’m shocked, I tell you. Who would have guessed?

The IPCC actually says that because the tuned models don’t work well with part of their input removed, this shows that humans are the cause of the warming … not sure what I can say about that.

What I Learned

1. To a very close approximation (R^2 = 0.91, average error less than a tenth of a degree C) the GISS model output can be replicated by a simple linear transformation of the total forcing and the elapsed time. Since the climate is known to be a non-linear, chaotic system, this does not bode well for the use of GISSE or other similar models.

2. The GISSE model illustrates that when hindcasting the 20th century, the modelers were free to design their own forcings. This explains why, despite having climate sensitivities ranging from 1.8 to 4.2, the various climate models all provide hindcasts which are very close to the historical records. The models are tuned, and the forcings are chosen, to do just that.

3. The GISSE model results show a climate sensitivity of half a degree per doubling of CO2, far below the IPCC value.

4. Most of the assumed GISS forcings vary little from a straight line (except for some of them going flat in 1990).

5. The modelers truly must believe that the future evolution of the climate can be calculated using a simple linear function of the forcings. Me, I misdoubts that …

In closing, let me try to anticipate some objections that people will likely have to this analysis.

1. But that’s not what the GISSE computer is actually doing! It’s doing a whole bunch of really really complicated mathematical stuff that represents the real climate and requires 160 teraflops to calculate, not some simple equation. This is true. However, since their model results can be replicated so exactly by this simple linear model, we can say that considered as black boxes the two models are certainly equivalent, and explore the implications of that equivalence.

2. That’s not a new finding, everyone already knew the models were linear. I also thought the models were linear, but I have never been able to establish this mathematically. I also did not realize how rigid the linearity was.

3. Is there really an inherent linear warming trend built into the model? I don’t know … but there is something in the model that acts just like a built-in inherent linear warming. So in practice, whether the linear warming trend is built-in, or the model just acts as though it is built-in, the outcome is the same. (As a side note, although the high R^2 of 0.91 argues against the possibility of things improving a whole lot by including a simple lagging term, Lucia’s model is worth exploring further.)

4. Is this all a result of bad faith or intentional deception on the part of the modelers? I doubt it very much. I suspect that the choice of forcings and the other parts of the model “jes’ growed”, as Topsy said. My best guess is that this is the result of hundreds of small, incremental decisions and changes made over decades in the forcings, the model code, and the parameters.

5. If what you say is true, why has no one been able to successfully model the system without including anthropogenic forcing?

Glad you asked. Since the GISS model can be represented as a simple linear model, we can use the same model with only natural forcings. Here’s a first cut at that:

Figure 9. Model of the climate using only natural forcings (top panel). All forcings model from Figure 3 included in lower panel for comparison. Yes, the R^2 with only natural forcings is smaller, but it is still a pretty reasonable model.

6. But, but … you can’t just include a 0.42 degree warming like that! For all practical purposes, GISSE does the same thing only with different numbers, so you’ll have to take that up with them. See the US Supreme Court ruling in the case of Sauce For The Goose vs. Sauce For The Gander.

7. The model’s inherent warming trend doesn’t matter, because the final results for the IPCC scenarios show the change from model control runs, not absolute values. As a result, the warming trend cancels out, and we are left with the variation due to forcings. While this sounds eminently reasonable, consider that if you use their recommended procedure (cancel out the 0.25°C-per-century inherent warming trend) on their 20th century hindcast shown above, it gives an incorrect answer … so that argument doesn’t make sense.
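To put a rough number on why the cancellation fails: over the 1880-2003 hindcast span, the trend term contributes about 0.3°C, and the hindcast needs that warming to match observations. A back-of-envelope sketch using the 0.25°C/century value fitted earlier in the post:

```python
# Warming contributed by the built-in trend over the hindcast span,
# using the 0.25 C/century value fitted earlier in the post.
years = 2003 - 1880              # hindcast span in years
built_in = 0.25 * years / 100.0  # deg C from the trend term alone
print(round(built_in, 2))        # 0.31
```

Subtract that from the hindcast (as the control-run procedure implies) and the fit to the 20th-century record visibly degrades, which is the point of the objection.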

To simplify access to the data, I have put the forcings, the model response, and the GISS temperature datasets online here as an Excel worksheet. The worksheet also contains the calculations used to produce Figure 3.

And as always, the scientific work of a thousand hands continues.

Regards,

w.

 

[UPDATE: This discussion continues at Where Did I Put That Energy.]

Urederra
December 19, 2010 6:24 pm

Why do they need a 160 teraflops supercomputer?
To make forcings ‘a la carte’ that fit with their desired predictions.
It is ecneics, science made backwards. Instead of getting empirical data and using it to create climate models, they use the climate models to make up the data needed for their desired prediction.

John F. Hultquist
December 19, 2010 6:25 pm

For those that do not like Willis’ backgrounds, you could seek –for a small fee—his image work-flow files. Then you could vary the opacity of that layer (after promoting it) until it suits your taste. While there is no “arguing about tastes and colors”, Willis probably likes green as well as most of us, so all could be happy.
http://www.pixalo.com/community/tutorials-guides/high-key-18631.html
I’m with Rhett Butler on this issue. I don’t give a . . .
—————————————
I found the post and comments interesting and educational. Thanks, Willis and the rest.

Chris
December 19, 2010 6:41 pm

My beef is the trend of aerosols assumed in the models. Always getting worse? Excuse me? What about the worsening air quality from the 1940s to the 1970s in the industrial northern hemisphere as compared to now? The modelers erased this fact like they erased the MWP. It’s all rubbish.

3x2
December 19, 2010 6:42 pm

Bart says: December 19, 2010 at 3:17 pm
[…]Which means that, essentially, all it is, is a glorified multivariable curve fit[…]

Yes, you do have to wonder as to how many more “forcings” would be required to fit the “global average” graph exactly.
The problem with models is that subconsciously you need a “ball park” to play into. By that I simply mean that, whether you have designed it from scratch or modified some readily available code, examining the output forces you to look at what others have published and then decide if your run is in the “ball park”. Supposing that my run suggests twenty years of cooling… do I publish or re-write the code? Twenty years of rapid warming … publish or re-write? Somewhere between all the other models.. publish or re-write? I know which one would be safe.
Given that the “global mean” itself (gridded RSM, FDM) is a model, when designing “V3_mean” what “ball park” am I testing against and how will I know that I’m in it?
Just how much synergy exists between the models and the “global mean” within a particular organisation? GISS “real” temps and GISS modelled temps track each other if you include enough variables… colour me shocked pink.
Perhaps those treenometers and their hidden decline have been right all along … how would I know? Then again we will always have harry..read..me…

Anton Eagle
December 19, 2010 6:42 pm

Willis,
Let’s go back and take a look at that figure 4 for a moment. I think that graph DOES tell us something useful about the assumptions behind their modelling. For example, take a look at the line for “Land Use”. They are essentially saying that land use has had, and will have, no effect on climate whatsoever. Seriously?
The magnitude of the assumed forcing due to GHG dominates their model (and their thinking) and thus inevitably leads them to the conclusion that they pre-assumed. This of course, is well understood by the skeptic, but it’s interesting to see it displayed in their own forcings graph.
Also, wouldn’t a forcing due to the effects of Black Carbon, and a forcing due to lowering the albedo of snow (due to black carbon) essentially be modelling the same thing? Are they double counting there?
Thanks, as usual, for your digging.

dp
December 19, 2010 6:50 pm

Withholding comment until your methods and parameters are properly pal reviewed.

Roy Clark
December 19, 2010 6:56 pm

The ‘radiative forcing constants’ used in all the climate models have no physical meaning, nor do the temperatures that they ‘predict’. They are just modeling ‘fudge factors’ used to change the surface temperature. The radiative forcing trick is described in Hansen et al., 2005, ‘Efficacy of Climate Forcings’, J. Geophys. Research, 110, D18104, pp. 1-45. [ http://pubs.giss.nasa.gov/docs/2005/2005_Hansen_etal_2.pdf ].
The starting point is the assumption that a 100 ppm increase in atmospheric CO2 concentration has produced a 1 C rise in ‘average surface temperature’. This is the rise in ‘meteorological surface air temperature’ from the ‘hockey stick’. Yes – GISSE is ‘calibrated’ using the ‘hockey stick’! [Reliable] Spectroscopic calculations show that a 100 ppm increase in atmospheric CO2 concentration also produces an increase in ‘clear sky’ downward LWIR flux of 1.7 W.m-2.
Now, there is no physical cause-and-effect relationship between the 1 C rise in air temperature measured at eye level above the ground and a change in LWIR flux of 1.7 W.m-2 at the surface 5 ft below. (The 1 C rise is from changes in ocean surface temperatures, urban heat islands, and downright data ‘fixing’.) However, the radiative forcing constant for CO2 is defined as 1/1.7 ≈ 0.59 C/(W.m-2).
This is then extended as a ‘calibration constant’ to all other atmospheric species. In other words, it is arbitrarily and empirically assumed that a 1 W.m-2 increase in LWIR flux from any species produces a 0.59 C rise in ‘average equilibrium surface temperature’. The increase in LWIR flux for other greenhouse gases such as CH4, O3 etc. can be calculated from their spectroscopic constants. Aerosols etc. are just used as empirical ‘levers’ to fix the model output. That is how the volcano terms are used. The whole thing is pseudoscience. All we need to add are the signs of the zodiac and it becomes climate astrology.
Reality is that it is impossible for a 1.7 W.m-2 increase in downward ‘clear sky’ LWIR flux to produce any measurable change in surface temperature. The LWIR flux has to be added correctly to the total flux at the surface and used to calculate the surface temperature of a real surface with all of the heat flux terms and proper thermal properties included. This is discussed at:
http://hidethedecline.eu/pages/posts/what-surface-temperature-is-your-model-really-predicting-190.php
The fundamental assumption that changes in surface temperature can be simulated using small changes in long term ‘equilibrium’ flux averages is incorrect. There is no climate ‘equilibrium’ on any time scale.
Hansen’s model is an empirical, pseudoscientific fraud.

December 19, 2010 7:38 pm

Peter Hartley is correct: temps are regressed onto forcings, not vice versa.

DR
December 19, 2010 7:57 pm
old engineer
December 19, 2010 8:32 pm

Dear Mr Eschenbach
I always enjoy your posts and learn a lot from them. However, I agree with Madman2001 and others who complain about the picture backgrounds of your graphs.
I know you say you like them. Since you are the author, I grant that you can present them however you want. But hopefully your purpose is not just to present something you find pretty, but actually to inform and influence others. Some folks have said that they find the background pictures a hindrance to understanding what you are trying to convey. No one has said that the pictures help them understand the graphs.
If it were me, and I was trying to inform and convince people of the correctness of my work, I would go with what had the best chance of achieving my goal, regardless of my personal preferences. Just a thought.

frederik wisse
December 19, 2010 8:47 pm

Consider yourself a computer-system developer receiving the well-paid task of developing a system describing the development of world temperatures and calculating the future. What would you do, knowing that further orders would depend on your ability to dance with the high priests of climate alarmism? Why is the building of these models declared a secret? Because there is really something to hide! What happened before Climategate? Rejections by the self-declared scientists of requests to publish the underlying facts. Climategate proved that these rejections were necessary to keep the fraud going. Is there any real difference between the lack of GISS modeling particulars and the lack of information from the UK universities?
We are dealing here with a modern-fashioned priesthood getting tremendously wealthy from their fearmongering theories at the expense of the rest of our society, utterly showing their disdain for any real intellectual confrontation by telling us the science is settled …… If we really wish to be ruled by stupidity, then poverty shall be our fate.

Bugge
December 19, 2010 9:08 pm

I haven’t read all the comments; someone might have pointed out the same issue:
In figure 8, I keep wondering why the model simulates significantly warmer temperatures in the 1860-70s and 1910-30s (natural forcing only). It’s almost a 0.5C difference over several decades. And when they add the input/AGW, the temperature goes down.
It seems like we had AGC (a-g-cooling).

AntonyIndia
December 19, 2010 9:24 pm

The climate computer modeling reminds me of the financial computer modeling:
“The Financial Crisis and the Systemic Failure of Academic Economics”
http://www.ifw-members.ifw-kiel.de/publications/the-financial-crisis-and-the-systemic-failure-of-academic-economics/KWP_1489_ColanderetalFinancial%20Crisis.pdf

AusieDan
December 19, 2010 9:31 pm

Some commentators have queried the need to use massive supercomputers with many teraflops to do what in essence is a rather simple task.
Somebody even asked how the excess teraflops were used.
I have the answer.
When, long ago, I was writing simple programs in Basic on my trusty Tandy TRS-80, I often came upon a problem.
I wanted to see what was happening to a number of variables while the program was running, as well as seeing the end result when it was finished.
My problem was that the numbers just flashed past on the screen too quickly to be taken in.
My solution was quite simple, which I will reveal at the end.
I think the climatologists problem is rather different.
They have been successful in getting so much money in grants that they have far too much computer power.
Use it or lose it is the rule in government and quango circles.
The answer is to add an extra subroutine like the following,
Where “N” is any number large enough to slow things down a lot.
10 for I = 1 to N
20 Next I
That’s all folks

AusieDan
December 19, 2010 9:48 pm

Willis on a more serious note:
There is a much more fundamental flaw in the whole concept of modelling the climate, with the hope of predicting outcomes many years into the future.
It is the chaotic nature of the climate.
Chaos means that there are an almost infinite number of overlapping cycles, up to millions of years long or more.
At any moment the next largest (and hence more powerful) cycle will intervene without warning.
Even within the existing cycle, the inter-connections are so complex that things, which seem to be going on in an orderly fashion, suddenly change in unpredictable ways.
That’s why it is impossible to forecast the economy with any accuracy for more than a few months in advance.
The economy has been the subject of longer and much more intense study than the climate.
Yet very little real progress has been made.
Just enough to construct next year’s government budget with reasonably good accuracy in most years, except when the unexpected happens, as in 2007.
At least many of the economic variables are known and quantified.
The climate is both much more complex and much more unknown.
We should not take long-term climate forecasting seriously, but should make a concerted effort to educate the public, press, and politicians that you cannot forecast the long-term future climate, particularly when the main forces are largely unknown.

vigilantfish
December 19, 2010 9:58 pm

Willis,
Very nice post, entertaining and informative as always. Interesting how little surprise there is at your revelations… it feels more like deja vu. A year ago this would have whipped up WUWT readers into a frenzy. BTW I like your background pictures; I find I can mentally block the images out when concentrating on the trend lines and real information, but they are initially eye-catching and attractive. But then, I’m not an engineer.
Urederra says:
December 19, 2010 at 6:24 pm
Why do they need a 160 teraflops supercomputer?
To make forcings ‘a la carte’ that fit with their desired predictions.
It is ecneics, science made backwards.
……….
How do you pronounce that? With a standard pronunciation, ecneics could become a useful word!

anna v
December 19, 2010 10:01 pm

old engineer says:
December 19, 2010 at 8:32 pm
If it were me, and I was trying to inform and convince people of the correctness of my work, I would go with what had the best chance of achieving my goal, regardless of my personal preferences. Just a thought.
I agree with you, but I notice the “old”. Maybe it has to do with age. Eyesight is not what it was, and backgrounds have to be masked out in the head.
On the other hand, having to use a slide rule and find logarithmic paper for the plots, and painstakingly transfer the histogram information from the primitive computer output, and taking it to a graphics technician to make a pretty slide for a conference or publication sort of discouraged adventurous backgrounds :). So it could just be a conditioned reflex for us.
Anna, an old physicist.

RockyRoad
December 19, 2010 10:15 pm

It absolutely does not matter how fast your computer flops if the algorithms used are themselves flops–garbage in/garbage out is the time-tested description. Might just as well write on a napkin and throw it away after “climsci” fails yet again.

December 19, 2010 10:16 pm

Squidly says:
December 19, 2010 at 3:57 pm

I am wondering what they do with all of those extra teraflops, sounds to me like I could do the same processing on my WII at 2.5 MIPS (million instructions per second) with equal results (and a little bit cheaper).

Almost. That MIPS is Mega (as opposed to Tera) instructions (as opposed to Floating Point Operations) per second.
1. I suspect, unless you are really still using a 2.5 M(ega)Hz computer, you mean G(iga)Hz, and therefore Billions of Instructions per second.
2. The floating point operations bit does make a difference since each involves many instructions, but not as much as a factor of 1,000 😉

Dinostratus
December 19, 2010 10:22 pm

“It’s doing a whole bunch of really really complicated mathematical stuff that represents the real climate and requires 160 teraflops to calculate, not some simple equation.”
This exercise points to an additional finding.
The terabytes and teraflops of capability the modelers have would say that the equations they are solving are incredibly complex. Since the complexity isn’t in the global forcings or the dampenings then it must be in the transfer functions. These transfer functions would depend on cloud type, mountain ranges, the shape of the arctic ice cap and the like. So it’s the interaction of the transfer functions with the forcings and dampenings that require all that processor capability.
Actually, no. Additional computational capability doesn’t lead to markedly better results. This means that the equations have been approximated in a way that they are inherently stable. So stable in fact that they don’t need “teraflops” of capability.
The obvious question is then, why is the capability needed if the equations have been approximated in a way that they are inherently stable?

Charlie A
December 19, 2010 10:54 pm

GISS should be able to explain how they calculated their model sensitivity as 0.7°C per W/m2 (2.7°C per doubling of CO2). The difference between Willis’s calculated sensitivity and the 2.7C/doubling is astounding.
Here’s one possible explanation:
The amount of carbon we add to the atmosphere can be estimated with reasonable accuracy, as can the actual increase. There is a discrepancy where about 50% of the added carbon is missing …… absorbed by the biosphere and oceans.
Perhaps we’re seeing the same sort of thing with forcings and heat content?
Net forcings go up, but only half of that shows up as temperature increase. The other half goes into warming the ocean. So the actual climate sensitivity is twice the 0.34°C per W/m2 (1.3°C per doubling of CO2) that your model shows (using the higher number that includes the 0.025C/decade warming trend)
Blame it all on Trenberths’s missing heat. I assume that the GISSE model predates the full deployment of the Argo network and reasonably accurate measurement of Ocean Heat Content. A decade ago, it was reasonable to assume a large increase in OHC each year as a sink for much of the energy from the increased forcings.

anna v
December 19, 2010 11:04 pm

Willis Eschenbach says:
December 19, 2010 at 10:37 p
Anyone with any insights on that question about sensitivity?
Not a mathematical insight, a psychological one. They probably doubled all “forcings” (I hate the terminology) and thought they were doubling only CO2’s :). I remember the Harry manipulations, or was it Henry?, in the climategate papers.
Here is a nice story of how group work can get off the tracks, which might matter not much for everyday business, but can be disastrous for scientific conclusions:
In a physics lab way back then, first-year students were divided into groups of five and set to determine the parameters of a pendulum; they were given a stopwatch. So one of them got hold of the watch, another started the pendulum, and when the time was up the stopwatch holder asked, “How many oscillations?” Nobody had been counting; everybody assumed that the others would! Now if there were a Harry among them, they could invent an approximate number :).