Guest Post by Willis Eschenbach
The GISS Model E is the workhorse of NASA's climate models. I got interested in the GISSE hindcasts of the 20th century due to an interesting posting by Lucia over at the Blackboard. She built a simple model (which she calls "Lumpy") that does a pretty good job of emulating the GISS model results, using only the forcings and a time lag. Stephen Mosher points out how to access the NASA data here (with a good discussion), so I went to the NASA site he indicated and got the GISSE results he points to. I plotted them against the GISS version of the global surface air temperature record in Figure 1.
Figure 1. GISSE Global Circulation Model (GCM or "global climate model") hindcast 1880-2003, and GISS Global Temperature (GISSTemp) Data. Photo shows the new NASA 15,000-processor "Discover" supercomputer. Top speed is 160 trillion floating point operations per second (a unit known by the lovely name of "teraflops"). What it does in a day would take my desktop computer seventeen years.
Now, that all looks impressive. The model hindcast temperatures are a reasonable match both by eyeball and mathematically to the observed temperature. (R^2 = 0.60). True, it misses the early 20th century warming (1920-1940) entirely, but overall it’s a pretty close fit. And the supercomputer does 160 teraflops. So what could go wrong?
To try to understand the GISSE model, I got the forcings used for the GISSE simulation. I took the total forcings, and I compared them to the GISSE model results. The forcings were yearly averages, so I compared them to the yearly results of the GISSE model. Figure 2 shows a comparison of the GISSE model hindcast temperatures and a linear regression of those temperatures on the total forcings.
Figure 2. A comparison of the GISSE annual model results with a linear regression of those results on the total forcing. (A “linear regression” estimates the best fit of the forcings to the model results). Total forcing is the sum of all forcings used by the GISSE model, including volcanos, solar, GHGs, aerosols, and the like. Deep drops in the forcings (and in the model results) are the result of stratospheric aerosols from volcanic eruptions.
Now to my untutored eye, Fig. 2 has all the hallmarks of a linear model with a missing constant trend of unknown origin. (The hallmarks are the obvious similarity in shape combined with differing trends and a low R^2.) To see if that was the case I redid my analysis, this time including a constant trend. As is my custom, I merely included the years of the observation in the analysis to get that trend. That gave me Figure 3.
Figure 3. A comparison of the GISSE annual model results with a regression of those results on the total forcing, including a constant annual trend. Note the very large increase in R^2 compared to Fig. 2, and the near-perfect match of the two datasets.
There are several surprising things in Figure 3, and I’m not sure I see all of the implications of those things yet. The first surprise was how close the model results are to a bozo simple linear response to the forcings plus the passage of time (R^2 = 0.91, average error less than a tenth of a degree). Foolish me, I had the idea that somehow the models were producing some kind of more sophisticated, complex, lagged, non-linear response to the forcings than that.
This almost completely linear response of the GISSE model makes it trivially easy to create IPCC style “scenarios” of the next hundred years of the climate. We just use our magic GISSE formula, that future temperature change is equal to 0.13 times the forcing change plus a quarter of a degree per century, and we can forecast the temperature change corresponding to any combination of projected future forcings …
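That "magic formula" can be written out in a few lines. Here is a minimal sketch of it (the emulator function and the ramped forcing series are my own inventions for illustration; the 0.13°C per W/m2 and 0.25°C per century coefficients are the regression values just quoted, not anything from the GISS code):

```python
import numpy as np

def gisse_emulator(total_forcing_wm2, years):
    """Toy linear emulator of the GISSE hindcast anomaly (deg C):
    0.13 deg C per W/m2 of forcing change, plus 0.25 deg C per century
    (0.0025 deg C/yr) of elapsed time."""
    forcing = np.asarray(total_forcing_wm2, dtype=float)
    years = np.asarray(years, dtype=float)
    return 0.13 * (forcing - forcing[0]) + 0.0025 * (years - years[0])

# Invented example: 1 W/m2 of forcing ramped in over a century
years = np.arange(1900, 2001)
forcing = np.linspace(0.0, 1.0, years.size)
print(round(gisse_emulator(forcing, years)[-1], 2))  # 0.38 = 0.13 + 0.25
```

Feed it any projected forcing series and it spits out the corresponding "scenario", no supercomputer required.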
Second, this analysis strongly suggests that in the absence of any change in forcing, the GISSE model still warms. This is in agreement with the results of the control runs of the GISSE and other models that I discussed at the end of my post here. The GISSE control runs also showed warming when there was no change in forcing. This is a most unsettling result, particularly since other models showed similar (and in some cases larger) warming in the control runs.
Third, the climate sensitivity shown by the analysis is only 0.13°C per W/m2 (0.5°C per doubling of CO2). This is far below the official NASA estimate of the response of the GISSE model to the forcings. They put the climate sensitivity from the GISSE model at about 0.7°C per W/m2 (2.7°C per doubling of CO2). I do not know why their official number is so different.
I thought the difference in calculated sensitivities might be because they have not taken account of the underlying warming trend of the model itself. However, when the analysis is done leaving out the warming trend of the model (Fig. 2), I get a sensitivity of 0.34°C per W/m2 (1.3°C per doubling, Fig. 2). So that doesn’t solve the puzzle either. Unless I’ve made a foolish mathematical mistake (always a possibility for anyone, check my work), the sensitivity calculated from the GISSE results is half a degree of warming per doubling of CO2 …
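For reference, the conversion between the two units in the figures above uses roughly 3.7 W/m2 of forcing per doubling of CO2. The 3.7 figure is an assumption on my part (it is the commonly cited value; GISS may use a slightly different one, which could account for small differences in the quoted numbers):

```python
F_2X_CO2 = 3.7  # assumed forcing per CO2 doubling, W/m2 (commonly cited value)

def per_doubling(sensitivity_c_per_wm2):
    """Convert a sensitivity in deg C per W/m2 to deg C per doubling of CO2."""
    return sensitivity_c_per_wm2 * F_2X_CO2

print(round(per_doubling(0.13), 2))  # 0.48, the "half a degree" above
print(round(per_doubling(0.34), 2))  # 1.26, the ~1.3 from Fig. 2
print(round(per_doubling(0.70), 2))  # 2.59, close to NASA's ~2.7
```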
Troubled by that analysis, I looked further. The forcing is close to the model results, but not exact. Since I was using the sum of the forcings, obviously in their model some forcings make more difference than other forcings. So I decided to remove the volcano forcing, to get a better idea of what else was in the forcing mix. The volcanos are the only forcing that makes such large changes on a short timescale (months). Removing the volcanos allowed me to regress all of the other forcings against the model results (without volcanos), so that I could see how they did. Figure 4 shows that result:
Figure 4. All other forcings regressed against GISSE hindcast temperature results after volcano effect is removed. Forcing abbreviations (used in original dataset): W-M_GHGs = Well Mixed Greenhouse Gases; O3 = Ozone; StratH2O = Stratospheric Water Vapor; Solar = Energy From The Sun; LandUse = Changes in Land Use and Land Cover; SnowAlb = Albedo from Changes in Snow Cover; StratAer = Stratospheric Aerosols from volcanos; BC = Black Carbon; ReflAer = Reflective Aerosols; AIE = Aerosol Indirect Effect. Numbers in parentheses show how well the various forcings explain the remaining model results, with 1.0 being a perfect score. (The number is called R squared, usually written R^2) Photo Source
Now, this is again interesting. Once the effect of the volcanos is removed, there is very little difference in how well the other forcings explain the remainder. With the obvious exception of solar, the R^2 values of most of the forcings are quite similar. The only two that outperform a simple straight line are stratospheric water vapor and GHGs, and then only by 0.01.
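A sketch of how those per-forcing R^2 values are computed. The series below are invented stand-ins, not the actual GISS data: a trending "model residual", a near-straight-line forcing (like GHGs), and a quasi-cyclic one (like solar):

```python
import numpy as np

def r_squared(x, y):
    """R^2 of a simple linear regression of y on x."""
    r = np.corrcoef(x, y)[0, 1]
    return r * r

years = np.arange(1880.0, 2004.0)
rng = np.random.default_rng(0)
# Invented residual: a slow trend plus year-to-year noise
model_resid = 0.002 * (years - 1880) + rng.normal(0, 0.02, years.size)
ghg_like = 0.003 * (years - 1880)          # nearly a straight line
solar_like = np.sin((years - 1880) / 5.0)  # quasi-cyclic

# A forcing with the same near-linear shape scores high; a cyclic one, low
print(r_squared(ghg_like, model_resid) > r_squared(solar_like, model_resid))  # True
```

Any forcing whose shape tracks the residual trend gets a similar score, which is exactly what Figure 4 shows.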
I wanted to look at the shape of the forcings to see if I could understand this better. Figure 5 has NASA GISS’s view of the forcings, shown at their actual sizes:
Figure 5: The radiative forcings used by the GISSE model as shown by GISS. SOURCE
Well, that didn’t tell me a lot (not GISS’s fault, just the wrong chart for my purpose), so I took the forcing data, standardized it, and took a look at the forcings in a form in which they could be seen. I found out the reason that they all fit so well lies in the shape of the forcings. All of them increase slowly (either negatively or positively) until 1950. After that, they increase more quickly. To see these shapes, it is necessary to standardize the forcings so that they all have the same size. Figure 6 shows what the forcings used by the model look like after standardization:
Figure 6. Forcings for the GISSE model hindcast 1880-2003. Forcings have been “standardized” (set to a standard deviation of 1.0) and set to start at zero as in Figure 4.
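The standardization itself is simple. A sketch (the sample series is invented):

```python
import numpy as np

def standardize(series):
    """Scale a forcing series to unit standard deviation, then shift it to
    start at zero, as in Figure 6, so differently sized forcings can be
    compared by shape alone. Assumes the series is not constant."""
    s = np.asarray(series, dtype=float)
    s = s / s.std()   # standard deviation -> 1.0
    return s - s[0]   # start at zero

f = standardize([0.0, 0.1, 0.2, 0.4, 0.8])  # invented sample forcing
print(round(float(f[0]), 2), round(float(f.std()), 2))  # 0.0 1.0
```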
There are several oddities about their forcings. First, I had assumed that the forcings used were based at least loosely on reality. To make this true, I need to radically redefine “loosely”. You’ll note that by some strange coincidence, many of the forcings go flat from 1990 onwards … loose. Does anyone believe that all those forcings (O3, Landuse, Aerosol Indirect, Aerosol Reflective, Snow Albedo, Black Carbon) really stopped changing in 1990? (It is possible that this is a typographical or other error in the dataset. This idea is supported by the slight post-1990 divergence of the model results from the forcings as seen in Fig. 3)
Next, take a look at the curves for snow albedo and black carbon. It’s hard to see the snow albedo curve, because it is behind the black carbon curve. Why should the shapes of those two curves be nearly identical? … loose.
Next, in many cases the “curves” for the forcings are made up of a few straight lines. Whatever the forcings might or might not be, they are not straight lines.
Next, with the exception of solar and volcanoes, the shape of all of the remaining forcings is very similar. They are all highly correlated, and none of them (including CO2) is much different from a straight line.
Where did these very strange forcings come from? The answer is neatly encompassed in “Twentieth century climate model response and climate sensitivity”, Kiehl, GRL 2007 (emphasis mine):
A large number of climate modeling groups have carried out simulations of the 20th century. These simulations employed a number of forcing agents in the simulations. Although there are established data for the time evolution of well-mixed greenhouse gases [and solar and volcanos although Kiehl doesn’t mention them], there are no established standard datasets for ozone, aerosols or natural forcing factors.
Lest you think that there is at least some factual basis to the GISSE forcings, let's look again at black carbon and snow albedo forcing. Black carbon is known to melt snow, and this is an issue in the Arctic, so there is a plausible mechanism to connect the two. This is likely why the shapes of the two are similar in the GISSE forcings. But what about that shape, increasing over the period of analysis? Here's one of the few actual records of black carbon in the 20th century, from "20th-Century Industrial Black Carbon Emissions Altered Arctic Climate Forcing", Science Magazine (paywalled):
Figure 7. An ice core record from the Greenland cap showing the amount of black carbon trapped in the ice, year by year. Spikes in the summer are large forest fires.
Note that rather than increasing over the century as GISSE claims, the observed black carbon levels peaked in about 1910-1920, and have been generally decreasing since then.
So in addition to the dozens of parameters that they can tune in the climate models, the GISS folks and the other modelers got to make up some of their own forcings out of whole cloth … and then they get to tell us proudly that their model hindcasts do well at fitting the historical record.
To close, Figure 8 shows the best part, the final part of the game:
Figure 8. ORIGINAL IPCC CAPTION (emphasis mine). A climate model can be used to simulate the temperature changes that occur from both natural and anthropogenic causes. The simulations in a) were done with only natural forcings: solar variation and volcanic activity. In b) only anthropogenic forcings are included: greenhouse gases and sulfate aerosols. In c) both natural and anthropogenic forcings are included. The best match is obtained when both forcings are combined, as in c). Natural forcing alone cannot explain the global warming over the last 50 years. Source
Here is the sting in the tale. They have designed the perfect forcings, and adjusted the model parameters carefully, to match the historical observations. Having done so, the modelers then claim that the fact that their model no longer matches historical observations when you take out some of their forcings means that “natural forcing alone cannot explain” recent warming … what, what?
You mean that if you tune a model with certain inputs, then remove one or more of the inputs used in the tuning, your results are not as good as with all of the inputs included? I’m shocked, I tell you. Who would have guessed?
The IPCC actually says that because the tuned models don’t work well with part of their input removed, this shows that humans are the cause of the warming … not sure what I can say about that.
What I Learned
1. To a very close approximation (R^2 = 0.91, average error less than a tenth of a degree C) the GISS model output can be replicated by a simple linear transformation of the total forcing and the elapsed time. Since the climate is known to be a non-linear, chaotic system, this does not bode well for the use of GISSE or other similar models.
2. The GISSE model illustrates that when hindcasting the 20th century, the modelers were free to design their own forcings. This explains why, despite having climate sensitivities ranging from 1.8 to 4.2°C per doubling of CO2, the various climate models all provide hindcasts which are very close to the historical records. The models are tuned, and the forcings are chosen, to do just that.
3. The GISSE model results show a climate sensitivity of half a degree per doubling of CO2, far below the IPCC value.
4. Most of the assumed GISS forcings vary little from a straight line (except for some of them going flat in 1990).
5. The modelers truly must believe that the future evolution of the climate can be calculated using a simple linear function of the forcings. Me, I misdoubts that …
In closing, let me try to anticipate some objections that people will likely have to this analysis.
1. But that’s not what the GISSE computer is actually doing! It’s doing a whole bunch of really really complicated mathematical stuff that represents the real climate and requires 160 teraflops to calculate, not some simple equation. This is true. However, since their model results can be replicated so exactly by this simple linear model, we can say that considered as black boxes the two models are certainly equivalent, and explore the implications of that equivalence.
2. That’s not a new finding, everyone already knew the models were linear. I also thought the models were linear, but I have never been able to establish this mathematically. I also did not realize how rigid the linearity was.
3. Is there really an inherent linear warming trend built into the model? I don’t know … but there is something in the model that acts just like a built-in inherent linear warming. So in practice, whether the linear warming trend is built-in, or the model just acts as though it is built-in, the outcome is the same. (As a side note, although the high R^2 of 0.91 argues against the possibility of things improving a whole lot by including a simple lagging term, Lucia’s model is worth exploring further.)
4. Is this all a result of bad faith or intentional deception on the part of the modelers? I doubt it very much. I suspect that the choice of forcings and the other parts of the model “jes’ growed”, as Topsy said. My best guess is that this is the result of hundreds of small, incremental decisions and changes made over decades in the forcings, the model code, and the parameters.
5. If what you say is true, why has no one been able to successfully model the system without including anthropogenic forcing?
Glad you asked. Since the GISS model can be represented as a simple linear model, we can use the same model with only natural forcings. Here’s a first cut at that:
Figure 9. Model of the climate using only natural forcings (top panel). The all-forcings model from Figure 3 is included in the lower panel for comparison. Yes, the R^2 with only natural forcings is smaller, but it is still a pretty reasonable model.
6. But, but … you can’t just include a 0.42 degree warming like that! For all practical purposes, GISSE does the same thing only with different numbers, so you’ll have to take that up with them. See the US Supreme Court ruling in the case of Sauce For The Goose vs. Sauce For The Gander.
7. The model inherent warming trend doesn’t matter, because the final results for the IPCC scenarios show the change from model control runs, not absolute values. As a result, the warming trend cancels out, and we are left with the variation due to forcings. While this sounds eminently reasonable, consider that if you use their recommended procedure (cancel out the 0.25°C constant inherent warming trend) for their 20th century hindcast shown above, it gives an incorrect answer … so that argument doesn’t make sense.
To simplify access to the data, I have put the forcings, the model response, and the GISS temperature datasets online here as an Excel worksheet. The worksheet also contains the calculations used to produce Figure 3.
And as always, the scientific work of a thousand hands continues.
Regards,
w.
[UPDATE: This discussion continues at Where Did I Put That Energy.]

For those who are surprised that the result of such a complex model can be replicated so simply, consider modelling the following situation.
A beaker of water contains various objects of complicated shape in a known volume of water which is constantly stirred. A known amount of dye is added and the model is intended to predict the resulting concentration of the dye after a short time.
Method 1: Concentration = amount added/known volume.
Method 2: Set up the Navier-Stokes equation for the stirrer; assume a value for the viscosity (etc) of the water. Determine equations to describe the shapes of the objects (including the stirrer, which will need to be described as a function of time) in the beaker. Use these equations to set up the boundary conditions for the numerical solution to the Navier-Stokes equation. Get a supercomputer. Write the code, run it and come back tomorrow.
It would probably take quite a few attempts, but I think you could eventually get Method 2 to give the same results as Method 1.
Willis Eschenbach says:
“Second, this analysis strongly suggests that in the absence of any change in forcing, the GISSE model still warms. This is in agreement with the results of the control runs of the GISSE and other models that I discussed at the end of my post here. The GISSE control runs also showed warming when there was no change in forcing. This is a most unsettling result, particularly since other models showed similar (and in some cases larger) warming in the control runs.”
There is no reason to be unsettled by this result if one understands the basic physics underlying the theory of global warming. The surface temperature can continue to increase even when forcing decreases. Forcing is an imbalance between the rate at which energy from the sun is absorbed by the earth and the rate at which it leaves the earth by radiation. The rate of change in temperature is the forcing integrated over the earth's surface, divided by the effective heat capacity of the earth. Actually, the earth's heat capacity is not a simple number, because the time constant for heating of the ocean surface is much shorter than the long time constant for heating of the deep oceans, due to the imbalance between surface and deep ocean temperatures. The temperature will keep on increasing until the surface is warm enough that the outgoing radiation flux equals the incoming flux, even while the forcing decreases. So an increase in forcing is not necessary to have an increase in temperature.
Finding that hindcasts exhibit linear correlation between the global temperatures and forcing is not sufficient grounds to conclude that there is some kind of general law relating the two variables.
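The physics in the comment above can be illustrated with a minimal one-box energy balance. The numbers here are purely illustrative (hypothetical heat capacity and feedback values, not anything from GISS): the temperature keeps rising toward F/lam long after the forcing has stopped changing.

```python
# One-box energy-balance sketch:  C * dT/dt = F - lam * T
C = 8.0     # effective heat capacity, W yr m-2 K-1 (hypothetical)
lam = 1.0   # feedback parameter, W m-2 K-1 (hypothetical)
F = 2.0     # forcing held constant the whole time, W m-2
dt = 0.1    # time step, years

T, temps = 0.0, []
for _ in range(int(100 / dt)):  # integrate forward 100 years (forward Euler)
    T += dt * (F - lam * T) / C
    temps.append(T)

# Still warming at year 10, asymptotically approaching F/lam = 2.0 deg C
print(temps[99] < temps[-1] < F / lam)  # True
```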
Willis,
It’s not surprising that your simple model of near-straight lines can match the teraflop versions. This is because some of the critical assumptions for the teraflop model are extremely simple and no amount of computer power can add valid complexity to them.
For example, aerosols are one of the main negative (cooling) forcings. Stratospheric aerosols are partly described on the GISS page you referenced. Here is an extract from Sato:
“Updates (April 2002)
Data for Krakatoa and Santa Maria were modified. The optical thickness for Krakatoa is 1.1 times that for Pinatubo, based on three-year integration of pyrheliometer data, with the spatial distribution based on Pinatubo but with the two hemispheres switched. The optical thickness for Santa Maria (0.55 times that of Pinatubo) has comparable aerosol amount in both hemispheres based on ice core data.
The effective radius of the aerosol particles is defined for large volcanoes as
r_eff = 0.20 + τ_max(latitude)^0.75 × f(t − t0)   (µm)
where f(t − t0) is a function of time derived from the observed r_eff for Pinatubo, while keeping the observed values for El Chichon and Pinatubo.”
The optical thickness is measured at 550 nm (green light to the eye), but the absorption of light in stratospheric clouds could well be rather different at different wavelengths. (Dunno, have not measured it personally).
Now, in the period 1880 to 1920, we have a strong cluster of aerosols but no instruments. The “constants” used in equations like the one above seem to be built from reconstructed proxy upon proxy, to about one significant figure, which seems a bit cavalier given the dominance of the effect on the model. Elsewhere, it is said that aerosols are kept constant since 1990, which is a bit of a gasp as well, given reports of the increasing haze over China as more coal is used.
To cut a long story short, what is the point of a teraflop model when so many of the major constants or coefficients are just a guess, enlightened as the guess might be?
Thank you for making this so obvious.
davidc :
December 20, 2010 at 5:15 pm
This analogy of a dye in a beaker is great.
Willis, just FWIW, according to a lecture I heard at AGU there are 32 parameters that get fixed for a GCM. This researcher took an approach that was familiar to me: they did a perturbed physics ensemble. I believe that Tim Palmer talked about this in his “grand challenge” presentation. I had lunch with him, Judith, and Peter Webster. I only wish it could have lasted longer. He’s got a book out (he was signing it for a student when we met); you might give it a look-see.
Seems like the volcanic forcing should not be net negative (but is), because you would need a positive bias just to maintain a steady climate. This might mean that the GISS background warming is the response to an unusually clean stratosphere. The mean volcanic forcing should be subtracted out if this is considered a part of the natural forcing.
Here Willis
http://www.agu.org/meetings/fm10/lectures/lecture_videos/A42A.shtml
George E. Smith:
Sorry George, but that was how I got into computer programming in the first place, and I was not speaking of an HP-35; my roommate in engineering had one during the early 70’s. But after looking up that $800, it was actually $700, from the OSU bookstore. Hot off the HP production line, February 1975. The price dropped as the months went by, but that is what I had to fork over for it. It took my brother’s business from nobody to #5 in two years, all hand written in RPG, three magnetic strips. So please check your facts. That story is a relished part of my life.
Now, if you still have a working one, great; I don’t, but I still have the manual. Don’t you think that was such a classic model if you view its technology at those times? I do.
“Ferdinand Engelbeen says:
December 20, 2010 at 1:55 pm
…
The moment you use different sensitivities for different forcings, you can attribute any set of forcing x sensitivity and match the past temperature with better and better R^2, where the (mathematical! not necessarily the real) optimum may show a very low sensitivity for CO2, as Wayne calculated: December 19, 2010 at 11:36 pm.
Ferdinand, I see your point. The analysis I did above is saying some forcings have more weight, but they are all given as firm W/m2. If you fit to the anomalies as I did, that result is really saying, if it is real at all, that the 10 or so forcing values in W/m2 are not really correct, and that their actual values should be found in the future to be per the weights as laid out above. This could be right or wrong or in between. I do still see some curiosity in those weightings that matched the observed: they are the same forcings with high factors that have been pointed out here so many times in the past.
I have kept a copy of my weightings above to see whether, in the following years, science finds that each forcing is in fact not correct as assumed in 2010, and whether the corrections move in the direction given by the weightings that a simple Excel fit says they must, for that is what the temps say they must be. That would be great, saying physics does actually work, even in climate science. (But it really, really also needs a column for the UHI synthetic forcing imprinted incorrectly in the GISS temps.)
Thanks to Willis again for the data and the opportunity to actually learn; without the actual data, it’s a barren desert of conflicting opinions.
Speed says:
>>
Is anyone concerned that the GISSE Climate Model Results are much smoother than the GISS Global Temperature in Figure 1? While year-to-year variations are more weather than climate, and we expect a climate model to reproduce the trend and average measured global temperature rather than exact annual values, why doesn’t the model reproduce the wide variations that occur from year to year?
>>
Probably the main reason is that they do not even attempt to model major variations in ocean currents. It’s a bit like cloud formation: they don’t have any real understanding of the processes so they can’t model them. Instead they IGNORE THEM.
Yes, I was gob-smacked when I found that out. If it had not come from a direct reply from someone at Hadley Met. Office research team I would have doubted it.
Apparently they “think” that what they call “internal variability” is unimportant and should average out over time.
Since ocean currents clearly do have profound effects on climate, without understanding the mechanisms and the timescales, that seems to be a gross assumption.
Climate models are decades away from being any use as predictive tools.
There is NO WAY they should be given the slightest consideration for future global energy policy.
“The IPCC actually says that because the tuned models don’t work well with part of their input removed, this shows that humans are the cause of the warming … not sure what I can say about that.”
Yup, that is pretty much how I understood that. Whenever they claimed that they could not reproduce the modern warming in their models without the anthropogenic inputs, my response has always been, “Your model is wrong, or the global homogenised temperature record is wrong.”
“One of the most surprising findings to me, which no one has commented on, is the sensitivity. Depending on whether we include a linear trend term or not, the sensitivity of the GISSE model is either half a degree C or 1.3°C per doubling of CO2. Regardless of the merits of my analysis, that much is indisputable, it’s just simple math.”
Wills
Playing Devil’s Advocate for a minute: are you assuming that the linear relationship continues into the future, whereas the all-powerful GISSE does not limit itself to a linear relationship once the positive feedbacks really get going?
Ed
Willis says:
One of the arguments against anthropogenic global warming, at least on the scale postulated by warmers, is that climate is quite variable — what some view as “global warming” or “climate change” may simply be natural variability. In the GISS temperature signal I see a lot of noise. Or variability.
If we postulate that individual GISSE runs are equally (or more) noisy but many runs are averaged together to reduce the noise (a type of signal averaging), are they not reducing the reported natural variability of their model output?
Perhaps there is a model run that mimics the story of Phaeton, “The running conflagration spreads below. But these are trivial ills: whole cities burn, And peopled kingdoms into ashes turn.” until the operator (echoing Zeus) mercifully terminates the run or averages it into oblivion, returning the GISSE model Earth to a more believable regime.
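The signal-averaging point raised above can be illustrated with a toy ensemble. These are invented, noise-only runs, not actual GISSE output: averaging N independent runs shrinks the year-to-year scatter by roughly the square root of N, so the ensemble mean always looks much smoother than any single run (or than the real record).

```python
import numpy as np

rng = np.random.default_rng(42)
# 25 invented runs of 120 "years" of pure year-to-year noise, std 0.15 deg C
runs = rng.normal(0.0, 0.15, size=(25, 120))

single_std = runs[0].std()           # scatter of one run
mean_std = runs.mean(axis=0).std()   # scatter of the ensemble mean

print(mean_std < single_std)  # True: the mean is roughly 5x smoother for 25 runs
```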
E = mc^2
If you need complex formulas then you are no Einstein. 17 years on a normal computer. Sounds like they have no clue and just kept adding pointless formulas to make it look complex.
When you have KNOWLEDGE things are not very complex
DNA with 4 bases – A, C, G and T
E = mc^2
etc
When you don’t have a clue, things are very complex, 17 years on a regular computer.
“”””” wayne says:
December 21, 2010 at 12:08 am
George E. Smith:
Sorry George, but that was how I got into computer programming in the first place, and I was not speaking of an HP-35; my roommate in engineering had one during the early 70′s. But after looking up that $800, it was actually $700, from the OSU bookstore. Hot off the HP production line, February 1975. The price dropped as the months went by, but that is what I had to fork over for it. It took my brother’s business from nobody to #5 in two years, all hand written in RPG, three magnetic strips. So please check your facts. That story is a relished part of my life.
Now, if you still have a working one, great; I don’t, but I still have the manual. Don’t you think that was such a classic model if you view its technology at those times? I do. “””””
Wayne,
When I rechecked your date (1975) I realized that was considerably later than the HP-35 introduction, and much more like the HP-65 release date; with the 45 coming in between.
I was using an HP Desktop “calculator”; the model 9830, to do lens designs; actually for the actual Litronix LED displays, that went into their el cheapo calculators which were also 1975 era. We were not in competition with HP’s market. I also wrote the whole Optical Ray Tracing program for that 9830, and used it to make better LED displays (for calculators); that even HP couldn’t match (the displays).
My HP-65 still operates flawlessly; but I most often use newer models like the HP-32 and I think HP-34.
I first ran into those scientific calculators with the Wang machines that multiplied by using logs. It didn’t make any sense to me to calculate a log, instead of doing a multiply or even a divide; and I spent eons trying to understand the algorithm that Wang was using. It was actually a very crude forerunner of the general CORDIC algorithm that was the core of the HP-35.
I tried to make my own scientific calculator and even strung up my own magnetic core memory, using 30 mil ferrite cores. I still have that mag memory plane somewhere.
I actually purchased the HP-9830 off Litronix, when I left the company, and continued to do lens designs for them on it; until some varmints broke into my house and stole the machine.
But as to the HP-35-45-65; I agree with you they were totally game changing products. I believe that the guy who brought the project to HP was named Osborne; maybe Tom Osborne; but I can’t be sure, in any case he made quite a name for himself; and wrote a number of books on computers and programming.
I know I didn’t pay $700 for my HP-65, but I was certainly not one of the early buyers, as I had the 9830.
Hell that stuff is all Pleistocene age now.
P. Solar says:
December 21, 2010 at 2:40 am (Edit)
Speed says:
>>
Is anyone concerned that the GISSE Climate Model Results are much smoother than the GISS Global Temperature in Figure 1? While year-to-year variations are more weather than climate, and we expect a climate model to reproduce the trend and average measured global temperature rather than exact annual values, why doesn’t the model reproduce the wide variations that occur from year to year?
>>
Probably the main reason is that they do not even attempt to model major variations in ocean currents. It’s a bit like cloud formation: they don’t have any real understanding of the processes so they can’t model them. Instead they IGNORE THEM.
##################
this is a common misunderstanding. You do not model EMERGENT phenomena.
Let’s take a simple case of a fluid dynamics simulation: modelling the flow of a fluid over a surface. You do NOT explicitly model a vortex. IF you model the fluid and the surface correctly, THEN you will see these flow structures emerge. Similarly, the atmosphere and the oceans are modelled using the same equations we use to model fluid flows in, say, aircraft design. If you get the geometry right, and if you get a good number of the physical factors largely correct, then the circulation patterns emerge. So you will see some of the well-known circulation patterns EMERGE as the simulations run. This is evidence that the models are getting the problem right (after all, it’s just Navier-Stokes). The devil is in the details. Do they get all the circulations? Do they get them with the correct frequency? Matching timing against the historical record would probably require a data assimilation step, a huge computational load. So, for example, a model would only match the 1998 El Niño by chance, not by construction. The situation is very much the same in, say, CFD. We could predict with some accuracy that a vortex would form off the leading edge extension of an aircraft (see the F/A-18), and we could predict that this high-energy flow would be of tremendous help, but unfortunately the models could not predict the kind of buffet the vertical tails would see at high AOA.
Consequently, the production plane had problems with the tails:
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.51.992
Nobody in aerospace concludes from this that models are junk. The physics is known; making the models work on a computer is tough. What should concern you about GCMs is perhaps their inability to predict real catastrophic climate change.
Have a look at the Tim Palmer video I posted.
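A toy illustration of the emergence point made above, as a minimal sketch (this is in no way the GISSE code; the grid size, viscosity, and step counts are arbitrary illustrative picks): a lid-driven cavity solved in vorticity-streamfunction form. Nothing in the code “models a vortex”, yet stepping the Navier-Stokes equations forward makes one appear.

```python
import numpy as np

# Minimal lid-driven cavity (vorticity-streamfunction form).
# The moving lid along the top wall drives the flow; the primary
# vortex is nowhere programmed in -- it emerges from the equations.
N = 33                      # grid points per side of the unit cavity
h = 1.0 / (N - 1)           # grid spacing
nu = 0.1                    # kinematic viscosity (low Re, settles fast)
U = 1.0                     # lid speed along the top wall
dt = 0.2 * h * h / nu       # diffusion-limited explicit time step

psi = np.zeros((N, N))      # streamfunction (rows = y, cols = x), 0 on walls
om = np.zeros((N, N))       # vorticity

for step in range(2000):
    # a few Jacobi sweeps on the Poisson equation  lap(psi) = -om
    for _ in range(30):
        psi[1:-1, 1:-1] = 0.25 * (psi[2:, 1:-1] + psi[:-2, 1:-1] +
                                  psi[1:-1, 2:] + psi[1:-1, :-2] +
                                  h * h * om[1:-1, 1:-1])
    # wall vorticity (Thom's formula); the top row carries the moving lid
    om[0, :] = -2.0 * psi[1, :] / h**2
    om[-1, :] = -2.0 * psi[-2, :] / h**2 - 2.0 * U / h
    om[:, 0] = -2.0 * psi[:, 1] / h**2
    om[:, -1] = -2.0 * psi[:, -2] / h**2
    # velocities from the streamfunction: u = dpsi/dy, v = -dpsi/dx
    u = (psi[2:, 1:-1] - psi[:-2, 1:-1]) / (2 * h)
    v = -(psi[1:-1, 2:] - psi[1:-1, :-2]) / (2 * h)
    # explicit vorticity transport: d(om)/dt = -u om_x - v om_y + nu lap(om)
    om_x = (om[1:-1, 2:] - om[1:-1, :-2]) / (2 * h)
    om_y = (om[2:, 1:-1] - om[:-2, 1:-1]) / (2 * h)
    lap = (om[2:, 1:-1] + om[:-2, 1:-1] + om[1:-1, 2:] + om[1:-1, :-2]
           - 4.0 * om[1:-1, 1:-1]) / h**2
    om[1:-1, 1:-1] += dt * (-u * om_x - v * om_y + nu * lap)

# psi dips below zero in the interior: the emergent clockwise vortex
```

The streamfunction is zero on every wall and negative at an interior minimum, which is the primary vortex; only the boundary conditions and the transport equation were specified.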
I know I’m kind of slow.
It appears from the forcings used that GISS centered the forcings so that the 1880 values are zero.
Would one not have to establish that earth was in an equilibrium state in order to model the effect from any deviation in that state?
wayne says:
December 21, 2010 at 12:58 am
Wayne, indeed it makes no mathematical difference whether you adjust the forcings themselves or the sensitivities for each forcing. The latter is somewhat more correct where the forcing is known (as is the case for CO2, at least theoretically, based on absorption spectra), but in other cases, like the forcing attributed to human aerosols, it is far from certain… What is certain is that the current models, with their “one sensitivity fits all” forcings, are far away from the real world and only fit the past temperatures thanks to lots of implied fudge factors (clouds, aerosols, …).
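The interchangeability claim above is easy to demonstrate in a sketch (the forcing series here is synthetic, purely for illustration): in a linear fit, only the product of scale and sensitivity is constrained, so halving a forcing exactly doubles its fitted sensitivity.

```python
import numpy as np

# Rescaling a forcing series and rescaling its sensitivity are
# interchangeable in a linear regression -- the fit is unchanged.
rng = np.random.default_rng(0)
F = np.cumsum(rng.normal(0.0, 0.1, 130))    # made-up forcing, W/m^2
T = 0.5 * F + rng.normal(0.0, 0.05, 130)    # linear "response" + noise

lam1 = np.polyfit(F, T, 1)[0]               # sensitivity fitted to F
lam2 = np.polyfit(0.5 * F, T, 1)[0]         # halve the forcing ...
# ... and the fitted sensitivity exactly doubles; residuals are identical
```

The same holds for any rescaling factor, which is why a hindcast fit alone cannot distinguish “larger forcing, smaller sensitivity” from the reverse.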
Steven Mosher says:
December 21, 2010 at 10:58 am
Do they get all the circulations?, do they get them with the correct frequency.
Current climate models don’t show any natural variability at any frequency between 2 and 90 years. See Fig. S1 in Barnett e.a. supplementary material:
http://www.sciencemag.org/content/suppl/2005/07/07/1112418.DC1/Barnett.SOM.pdf
In other words, they don’t have all the circulations included (or they don’t have them correctly).
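The kind of diagnostic Barnett et al. show can be sketched as follows (the series below is synthetic AR(1) “internal variability”, purely for illustration, not model output): detrend an annual series and measure how much spectral power falls at periods between 2 and 90 years.

```python
import numpy as np

# Generate a red-noise stand-in for internal variability, detrend it,
# and compute the share of variance at periods between 2 and 90 years.
rng = np.random.default_rng(1)
n = 500
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.7 * x[t - 1] + rng.normal()     # AR(1) annual series

t_axis = np.arange(n)
x = x - np.polyval(np.polyfit(t_axis, x, 1), t_axis)   # remove linear trend
freqs = np.fft.rfftfreq(n, d=1.0)            # cycles per year
power = np.abs(np.fft.rfft(x)) ** 2
band = (freqs > 1.0 / 90) & (freqs < 1.0 / 2)  # periods of 2-90 years
frac = power[band].sum() / power[1:].sum()     # variance share in band
```

A model run whose spectrum shows essentially no power in that band, applied the same way, would flag the missing circulations Ferdinand describes.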
Steven Mosher says:
December 21, 2010 at 10:58 am
“Similarly, the atmosphere and the oceans are modelled using the same equations that we use to model fluid flows in say aircraft design.”
Errr…not quite. The equations used to model atmospheric flows are a bit different, with many built-in assumptions to filter out undesirable solutions (like acoustic waves). See chapter 1 of the MIT GCM documentation here…
(NOTE: The foregoing manual shows that you can REALLY document a code well if you put your mind to it…right NASA GISS??).
“If you need complex formulas then you are no Einstein. ”
Would you please solve the Navier Stokes equations then? Please include conjugate heat transfer and phase change effects.
Thanks.
@george E. Smith:
December 21, 2010 at 10:44 am
George, now that brings back some old memories: the HP-9830 desktop with its one-line LED display. I only used one for a few months before our Wang came in. Your experiences seem pretty parallel to mine over that period. We used a Wang computer, eventually with MVP/CDC drives, from that period until PCs came out in the early 80s. Those were fascinating days, and I still cringe at how much was spent on those machines.
onion says:
December 20, 2010 at 4:13 pm
My point is that if the GISSE model can be that closely emulated with a linear model, then the GISSE model itself is very close to linear. But climate is very far from linear. Perhaps that doesn’t bother you. I think it should.
You’re missing the point. If I can very closely emulate a program that requires days of supercomputer time to run with a simple linear model … then the extra complexity of the supercomputer model is only making a trivial improvement. Again, given the huge investment of time, energy, and belief in the models, this should be worrisome.
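The emulation idea can be sketched in a few lines (everything below is synthetic and illustrative: the forcing ramp, the “GCM output”, and the parameter values are stand-ins, not Lucia’s Lumpy code or the GISS data): a one-box model with a single sensitivity lam (°C per W/m²) and a single lag tau (years).

```python
import numpy as np

# One-box lagged-response emulator: T relaxes toward lam*F with
# e-folding time tau. The "GCM" is this response plus noise.
def lagged_response(F, lam, tau):
    # T[n+1] = T[n] + (lam * F[n] - T[n]) / tau : exponential relaxation
    T = np.zeros(len(F))
    for n in range(len(F) - 1):
        T[n + 1] = T[n] + (lam * F[n] - T[n]) / tau
    return T

rng = np.random.default_rng(2)
F = np.linspace(0.0, 2.5, 121)                   # made-up forcing ramp
gcm = lagged_response(F, 0.6, 12.0) + rng.normal(0.0, 0.02, 121)

# Fit the two parameters: brute-force tau, least-squares lam for each
best_r2, lam_hat, tau_hat = -1.0, 0.0, 0
for tau in range(1, 40):
    unit = lagged_response(F, 1.0, tau)          # response with lam = 1
    lam = float(unit @ gcm / (unit @ unit))      # optimal scale factor
    r2 = 1.0 - np.sum((lam * unit - gcm) ** 2) / np.sum((gcm - gcm.mean()) ** 2)
    if r2 > best_r2:
        best_r2, lam_hat, tau_hat = r2, lam, tau
# best_r2 is near 1: two parameters emulate the "GCM" almost exactly
```

The point of the exercise is the one made above: if two fitted numbers reproduce the output this closely, the output is effectively linear in the lagged forcing, whatever is happening inside the box.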
That may be so, and I commented on that in my post. But to make your point real, you need to demonstrate that it can be done, not simply claim it … and Lucia was unable to do it with anything like the fidelity that my model has. So your suggested improvement has to beat both my model and Lucia’s model …
In addition, you seem to forget that we know, not think but know, that the GISSE model warms when there is no change in forcing. In the main post I cited the study above that demonstrates that it warms without forcing change. So the onus is not on me to prove that the GISSE model warms with no change in forcing. The onus is on you to show that it doesn’t.
I await your contribution on both those issues. This is a scientific blog, and I have made, substantiated, and provided data, citations, and other backup for my claims. Time for you to do the same.
Go back and read the head post. I checked the sensitivity both with and without the extra constant term. Both are way, way below what the GISSE modelers claim. Please do us all a favor and read my post very carefully; you are arguing against straw men.
“The answer is known”??? My friend, your faith is positively heartwarming, but tragically misplaced. Haven’t you noticed by now that this is climate science, and that the amount of misinformation is about equal to the amount of information?
My point is simple: my analysis shows a different answer. Your bet that it has to do with the exclusion of the 0.25° per century is a bet I am happy to take. How much money do you want to put on it? Because I already calculated it without the 0.25°/century … you really should do your homework before you offer to bet.
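The 0.25°/century check described here can be sketched as follows (all series are synthetic: a made-up forcing ramp with volcano-like dips, a true sensitivity set to 0.4 °C per W/m² for illustration, and an imposed drift):

```python
import numpy as np

# Estimate sensitivity by regressing "temperature" on total forcing,
# with and without first removing a 0.25 degC/century drift term.
rng = np.random.default_rng(3)
years = np.arange(1880, 2001)
F = np.linspace(0.0, 2.0, years.size)            # made-up forcing ramp
F[np.isin(years, (1883, 1963, 1991))] -= 1.5     # volcano-like dips
T = 0.4 * F + 0.0025 * (years - 1880) + rng.normal(0.0, 0.03, years.size)

X = np.column_stack([F, np.ones_like(F)])        # slope + intercept
lam_raw = np.linalg.lstsq(X, T, rcond=None)[0][0]
T_adj = T - 0.0025 * (years - 1880)              # strip 0.25 degC/century
lam_adj = np.linalg.lstsq(X, T_adj, rcond=None)[0][0]
# lam_raw is inflated by the drift; lam_adj recovers the true 0.4
```

Running the regression both ways, as the post describes, shows directly how much of the apparent sensitivity the drift term is responsible for.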
And yes, certainly there may be a flaw in my method. I’ve been wrong lots of times, haven’t we all … including GISS.
You say that because Gavin Schmidt of GISS or someone else has given us the ‘official answer’ about sensitivity from on high, there must be a mistake in my method. While that might pass in church, it won’t pass here. It is a statement of faith and not a statement of science. Science works by someone building a scientific edifice, and other people using science (math, logic, data, etc.) to tear it down.
Saying “you are wrong, because the answer is known” is not just unscientific. It is anti-scientific. As Richard Feynman commented, “Science is the belief in the ignorance of the experts.” And yes, that includes the experts at GISS who wrote the climate model.
I await your improved model that beats out Lucia’s and mine, and your demonstration that the GISSE model doesn’t warm when the forcings don’t change. That’s how science works, not by you claiming over and over again that I must be wrong. I certainly may be wrong … but you have to show that, not simply claim it.
w.
eadler says:
December 20, 2010 at 5:35 pm
The “basic physics underlying the theory of global warming” means that a model should warm for eighty years in the absence of forcings? Really?? Then why do only some of the models show that warming during the same identical control runs? Are some of the modelers too stupid to put in the basic physics? Serious question, and if you can’t answer it, then you need to do some homework.
You’ll have to explain that better. Read up on the control runs done on these models, and come back and give a better explanation. Because in the model world, with unchanging forcings, none of the various processes you give above (forcing imbalance, etc.) are going on … so why is the model warming?
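The control-run check being argued over here is simple to state in code (the “control run” below is synthetic, with a deliberate 0.3 °C/century drift built in for illustration; it is not GISSE output):

```python
import numpy as np

# With forcings held constant, a model's global temperature should
# show no trend; any residual trend is unforced model drift.
rng = np.random.default_rng(4)
years = np.arange(80)                            # an 80-year control run
ctrl = 0.003 * years + rng.normal(0.0, 0.05, years.size)

slope_per_year = np.polyfit(years, ctrl, 1)[0]   # degC per year
drift_per_century = 100.0 * slope_per_year       # recovers ~0.3 degC/century
```

A near-zero trend would mean the model is energy-balanced under constant forcing; a significant one means it warms with no change in forcing, which is the issue raised in the head post.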
Frank K. says:
December 21, 2010 at 3:10 pm
Steven Mosher says:
December 21, 2010 at 10:58 am
“Similarly, the atmosphere and the oceans are modelled using the same equations that we use to model fluid flows in say aircraft design.”
Errr…not quite.
#######
Of course. And yes, the MIT GCM is a nicely documented piece of work. Over the years the ModelE team has been sprucing up its code and documentation.