Guest Post by Willis Eschenbach
After many false starts, thanks to Steven Mosher and Derecho64 I was able to access the forcings used by the CCSM3 climate model. This is an important model because its successor, the CESM3 model, is going to be used in the laughably named "CIM-EARTH Project." Anyhow, just as new telescopes have "first light" when they are first used, so here I'll provide the first light from the CCSM3 ozone forcings. These are the forcings used by the CCSM3 model in its hindcast of the 20th century (called the "20C3M" simulations in the trade). How well did they do with the hindcast? Not all that well … but that's a future story. This story is about ozone concentrations. Figure 1 shows the concentration at the highest of the 18 atmospheric levels; these concentrations were used as one of the forcings for the 20C3M climate model runs.
Figure 1. Ozone concentration at about 36 km altitude (23 mi), used as input to the CCSM3 20th century (20C3M) simulations.
There are so many things wrong with using that “data” as an input to a climate model that I scarcely know where to start.
First, the provenance. Is this historical data, some kind of record of observations? Nope. Turns out that this is the output of a separate ozone model. So instead of being observations, it’s like a Hollywood movie that’s “based on a true story”, yeah, right … and even then only for part of the time.
Second, what’s up with the strange sub-annual ups and downs (darker sections) in the annual cycle? They start out in the upper part of the annual swing, and then they change to the lower part after about 1970. Nor is this the only altitude level with this kind of oddity. There are 18 levels, and most of them show this strangeness in different forms. Figure 2 shows their claimed ozone concentrations from about half that altitude:
Figure 2. Ozone concentration at about 19 km altitude (12 mi), used as input to the CCSM3 20th century simulations.
Again you can see the sub-annual cycles, but this time only post-1970. Before that, it goes up and down in a regular annual variation, as we would expect. After that, we see the strange mid-year variation. Most other altitude levels show similar oddities. Again, it appears that the modelers are not applying the famous “eyeball test”.
Third, how on earth can they justify using this kind of manufactured, obviously and ridiculously incorrect “data” as input to a climate model? If you are trying to hindcast the 20th century, using that kind of hockeystick nonsense as input to your climate model is not scientific in any sense, and at least gives the appearance that you are cooking the books to get a desired outcome.
Anyhow, that’s not why I wanted to access the forcings. I wanted to compare them to the output of the model, to see if (like the GISS model) it is functionally equivalent to a trivially simple single-line equation. I’m working on that, these things take time. I just posted this up because it was so bizarre and … well … so hockeystick-like.
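As a rough illustration of that kind of single-line-equation comparison, the sketch below fits a one-line lagged-response emulator (a sensitivity times exponentially lagged forcing) to a synthetic "model output" series. The forcing and temperature series, the sensitivity, and the lag are all placeholders, not CCSM3 values; only the structure of the test is the point.

```python
# Toy emulator sketch: does GCM global temperature follow a one-line
# lagged response to total forcing?  T_emulated(t) = sensitivity * F_lagged(t).
# Everything below is synthetic and illustrative, not CCSM3 data.
import numpy as np

def lagged_response(forcing, lam, tau):
    """One-line-style emulator: sensitivity times exponentially lagged forcing."""
    smoothed = np.zeros_like(forcing)
    for t in range(1, len(forcing)):
        smoothed[t] = smoothed[t - 1] + (forcing[t] - smoothed[t - 1]) / tau
    return lam * smoothed

years = np.arange(1900, 2001)
forcing = 0.03 * (years - 1900)                      # pretend total forcing, W/m^2
rng = np.random.default_rng(0)
gcm_temp = lagged_response(forcing, 0.8, 10.0) + 0.05 * rng.standard_normal(years.size)

# Brute-force fit of sensitivity and lag, then see how much variance one line explains.
best = min(((lam, tau) for lam in np.arange(0.2, 1.51, 0.05)
                       for tau in np.arange(2.0, 30.0, 1.0)),
           key=lambda p: np.sum((gcm_temp - lagged_response(forcing, *p)) ** 2))
resid = gcm_temp - lagged_response(forcing, *best)
print("best (sensitivity, lag):", best)
print("variance explained by the one-liner:", round(1 - resid.var() / gcm_temp.var(), 3))
```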
More results as they occur,
w.
And here:
ftp://toms.gsfc.nasa.gov/pub/nimbus7/data/monthly_averages/
ftp://toms.gsfc.nasa.gov/pub/eptoms/data/monthly_averages/
The output here is probably the result of the MAGICC model; the code is available. Since AR4 (you'll probably have to check) the models would take ozone precursors and ozone-depleting substances as inputs, not forcings.
Understand also that if you are trying to understand model results by looking at forcings, you only see the transient response (TCR) in the first hundred years. The equilibrium response takes 600 years or so.
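To make the transient-versus-equilibrium point concrete, here is a toy two-box energy balance model (mixed layer plus deep ocean) run under a step forcing. All parameters are illustrative, not taken from CCSM3 or any other GCM; the point is only that the first century shows the transient response while the equilibrium value takes many centuries to emerge.

```python
# Toy two-box energy balance model illustrating why the first century of a
# run mostly shows the transient response while equilibrium takes centuries.
# All numbers are illustrative, not from any particular GCM.
import numpy as np

F = 3.7                      # step forcing, W/m^2 (roughly 2xCO2)
lam = 1.2                    # feedback parameter, W/m^2/K -> equilibrium = F/lam
C_mix, C_deep = 8.0, 100.0   # heat capacities, W yr / m^2 / K (illustrative)
gamma = 0.7                  # mixed-layer / deep-ocean exchange, W/m^2/K

T, Td = 0.0, 0.0
dt = 0.1                     # years
series = []
for step in range(int(1000 / dt)):
    dT = (F - lam * T - gamma * (T - Td)) / C_mix
    dTd = gamma * (T - Td) / C_deep
    T, Td = T + dT * dt, Td + dTd * dt
    series.append(T)

series = np.array(series)
print("warming after 100 yr  :", round(series[int(100 / dt) - 1], 2), "K")
print("warming after 600 yr  :", round(series[int(600 / dt) - 1], 2), "K")
print("equilibrium (F/lambda):", round(F / lam, 2), "K")
```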
Also, you have to understand whether they prescribed the ozone forcing or not. And don't assume the model used the ozone concentration as an input; you have to look at the notes for the model. Some models do not use all inputs. For example, some don't use volcanic forcing: they would have the data in the input files, but the code doesn't read it in.
So, simply having the input file there is no guarantee that the code reads it in.
details, details, details.
Here, if you want to understand a model, start with this guy.
http://www.gfdl.noaa.gov/blog/isaac-held/
If you really want to understand models, start by asking somebody who works on them. He is very open and will answer any questions.
here are some good places to start
http://www.gfdl.noaa.gov/blog/isaac-held/2011/03/05/2-linearity-of-the-forced-response/#more-624
This is the best
http://www.gfdl.noaa.gov/blog/isaac-held/2011/03/28/6-transient-response-to-the-well-mixed-greenhouse-gases/
“Foregoing a critique of this result for the time being, if we assume that natural variability does not, in fact, confound the century-long trend substantially, the observed warming provides us with a clear choice. Either
the transient sensitivity to WMGGs of the CMIP3 models is too large, on average, or
there is significant negative non-WMGG forcing– aerosols being by far the most likely culprit.
Two simple things to look out for when this issue is discussed:
Watch out for those who estimate the expected 20th century warming due to WMGGs by rescaling equilibrium sensitivity rather than TCR.
Conversely, watch out for those who compare the observed warming to the model's response to CO2 only, rather than the sum of all the WMGGs. If we scale our expectations for warming down by the fraction of the WMGG forcing due to CO2, the model results (without aerosol forcing) happen to cluster in a pleasing way around the observed trend, but one cannot justify isolating CO2's contribution in this way."
jeez says:
May 9, 2011 at 9:27 pm
Real data? But nobody in climate science uses actual observations, that would be so last century …
w.
I’ve seen this play before. It starts with “Well, you’ve got an old version of the data. I’ve misplaced the actual data somewhere, but my assistant will get it to you, really he will.”
Willis — along the lines of the next defense (that it doesn’t really matter anyway), how “robust” are the model results to this data? In all seriousness, though, does this particular input matter that much to the output? I’m sure they will have a reasonable-sounding explanation for why they used fudged data like this, but I’m not versed enough to know what that explanation will be. I’m sure you will figure it out when you have had time to dissect everything.
If the future of the planet weren’t hanging on this, I would have guessed this was an April 1st posting.
Holy mackerel, what more cr@p is there in climate “science”?
I’m surprised they didn’t run the data into the future. A hundred years of future data is just so much better at training the models to deliver the answer you want than having to “fine-tune” the model through trial and error.
Willis, the problem here is that Real World Data – at least from 1978 onwards – does not provide any information about the vertical distribution of ozone and its changes, which is required for your climate model if the model is not equipped with an atmospheric chemistry module. It is a 3-D model, and you therefore need to have this information. Hence, using Real World Data then requires some method to distribute ozone vertically. One way would be to use another model which is tested against whatever Real World Data there is available (I assume that to some extent is what the modellers are trying to do or claim to be doing, but that needs confirmation …). I am not saying that the method is perfect or sufficient; I just don’t see another way around this. It is a commonly used method. Another method that is applied is to use additional balloon observations to reconstruct a seasonally dependent climatological vertical distribution of ozone, but that suffers from the problem that there are insufficient balloon observations to make a detailed reconstruction, hence it also is imperfect.
Just don’t blame me, I don’t claim that models are perfect – far from that – but this is the reality of things.
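As a minimal sketch of the kind of method described above, the snippet below distributes an observed total ozone column across model layers using a fixed fractional profile. The layer fractions are invented for illustration; a real application would use a seasonally and latitudinally varying climatology or a chemistry model.

```python
# Minimal sketch: satellite (TOMS-style) data give only the TOTAL ozone
# column, so a vertical shape has to come from somewhere else.  The layer
# fractions below are placeholders, not a real ozone climatology.
import numpy as np

# Fraction of the total column assigned to each of six layers, surface up.
layer_fractions = np.array([0.03, 0.05, 0.12, 0.35, 0.30, 0.15])
assert abs(layer_fractions.sum() - 1.0) < 1e-9

def distribute_column(total_column_DU, fractions):
    """Split an observed total column (Dobson Units) into per-layer partial columns."""
    return total_column_DU * fractions

observed_total = 305.0   # DU, e.g. a monthly-mean satellite value
partial_columns = distribute_column(observed_total, layer_fractions)
print("per-layer partial columns (DU):", np.round(partial_columns, 1))
print("check total:", partial_columns.sum())
```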
Does anyone have a link to a graph of the real data for comparison?
I know there has been a link posted above, but for some reason my work’s servers are blocking it!
“How on earth can they justify using this kind of manufactured, obviously and ridiculously incorrect “data” as input to a climate model?”
Willis, you wouldn’t want them leaving a known forcing out of the models, would you? – That would mean that their models were inaccurate! … er …
/sarc
Jos says:
May 10, 2011 at 12:45 am
Jos, first off, look at their forcings. They have numbers from 1880 onwards. Computer or no computer … say what?
Second, while I understand the use of models, and I use and have designed computer models, this one is a joke. I mean, no one has said one word about the wacky nature of what the model claims is happening, with the strange sub-annual variation.
Third, if we don’t have information, we don’t have it. We can’t just take our best guess and use it as though it were an observation. Or I guess we can, but the output isn’t science without error estimates … and without observations that’s tough.
Many thanks,
w
Another attempt at straw grabbing so as to keep the old GHG/CAGW story alive.
Flogging a dead horse I am afraid.
Actually, guys, there is no such thing as ‘real’ data. With the Nimbus ‘data’ you have an estimate. Nimbus doesn’t “sample” actual ozone; it estimates ozone, and it uses a model to do that. Of course the MODEL used to calculate the Nimbus data uses … radiative transfer physics.
So if you want to use “real observations” you have the following logical problem: those “observations” of “ozone” are actually the output of a model. That model assumes that certain physics is TRUE. Which physics? Why, the physics of radiative transfer (RTE), of course. And that physics, yes that physics, is the same physics that tells us that CO2 will warm the planet.
Let’s see … the manual on Nimbus … ho hum.
Not ‘observations’ at all, but the uncertain results of calculations that depend upon the same physics that tells us that GHGs warm the planet. And don’t even ask what happens to the ‘observations’ if there is SO2 around.
In summary: there is no such thing as ozone observations. There are estimates made from sensor readings and complex physics models.
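For what it is worth, here is a toy sketch of the general retrieval pattern being described: a radiative-transfer forward model predicts the backscattered-UV signal for a range of ozone columns, and the "observation" is the column whose predicted signal best matches the sensor reading. The forward function below is a made-up stand-in, not the actual Nimbus-7/TOMS algorithm.

```python
# Toy lookup-table retrieval, the general pattern behind TOMS-style ozone
# "observations".  The forward model here is a fake monotonic function,
# NOT the real Nimbus-7 radiative transfer code.
import numpy as np

def forward_model(ozone_column_DU):
    """Stand-in for the radiative-transfer calculation: ozone column -> signal."""
    return np.exp(-0.004 * ozone_column_DU)   # fake attenuation relationship

# Pre-computed lookup table (the real one depends on geometry, reflectivity, etc.)
table_ozone = np.arange(100.0, 601.0, 1.0)
table_signal = forward_model(table_ozone)

def retrieve(measured_signal):
    """Pick the table entry whose predicted signal is closest to the measurement."""
    return table_ozone[np.argmin(np.abs(table_signal - measured_signal))]

rng = np.random.default_rng(1)
true_column = 325.0
measured = forward_model(true_column) * (1 + 0.01 * rng.standard_normal())  # noisy sensor
print("retrieved ozone column:", retrieve(measured), "DU")
```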
A real scientist predicted that the ozone layer would deplete at the poles in midwinter, and he then found a way to measure it. Alas, this poor bugger was pushed aside and the discovered rarefaction of O3 blamed on us mortals; this fitted rather well into the agenda that Western civilization is evil. This study is just another snout in the money trough of stupid.
“Third, if we don’t have information, we don’t have it. We can’t just take our best guess and use it as though it were an observation. Or I guess we can, but the output isn’t science without error estimates … and without observations that’s tough.”
Actually, we do this all the time.
In the case of a hindcast what you are trying to do is a reconstruction, so you may very well guess at certain factors. In a hindcast the guessed-at parameter IS the hypothesis, not the observation.
Observation 1: the historical record.
Observation 2: the GCM output.
Hypothesis: if ozone looked like this, then Obs 1 will equal Obs 2.
Now, people don’t describe it this way, but logically the guess at historical ozone IS the hypothesis.
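Written out as code, that logic looks roughly like this. Everything below is a synthetic placeholder (the "GCM", the historical record, and the candidate ozone series); it only illustrates the structure of the argument that the guessed ozone history is the hypothesis being tested.

```python
# Sketch of the hindcast-as-hypothesis logic: the guessed ozone history is
# the hypothesis, judged by whether the model driven with that guess
# reproduces the observed record.  All series are synthetic placeholders.
import numpy as np

def toy_gcm(ozone_series):
    """Stand-in for the GCM: maps a candidate ozone forcing to a temperature series."""
    return 0.1 * np.cumsum(ozone_series - ozone_series.mean())

observed_record = np.linspace(0.0, 0.6, 100)     # Obs 1: the historical record
candidate_ozone = np.linspace(6.0, 5.4, 100)     # the hypothesis: guessed ozone history

model_output = toy_gcm(candidate_ozone)          # Obs 2: the GCM output
rmse = np.sqrt(np.mean((model_output - observed_record) ** 2))
print("RMSE between hindcast and record:", round(rmse, 3))
print("hypothesis retained" if rmse < 0.1 else "hypothesis rejected")
```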
Joanna Haigh has pointed out that ozone concentrations below 45km decreased whilst the sun was quiet from 2004 to 2007 but increased above that level.
Comments?
FergalR says:
May 9, 2011 at 6:51 pm
Someone please tell me where my thinking is faulty here:
An oxygen molecule (O2) in the atmosphere absorbs UV light which splits it into free-radical Oxygen (O) which happily combines with O2 molecules to form ozone (O3). Thus creating an ozone layer at the top of the atmosphere.
Ozone is slightly better at absorbing UV than O2 – but absorption bands are widened by increasing concentration – so with O2 being 20% of our atmosphere in the >10km of the atmosphere below the ozone layer then . . .
I guess what I’m trying to say is that the ozone hole thing is utter bullshit. Almost all of the UV will be absorbed by oxygen before it reaches ground level with or without an ozone layer.
————————————————————————————–
The problem with your analysis is that you have not looked carefully enough at the mechanisms of absorption and the related wavelengths.
Nitrogen at very short wavelengths and oxygen at wavelengths below about 250 nm absorb all the high-energy photons entering the earth’s atmosphere. They do this largely within the thermosphere, which is why it is so hot. The absorption process is one of photodissociation, forming monatomic nitrogen and oxygen. As one descends through the stratosphere between 100 km and about 15 km, the rate of absorption reduces and the temperature falls. Here monatomic oxygen and molecular oxygen combine to form ozone. This ozone absorbs almost all UVc below 280 nm and much of the UVb between 280 nm and 320 nm. Molecular oxygen is a poor absorber of UV at wavelengths longer than the roughly 250 nm necessary for photodissociation, so it is not relevant when discussing UV absorption within the stratosphere and troposphere, since no photons of such high energy penetrate this far. The increased density at lower levels is therefore not relevant either.
Whether the ozone hole is bullshit or not is another question. But UVb would certainly be at least an order of magnitude higher if there was no ozone.
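A back-of-the-envelope Beer-Lambert check supports that last point. The cross-sections and column amounts below are rough textbook values, used only for illustration: above its ~242 nm photodissociation threshold molecular oxygen essentially stops absorbing, while a ~300 DU ozone column is hugely opaque in the UVC and cuts UVB by roughly an order of magnitude.

```python
# Rough Beer-Lambert check: optical depths of the O3 and O2 columns in the
# UVC and UVB.  Cross-sections and columns are approximate textbook values,
# for illustration only.
import numpy as np

DU_TO_MOLEC_CM2 = 2.687e16            # 1 Dobson Unit in molecules/cm^2
o3_column = 300.0 * DU_TO_MOLEC_CM2   # ~300 DU ozone column
o2_column = 4.5e24                    # approx. O2 molecules/cm^2 in a vertical column

# Approximate absorption cross-sections (cm^2 per molecule)
sigma = {
    "255 nm (UVC)": {"O3": 1.1e-17, "O2": 0.0},   # O2 ~0 above its 242 nm threshold
    "300 nm (UVB)": {"O3": 3.0e-19, "O2": 0.0},
}

for band, xs in sigma.items():
    tau_o3 = xs["O3"] * o3_column
    tau_o2 = xs["O2"] * o2_column
    print(f"{band}: ozone optical depth {tau_o3:6.1f} "
          f"(transmission {np.exp(-tau_o3):.2e}), O2 optical depth {tau_o2:.1f}")
```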
By the way I have mentioned on a previous post that, contrary to what you read in the scare press, there is no obvious link between Ozone depletion and an increase in skin cancer. The highest levels of skin cancer are found in Australia and New Zealand. These two countries sit beneath a band of HIGH ozone concentrations – the highest in the world. So the cause of their high levels of skin cancer cannot be due to low levels of ozone even if a reduction had happened. Research 10 years ago suggested that the cause of the high incidence was due to the relative high levels of UVa (which is not removed by ozone) compared with UVb (which is). High levels of UVb increase the levels of melanin in the skin which does block UVa very effectively. The obvious question was why evolution had found a way to remove UVa and not UVb? The implication was that UVa was the danger and other circumstantial evidence from the research seemed to support this.
I am very suspicious that this research was suppressed by the pharmaceuticals industry because, if true, most sunscreens (which also remove UVb and not much UVa) would actually be increasing the risk of cancer by reducing our natural protection. The fact that the increased use of sunscreens and the rate of skin cancer have been directly correlated for many decades suggests that this might be true, although it has to be stressed that correlation does not prove causality.
It is interesting to note that, now that sunscreens with good UVa protection compared with UVb are available (star rating), the earlier research is now being widely quoted. I raise this here because of my suspicion that science was manipulated to maximise profit at the expense of people’s lives. Climate science is not unique.
Nice one! Thank god there are people like you examining this junk science in detail and exposing these blatant frauds.
steven mosher says:
May 10, 2011 at 2:50 am
Thanks as always for your thoughts, mosh. So is it your claim that the estimate of ozone used as a forcing and shown in Figure 1 is the result of a “complex physics model”? I’m not sure what your point is here, or even exactly which models you are talking about.
Let me be clear that I am asking a real, and not a rhetorical, question here, mosh. So I’ll ask it again: is the ozone estimate shown in Figure 1 the output of a “complex physics model”?
Because to me, it looks like the output of a bozo-simple AND INCORRECT computer model. Unless you think the historical record actually does look like Fig. 1.
A “yes” or “no” to my question will be sufficient, although of course an explanation is always welcome. Once I know which model you’re talking about, this might all be clearer.
w.
Mr. Eschenbach,
Regardless of the conclusions you were able to draw from the data, I am curious after having read the other thread. Was the data indeed not possible to access without special privileges, or were Mosher and Derecho correct all along and it was a case of user error, one which they helped you with?
Your final update gave no sense of how it was resolved. You made some fairly interesting accusations that the data was locked away from the common man. Was it? If not, it might be nice to set the record straight. If it was, did those guys get you access somehow?
steven mosher says:
May 10, 2011 at 3:13 am
We do? We make up fake historical “data” to feed a model, and on the basis of the results of using that fake data we claim that our model is accurate enough to forecast a century out into the future? We do that “all the time”?
An example would be worth a lot more than a bald assertion here … because maybe you do that all the time, but I sure don’t. I don’t feed half-data/half-straight-line-fantasy hockeysticks like Figure 1 into a model and call it anything but GIGO. It is certainly not science.
w.
So, the output of one computer simulation based on no actual measured data, is used as input to a subsequent computer simulation to imagine where ozone concentrations are heading.
I have two questions:
#1: What are these people smoking?
#2: When is funding for this mental masturbation going to be pulled?
Willis,
I will make it brutally simple.
You posted “data” that is the output of a model that estimates ozone.
jeez posted ‘data’ that is supposedly observations from Nimbus.
You commented, sarcastically, that nobody in climate science uses observations.
I am pointing out that the observations jeez refers to are ACTUALLY the result of a complex physics model plus sensor values: a physics model known as radiative transfer code, the heart and soul of a GCM. Most people miss that.
So the larger point is this. Very often on WUWT I see this knee-jerk reaction: “they use models, not observations.” What this naive view misses is that ALL observation, and especially satellite observation, relies on and is shot through and through with modelling. We don’t have ‘observations’ on one hand and models on the other hand. We have models of laws and models of observation.
So one can’t simply reject an input series because it is the result of a model. Why? Because all observation is shot through with modelling and theory.
What that means is that we have to understand the various models that create “observations”. In some cases the model is simple: a thermometer doesn’t measure temperature, it models it. In other cases, like ozone, the models are complex.
So I hesitate to have the knee-jerk reaction and just conclude that a model that produces observations is wrong, because all observation is the result of some sort of modelling or theoretical assumptions. Until I know what the model is, I’ll SUSPEND judgement rather than render judgement. That’s true skepticism, and consistent.
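The thermometer remark is easy to make concrete. A platinum resistance thermometer measures resistance, and a calibration model turns that into a temperature; the sketch below uses the standard Callendar-Van Dusen form for temperatures at or above 0 °C with the commonly quoted IEC 60751 coefficients, purely to illustrate that even a "simple" measurement passes through a model.

```python
# "A thermometer doesn't measure temperature, it models it" in miniature:
# a Pt100 element measures RESISTANCE, and the Callendar-Van Dusen model
# (standard IEC 60751 coefficients, valid here for T >= 0 C) converts the
# reading into a temperature.  Illustration only.
import math

R0 = 100.0          # sensor resistance at 0 C (a "Pt100" element), ohms
A = 3.9083e-3       # 1/C
B = -5.775e-7       # 1/C^2

def resistance_to_temperature(R):
    """Invert R = R0*(1 + A*T + B*T^2) for the temperature, valid for T >= 0 C."""
    return (-A + math.sqrt(A * A - 4.0 * B * (1.0 - R / R0))) / (2.0 * B)

print(resistance_to_temperature(100.0))    # ~0 C
print(resistance_to_temperature(138.51))   # ~100 C
```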
“A thermometer doesn’t measure temperature, it models it. In other cases, like ozone, the models are complex.”
I think there is a level of complexity at which simple measuring segues into modelling, and so the higher the level of complexity, the less reliable the outcome.
In normal parlance it is unreasonable to call a measurement a model and unreasonable to call a model a measurement.
Thus, querying the output of a model is wise but querying the output from a measurement is unwise.
Mosh’s attempt to conflate the two is disingenuous and the silliest position of all.
“We do? We make up fake historical “data” to feed a model, and on the basis of the results of using that fake data we claim that our model is accurate enough to forecast a century out into the future? We do that “all the time”?”
It’s done routinely. I’ll give you a simple example.
I had to build a flight model of an airplane where we did not have all the subsystem data. In particular we had no data on the accelerometer used in the actual aircraft.
That data was required to build a model of the flight control system.
What we did have was flight test data: how the plane actually flew. So we start by building a model with a black box for the accelerometer. We have no data for it. What do we do?
Well, a couple of approaches:
1. Somebody guesses.
2. Somebody bounds the problem using back-of-the-envelope estimates.
3. Somebody builds a model of the accelerometer.
4. Somebody researches existing accelerometers and guesses from that using a model of technology improvement.
Then you put in your best guess and compare your model, with its made-up historical data about the accuracy of the accelerometer, to the actual flight test data.
Done all the time in certain fields. All the time.
Logically, as I explained, you are simply testing the hypothesis that the unknown historical data is equal to your guess.
Hypothesis: if we set unknown variable to X, then model output will match historical record.
Done all the time. As I said, in these cases the historical data functions logically as your hypothesis. Operationally it looks like your input, but epistemically it’s the hypothesis.
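As a sketch of that workflow, the snippet below bounds the unknown sensor parameters (a bias and a gain, chosen purely for illustration), runs a stand-in "flight model" for each guess, and keeps the guess that best matches the flight-test record. The real exercise is obviously far more involved; this only shows the guess-and-compare structure.

```python
# Guess-and-compare calibration sketch: unknown accelerometer bias and gain
# are grid-searched so the stand-in flight model best matches flight-test
# data.  The "flight model" is a trivial placeholder, not real dynamics code.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 200)
flight_test = np.sin(t) + 0.02 * rng.standard_normal(t.size)   # recorded flight-test data

def flight_model(accel_bias, accel_gain):
    """Stand-in flight model whose output depends on the guessed sensor parameters."""
    return accel_gain * np.sin(t) + accel_bias

# Bound the problem, then grid-search the guessed parameters against the record.
best_guess, best_err = None, np.inf
for bias in np.arange(-0.2, 0.21, 0.01):
    for gain in np.arange(0.8, 1.21, 0.01):
        err = np.sqrt(np.mean((flight_model(bias, gain) - flight_test) ** 2))
        if err < best_err:
            best_guess, best_err = (round(bias, 2), round(gain, 2)), err
print("best (bias, gain):", best_guess, " RMSE:", round(best_err, 4))
```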
“So, the output of one computer simulation based on no actual measured data, is used as input to a subsequent computer simulation to imagine where ozone concentrations are heading. ”
That’s not how it works.
For historical data where you have no measurements you have to estimate the value.
For the recent instrumented period (1978 onwards) you have “observation” data, which is heavily processed sensor readings, not ozone measures. For future ozone you have scenarios: what-ifs.
1. What if ozone is constant?
2. What if it goes up?
3. What if it goes down?
You’re familiar with this kind of analysis: what happens to your IRA if rates go up, down, or sideways, etc.
All very normal, done all the time. You live by this stuff.
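The what-if structure is equally simple to sketch: run the same (toy) model under constant, rising, and falling ozone scenarios and compare the outcomes. The scenarios and the model below are placeholders, not anything from CCSM3.

```python
# Scenario analysis in miniature: run one toy model under "constant", "up",
# and "down" ozone what-ifs and compare the outcomes.  Placeholders only.
import numpy as np

years = np.arange(2000, 2101)
base = 300.0                       # starting ozone column, DU (illustrative)

scenarios = {
    "constant": np.full(years.size, base),
    "up":       base + 0.3 * (years - years[0]),
    "down":     base - 0.3 * (years - years[0]),
}

def toy_model(ozone_series):
    """Stand-in model: some climate-relevant output driven by the ozone series."""
    return 0.002 * np.cumsum(ozone_series - base)

for name, ozone in scenarios.items():
    print(f"{name:8s}: outcome in 2100 = {toy_model(ozone)[-1]:6.2f} (arbitrary units)")
```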