First Light on the Ozone Hockeystick

Guest Post by Willis Eschenbach

After many false starts, thanks to Steven Mosher and Derecho64 I was able to access the forcings used by the CCSM3 climate model. This is an important model because its successor, the CESM model, is going to be used in the laughably named “CIM-EARTH Project.” Anyhow, just as new telescopes have “first light” when they are first used, so here I’ll provide the first light from the CCSM3 ozone forcings. These are the forcings used by the CCSM3 model in its hindcast of the 20th century (called the “20C3M” simulations in the trade). How well did they do with the hindcast? Not all that well … but that’s a future story. This story is about ozone concentrations. Figure 1 shows the concentration at the highest of the model’s 18 atmospheric levels, which was used as one of the forcings for the 20C3M climate model runs.

Figure 1. Ozone concentration at about 36 km altitude (23 mi), used as input to the CCSM3 20th century (20C3M) simulations. 
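If you want to poke at the forcing file yourself, the extraction is straightforward. Here is a minimal sketch in Python; the file name, variable names, and level ordering are stand-ins (my assumptions), not necessarily what your copy of the CCSM3 archive uses:

    import numpy as np
    import netCDF4
    import matplotlib.pyplot as plt

    # All names here are stand-ins: check your own copy of the forcing files for
    # the real file name, variable name, and whether index 0 is the top level.
    ds = netCDF4.Dataset("ccsm3_ozone_20c3m.nc")   # hypothetical file name
    o3 = ds.variables["O3"][:, 0, :, :]            # assume level 0 = topmost of 18
    level_mean = o3.mean(axis=(1, 2))              # plain mean over lat and lon
    # (a proper global mean would area-weight by cos(latitude); see the
    # discussion of area-weighting in the comments)

    plt.plot(level_mean)
    plt.xlabel("month index")
    plt.ylabel("O3 concentration")
    plt.show()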

There are so many things wrong with using that “data” as an input to a climate model that I scarcely know where to start.

First, the provenance. Is this historical data, some kind of record of observations? Nope. Turns out that this is the output of a separate ozone model. So instead of being observations, it’s like a Hollywood movie that’s “based on a true story”, yeah, right … and even then only for part of the time.

Second, what’s up with the strange sub-annual ups and downs (darker sections) in the annual cycle? They start out in the upper part of the annual swing, and then they change to the lower part after about 1970. Nor is this the only altitude level with this kind of oddity. There are 18 levels, and most of them show this strangeness in different forms. Figure 2 shows their claimed ozone concentrations from about half that altitude:

Figure 2. Ozone concentration at about 19 km altitude (12 mi), used as input to the CCSM3 20th century simulations. 

Again you can see the sub-annual cycles, but this time only post-1970. Before that, it goes up and down in a regular annual variation, as we would expect. After that, we see the strange mid-year variation. Most other altitude levels show similar oddities. Again, it appears that the modelers are not applying the famous “eyeball test”.

Third, how on earth can they justify using this kind of manufactured, obviously and ridiculously incorrect “data” as input to a climate model? If you are trying to hindcast the 20th century, using that kind of hockeystick nonsense as input to your climate model is not scientific in any sense, and at least gives the appearance that you are cooking the books to get a desired outcome.

Anyhow, that’s not why I wanted to access the forcings. I wanted to compare them to the output of the model, to see if (like the GISS model) it is functionally equivalent to a trivially simple single-line equation. I’m working on that; these things take time. I just posted this because it was so bizarre and … well … so hockeystick-like.
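For the curious, here is roughly what that test looks like – a toy sketch in Python, with made-up numbers standing in for the real CCSM3 forcings and output. With the real data you would fit lambda and tau rather than assume them:

    import numpy as np

    # One-line lagged emulator: T[n+1] = T[n] + (lam*F[n] - T[n]) * (1 - exp(-1/tau))
    def emulate(forcing, lam, tau):
        a = 1.0 - np.exp(-1.0 / tau)
        T = np.zeros_like(forcing)
        for n in range(len(forcing) - 1):
            T[n + 1] = T[n] + (lam * forcing[n] - T[n]) * a
        return T

    rng = np.random.default_rng(0)
    forcing = np.cumsum(rng.normal(0, 0.05, 150))   # stand-in net forcing series
    # fake "GCM output" built from the emulator plus noise, so the match below
    # is ~1 by construction; with real output you would fit lam and tau instead
    gcm_output = emulate(forcing, 0.5, 3.0) + rng.normal(0, 0.01, 150)
    print(np.corrcoef(gcm_output, emulate(forcing, 0.5, 3.0))[0, 1])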

More results as they occur,

w.

John M
May 9, 2011 4:32 pm

So these are the stratospheric ozone levels that are supposed to account for the “goldilocks” effect in explaining TLS temperatures?
http://www.remss.com/msu/msu_data_description.html
Just enough to offset the monotonic increase in tropospheric ghgs.
Ri–i-i-i-i-ght.

May 9, 2011 4:32 pm

Talk about “biasing the model” to get what you want.
This has been one of my several CONSISTENT THEMES. Ozone is FORMED by high UV. It does not “protect” us from UV.
The ozone “holes” at the poles are due to the lower amount of UV they receive. (Lots of SPF 50 sold in Alaska, NOT! Unless you climb mountains…)
Since we have very sparse info on ozone prior to WWII, anything prior to WWII is speculation.
Alas, thanks again Willis!

Bob in Castlemaine
May 9, 2011 5:04 pm

Speaking of holes, ozone, data and other things – maybe this best sums up the reasoning that underpins CIM-EARTH’s methodology.

Jean Parisot
May 9, 2011 5:07 pm

Instead of trying to predict and model the future, why don’t we measure it well for one hundred years … then consider modeling it? We have plenty to fight over just in deciding what and how to measure it. Hopefully one hundred years will get the short-term cyclic stuff bounded – then we can work on the trends.

Braddles
May 9, 2011 5:07 pm

Is there any direct observational evidence that the Antarctic ozone ‘hole’ was absent before 1975? I mean measurements not involving computer models. I can’t say I’ve seen any, though I’m happy to be proven wrong.

Cam (Melbourne, Australia)
May 9, 2011 5:19 pm

I would LOVE someone to do an in-depth dissection of each of the 20 or so GCMs that the IPCC refers to (as the Holy Gospel) and look at the inputs and data (like the O3 discussed above, for example), and then we can all see how wrong these models actually are.

May 9, 2011 5:29 pm

The great (re-)discovery of Climate Science: If you control the Garbage In, you get your preferred Garbage Out.

Wondering Aloud
May 9, 2011 5:32 pm

Nice. The entire first 2/3 of the “data” set is purely made up and represents a time for which no ongoing measurements exist; and the rest only reflects gradually developing measurement techniques and wishful thinking?

MattN
May 9, 2011 5:32 pm

That is appaling science….

Charles Higley
May 9, 2011 5:38 pm

How in blazes do they pretend to have ozone data that is in any way relevant back to 1880? Are they hoping that the public will think that we were watching it way back then?
I love the idea that they assume the hole is unusual, when we have never seen the Antarctic without one.

Keith D
May 9, 2011 5:39 pm

Sounds like true-to-form science to me. I know that for a guy like you, Willis, you don’t have a pre-formed conclusion, but where’s the fun in actually finding a conclusion when starting with one is so much easier? LOL. I love watching you work. I’m glad you help us sort this data mess out. Go get ’em!

Michael D Smith
May 9, 2011 5:50 pm

Just wait until you get to aerosols…

May 9, 2011 5:54 pm

Looks to me like we have the famous 60-year cycle once again. It was dropping from around 1970 until around 2000, and then it started rising again, and probably will for 30-odd years.
Looks like the same happens to the temperature, except it is inverted.
That’s the problem when you start measuring something over time, and assume that it was all perfectly static before you started. You quickly discover (if you are bright enough) that assumption is the mother of all stuff-ups. CAGW is just the biggest of those stuff-ups to date, is all.

Latitude
May 9, 2011 5:55 pm

So let’s use the output from one model – that can’t be verified – to feed into a second model – and verify it.
FAIL

Tom T
May 9, 2011 6:01 pm

Wild guess in wild guess out.

wayne
May 9, 2011 6:09 pm

Bizarre. I agree Willis, they are cooking the books. But to what tune?

Mark T
May 9, 2011 6:24 pm

The oscillation leads me to believe the model that generated this data is marginally stable, i.e., there is a pole at z = 1.
Mark

FergalR
May 9, 2011 6:51 pm

Someone please tell me where my thinking is faulty here:
An oxygen molecule (O2) in the atmosphere absorbs UV light which splits it into free-radical Oxygen (O) which happily combines with O2 molecules to form ozone (O3). Thus creating an ozone layer at the top of the atmosphere.
Ozone is slightly better at absorbing UV than O2 – but absorption bands are widened by increasing concentration – so with O2 being 20% of our atmosphere in the >10km of the atmosphere below the ozone layer then . . .
I guess what I’m trying to say is that the ozone hole thing is utter bullshit. Almost all of the UV will be absorbed by oxygen before it reaches ground level with or without an ozone layer.

oldgamer56
May 9, 2011 6:52 pm

Silly me. When I taught Middle School Science one year, we learned that when conducting an experiment, you could only adjust one factor/variable in order to ensure a valid experiment.
Either you start with actual data or you use the average of the actual data to fill in the missing years, with the caveat that the data for the missing years is suspect. Instead they made up data readings that are completely out of the actual range.
So any model that uses this data is automatically invalid before 1970, for that reason alone. And this is only one small layer of the onion.
How does one get a job like this, and what sort of personal ethics must you put aside in order to do this day after day?

kadaka (KD Knoebel)
May 9, 2011 7:06 pm

MattN said on May 9, 2011 at 5:32 pm:

That is appaling science….

Nah, just appalling. ☺

Konrad
May 9, 2011 7:09 pm

Sadly it is painfully clear what this ozone graph represents. It does not represent ozone levels fluctuating with the quantity of UV at the top of the atmosphere, nor from volcanic gases. This graph represents the fantasy of CFCs depleting the ozone layer and the rebound after they were banned.
The Montreal Protocol could be seen as a test run for the AGW hoax. Possibly the AGW scammers wanted to include a piece of their first successful scam for luck.

JRR Canada
May 9, 2011 8:03 pm

Thank you Willis. Is this another climax of climatology? The team keep chirping that it’s worse than we thought, and every time you and others expose their work to the light it seems they are correct: their grasp of the scientific method and integrity is worse than we thought. While I hold to “never attribute to malice what incompetence will cover,” this models-all-the-way-down theology is an awful lot of incompetence to accept, especially when it’s funded by my taxes. I await your deeper analysis, but suspect you have summed it up already.

Mike Bromley the Kurd
May 9, 2011 8:16 pm

Where do these people get off spreading this garbage around? It is as though the wilder the claim, the more likely it is to pass peer review. Instead of the Library Stacks, we can find more science in the checkout lane at Safeway. It seems that any science of worth is done by aging ‘old school’ thinkers who actually DO think. What the hell is going to happen when we start croaking en masse? Woe be to the droning druids of future science. To quote Pete Townshend, “hope I die before I get old” … because watching “science” like this absolute crap … no, data-less flagellation … and its policy outcome (pigheaded pols) in action is agonizing, insulting, and frightening.
Thanks again, Willis, for your tireless efforts to expose this garbage for what it is. One can only hope that the leader of some funding committee gets fiercely fed up and pulls the plug. I said hope. Only.

Doug in Seattle
May 9, 2011 8:34 pm

I think Konrad’s got the underlying assumption the model’s incorporating, but what about the funky mid-year fluctuations? Perhaps this is the hemispheric concentration of man-made CFCs working inside the assumption?

May 9, 2011 9:27 pm

Willis, here is the real data, which begins in late 1978.
http://toms.gsfc.nasa.gov/n7toms/nim7toms.html
Everything before that date is speculation, largely based on the work of good ol’ Susan Solomon.

May 9, 2011 10:12 pm

The output here is probably the result of the MAGICC model; the code is available. Since AR4 (you’ll probably have to check) the models would take ozone precursors and depleting substances as inputs, not forcings.
Understand also that if you are trying to understand model results by looking at forcings, you only see the TCR in the first hundred years. The equilibrium response takes 600 years or so.
Also, you have to understand whether they prescribed ozone forcing or not. And don’t assume the model used the ozone concentration as an input. You have to look at the notes for the model. Some models do not use all inputs. For example, some don’t use volcanic forcing. They would have the data in the input files, but the code doesn’t read it in.
So, simply because the input file is there is no guarantee that the code reads it in.
Details, details, details.
Here, if you want to understand a model, start with this guy:
http://www.gfdl.noaa.gov/blog/isaac-held/
If you really want to understand models, start by asking somebody who works on them. He is very open and will answer any questions. Here are some good places to start:
http://www.gfdl.noaa.gov/blog/isaac-held/2011/03/05/2-linearity-of-the-forced-response/#more-624
This is the best:
http://www.gfdl.noaa.gov/blog/isaac-held/2011/03/28/6-transient-response-to-the-well-mixed-greenhouse-gases/
“Foregoing a critique of this result for the time being, if we assume that natural variability does not, in fact, confound the century-long trend substantially, the observed warming provides us with a clear choice. Either
– the transient sensitivity to WMGGs of the CMIP3 models is too large, on average, or
– there is significant negative non-WMGG forcing – aerosols being by far the most likely culprit.
Two simple things to look out for when this issue is discussed:
– Watch out for those who estimate the expected 20th century warming due to WMGGs by rescaling equilibrium sensitivity rather than TCR.
– Conversely, watch out for those who compare the observed warming to the model’s response to CO2 only, rather than the sum of all the WMGGs. If we scale our expectations for warming down by the fraction of the WMGG forcing due to CO2, the model results (without aerosol forcing) happen to cluster in a pleasing way around the observed trend, but one cannot justify isolating CO2’s contribution in this way.”

Dan White
May 9, 2011 10:34 pm

I’ve seen this play before. It starts with “Well, you’ve got an old version of the data. I’ve misplaced the actual data somewhere, but my assistant will get it to you, really he will.”
Willis – along the lines of the next defense (that it doesn’t really matter anyway), how “robust” are the model results to this data? In all seriousness, does this particular input matter that much to the output? I’m sure they will have a reasonable-sounding explanation for why they used fudgy data like this, but I’m not versed enough to know what that explanation will be. I’m sure you will figure it out when you have had time to dissect everything.

Steeptown
May 9, 2011 11:06 pm

If the future of the planet weren’t hanging on this, I would have guessed this was an April 1st posting.
Holy mackerel, what more cr@p is there in climate “science”?

ferd berple
May 9, 2011 11:28 pm

I’m surprised they didn’t run the data into the future. A hundred years of future data is just so much better at training the models to deliver the answer you want than having to “fine tune” the model through trial and error.

Jos
May 10, 2011 12:45 am

Willis, the problem here is that Real World Data – at least from 1978 onwards – does not provide any information about the vertical distribution of ozone and its changes, which is required for your climate model if the model is not equipped with an atmospheric chemistry module. It is a 3-D model, and you therefore need to have this information. Hence, using Real World Data then requires some method to distribute ozone vertically. One way would be to use another model which is tested against whatever Real World Data there is available (I assume that to some extent is what the modellers are trying to do or claim to be doing, but that needs confirmation …). I am not saying that the method is perfect or sufficient; I just don’t see another way around this. It is a commonly used method. Another method that is applied is to use additional balloon observations to reconstruct a seasonally dependent climatological vertical distribution of ozone, but that suffers from the problem that there are insufficient balloon observations to make a detailed reconstruction, hence it also is imperfect.
Just don’t blame me. I don’t claim that models are perfect – far from it – but this is the reality of things.

jamie
May 10, 2011 12:48 am

Does anyone have a link to a graph of the real data for comparison?
I know there has been a link posted above, but for some reason my work’s servers are blocking it!

Steve C
May 10, 2011 1:27 am

“How on earth can they justify using this kind of manufactured, obviously and ridiculously incorrect “data” as input to a climate model?”
Willis, you wouldn’t want them leaving a known forcing out of the models, would you? – That would mean that their models were inaccurate! … er …
/sarc

May 10, 2011 2:44 am

Another attempt at straw grabbing so as to keep the old GHG/CAGW story alive.
Flogging a dead horse, I am afraid.

May 10, 2011 2:50 am

Actually, guys, there is no such thing as ‘real’ data. You have an estimate with the Nimbus ‘data.’ Nimbus doesn’t “sample” actual ozone. It estimates ozone. And it uses a model to do that. Of course the MODEL used to calculate the Nimbus data uses … radiative transfer physics.
So, if you want to use “real observations” you have the following logical problem:
those “observations” of “ozone” are actually the output of a model.
That model assumes that certain physics is TRUE. Which physics?
Why, the physics of RTE, of course. And that physics, yes that physics, is the same physics that tells us that CO2 will warm the planet.
Let’s see … the manual on Nimbus …

“Total ozone is retrieved by calculating the radiances that would be measured for different column ozone amounts and
determining which column ozone amount yields the measured radiances. Detailed radiative transfer calculations are
used to determine backscattered radiance as a function of total ozone and the conditions of the measurement:
geometry, surface pressure, surface reflectivity, and latitude. A particular set of measured radiances is then compared
with the set of calculated radiances appropriate to the conditions of the measurement.
Some of the radiation detected by the satellite has been reflected by the surface below or scattered from the
atmosphere after such reflection. Thus, the reflecting properties of the surface must be known. If the reflectivity is
independent of wavelength, radiances at two wavelengths, one sensitive to atmospheric ozone and one not, can be
used to derive atmospheric ozone and reflectivity. This technique is the pair determination method used in previous
versions. The Version 7 algorithm allows for a component of reflectivity that is linear with wavelength. It uses
radiances at a third, longer, ozone insensitive wavelength to yield this linear term. The three wavelengths constitute a triplet.
An initial estimate of ozone is derived using a wavelength pair. Radiances are calculated for this ozone estimate.
Then, the ratios of calculated and measured radiances (in practice, the difference of the logarithms) at a triplet of
wavelengths can be used to solve simultaneously for the reflectivity, its wavelength dependence, and a correction to
the ozone estimate. This process may be iterated. The choice of triplet wavelengths is based upon the optical path
length of the measurement. ”

ho hum.

Uncertainties in the ozone values derived from the TOMS measurements have several sources: errors in the
measurement of the radiances, errors in the values of physical input from laboratory measurements, errors in the parameterization of atmospheric properties used as input to the radiative transfer computations, and limitations in the
way the computations represent the physical processes in the atmosphere.

Not ‘observations’ at all, but the uncertain results of calculations that depend upon the same physics that tells us that GHGs warm the planet. And don’t even ask what happens to the ‘observations’ if there is SO2 around.
In summary: there is no such thing as ozone observations. There are estimates made from sensor readings and complex physics models.
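To make the retrieval logic concrete, here is a toy sketch of the table-lookup idea the manual describes, in Python. The one-line “forward model” is a made-up stand-in for the detailed radiative transfer calculation, and every number is invented for illustration:

    import numpy as np

    # Toy table-lookup retrieval. calc_radiance() is a made-up stand-in for the
    # detailed radiative transfer calculation; the numbers are invented.
    def calc_radiance(ozone_du):
        return np.exp(-ozone_du / 300.0)   # backscattered radiance vs column ozone

    table_o3 = np.linspace(100.0, 600.0, 501)   # candidate column amounts (DU)
    table_rad = calc_radiance(table_o3)         # "calculated radiances"

    measured = calc_radiance(325.0)             # pretend sensor reading
    # find the column amount whose calculated radiance matches the measured one
    # (np.interp needs an increasing x-axis, hence the reversal)
    retrieved = np.interp(measured, table_rad[::-1], table_o3[::-1])
    print(retrieved)                            # ~325 DU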

wayne Job
May 10, 2011 3:01 am

A real scientist predicted that the ozone layer would deplete at the poles in the middle of winter; he then found a way to measure it. Alas, this poor bugger was pushed aside and the discovered rarefaction of O3 blamed on us mortals; this fitted rather well into the agenda that Western civilization is evil. This study is just another snout in the money trough of stupid.

May 10, 2011 3:13 am

“Third, if we don’t have information, we don’t have it. We can’t just take our best guess and use it as though it were an observation. Or I guess we can, but the output isn’t science without error estimates … and without observations that’s tough.”
Actually, we do this all the time.
In the case of a hindcast, what you are trying to do is a reconstruction. So you may very well guess at certain factors. In a hindcast the guessed-at parameter IS the hypothesis, not the observation.
Observation 1: the historical record.
Observation 2: the GCM output.
Hypothesis: if ozone looked like this, then Obs 1 will be equal to Obs 2.
Now, people don’t describe it this way, but logically, the guess at historical ozone IS the hypothesis.
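A toy sketch of that logic in Python. The one-line “GCM” and all of the data are invented purely for illustration:

    import numpy as np

    # The guessed ozone series is the hypothesis; it survives only if the model
    # run (Obs 2) matches the historical record (Obs 1). The one-line "GCM" and
    # all of the data below are invented purely for illustration.
    def run_gcm(ozone):
        return 0.01 * np.cumsum(ozone - ozone.mean())

    years = np.arange(1900, 2001)
    rng = np.random.default_rng(1)
    true_ozone = 300 + 10 * np.sin(2 * np.pi * (years - 1900) / 60.0)
    historical_record = run_gcm(true_ozone) + rng.normal(0, 0.02, years.size)  # Obs 1

    ozone_guess = np.full(years.size, 300.0)   # the hypothesis: constant ozone
    model_output = run_gcm(ozone_guess)        # Obs 2
    rmse = np.sqrt(np.mean((model_output - historical_record) ** 2))
    print(f"RMSE = {rmse:.3f}; reject the guess if this exceeds the observational error")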

Stephen Wilde
May 10, 2011 3:51 am

Joanna Haigh has pointed out that ozone concentrations below 45km decreased whilst the sun was quiet from 2004 to 2007 but increased above that level.
Comments?

cal
May 10, 2011 4:16 am

FergalR says:
May 9, 2011 at 6:51 pm
Someone please tell me where my thinking is faulty here:
An oxygen molecule (O2) in the atmosphere absorbs UV light which splits it into free-radical Oxygen (O) which happily combines with O2 molecules to form ozone (O3). Thus creating an ozone layer at the top of the atmosphere.
Ozone is slightly better at absorbing UV than O2 – but absorption bands are widened by increasing concentration – so with O2 being 20% of our atmosphere in the >10km of the atmosphere below the ozone layer then . . .
I guess what I’m trying to say is that the ozone hole thing is utter bullshit. Almost all of the UV will be absorbed by oxygen before it reaches ground level with or without an ozone layer.
————————————————————————————–
The problem with your analysis is that you have not looked carefully enough at the mechanisms of absorption and the related wavelengths.
Nitrogen at very short wavelengths and oxygen at wavelengths below about 250nm absorb all the high energy photons entering the earth’s atmosphere. They do this largely within the thermosphere, which is why it is so hot. The absorption process is one of photodissociation, forming monatomic nitrogen and oxygen. As one descends through the stratosphere between 100km and about 15km, the rate of absorption reduces and the temperature falls. Here monatomic oxygen and molecular oxygen combine to form ozone. This ozone absorbs almost all UVc below 280nm and much of the UVb between 280nm and 320nm. Molecular oxygen is a poor absorber of UV at wavelengths longer than the 250nm necessary for photodissociation, so it is not relevant when discussing UV absorption within the stratosphere and troposphere, since no photons of such high energy penetrate this far. The increased density at lower levels is therefore not relevant either.
Whether the ozone hole is bullshit or not is another question. But UVb would certainly be at least an order of magnitude higher if there was no ozone.
By the way, I have mentioned in a previous post that, contrary to what you read in the scare press, there is no obvious link between ozone depletion and an increase in skin cancer. The highest levels of skin cancer are found in Australia and New Zealand. These two countries sit beneath a band of HIGH ozone concentrations – the highest in the world. So the cause of their high levels of skin cancer cannot be due to low levels of ozone, even if a reduction had happened. Research 10 years ago suggested that the cause of the high incidence was the relatively high levels of UVa (which is not removed by ozone) compared with UVb (which is). High levels of UVb increase the levels of melanin in the skin, which does block UVa very effectively. The obvious question was why evolution had found a way to protect against UVa and not UVb. The implication was that UVa was the danger, and other circumstantial evidence from the research seemed to support this.
I am very suspicious that this research was suppressed by the pharmaceuticals industry because, if true, most sunscreens (which also remove UVb and not much UVa) would actually be increasing the risk of cancer by reducing our natural protection. The fact that the increased use of sunscreens and the rate of skin cancer have been directly correlated for many decades suggests that this might be true, although it has to be stressed that correlation does not prove causality.
It is interesting to note that now that sunscreens with good UVa protection, compared with UVb, are available (star rating), the earlier research is now being widely quoted. I raise this here because of my suspicion that science was manipulated to maximise profit, at the expense of people’s lives. Climate science is not unique.

May 10, 2011 4:56 am

Nice one! Thank god there are people like you examining this junk science in detail and exposing these blatant frauds.

BPW
May 10, 2011 10:02 am

Mr. Eschenbach,
Regardless of the conclusions you were able to draw from the data, I am curious after having read the other thread. Was the data indeed not possible to access without special privileges, or were Mosher and Derecho correct all along and it was a case of user error, one which they helped you with?
Your final update gave no sense of how it was resolved. You made some fairly interesting accusations that the data was locked away from the common man. Was it? If not, it might be nice to set the record straight. If it was, did those guys get you access somehow?

May 10, 2011 2:24 pm

So, the output of one computer simulation based on no actual measured data is used as input to a subsequent computer simulation to imagine where ozone concentrations are heading.
I have two questions:
#1: What are these people smoking?
#2: When is funding for this mental masturbation going to be pulled?

May 10, 2011 2:49 pm

Willis,
I will make it brutally simple.
You posted “data” that is the output of a model that estimates ozone.
jeez posted ‘data’ that is supposedly observations from Nimbus.
You commented, sarcastically, that nobody in climate science uses observations.
I am pointing out that the observations that jeez refers to are ACTUALLY the result of a complex physics model plus sensor values: a physics model known as radiative transfer codes, the heart and soul of a GCM. Most people miss that.
So the larger point is this. Very often on WUWT I see this knee-jerk reaction: “they use models, not observations.” What this naive view misses is that ALL observation, and especially satellite observation, relies on and is shot through and through with modelling. We don’t have ‘observations’ on one hand and models on the other hand. We have models of laws and models of observation.
So, one can’t simply reject an input series because it is the result of a model. Why? Because all observation is shot through with modelling and theory.
What that means is that we have to understand the various models we have that create “observations.” In some cases the model is simple. A thermometer doesn’t measure temperature, it models it. In other cases, like ozone, the models are complex.
So I hesitate to have the knee-jerk reaction and just conclude that a model that produces observations is wrong, because all observation is the result of some sort of modelling or theoretical assumptions. Until I know what the model is, I’ll SUSPEND judgement rather than render judgement. That’s true skepticism. And consistent.

Stephen Wilde
May 10, 2011 3:18 pm

“A thermometer doesnt measure temperature, it models it. In other cases, like ozone, the models are complex.”
I think there is a level of complexity at which simple measuring segues into modelling and so the higher the level of complexity the less reliable the outcome.
In normal parlance it is unreasonable to call a measurement a model and unreasonable to call a model a measurement.
Thus, querying the output of a model is wise but querying the output from a measurement is unwise.
Mosh’s attempt to conflate the two is disingenuous and the silliest position of all.

May 10, 2011 3:30 pm

“We do? We make up fake historical “data” to feed a model, and on the basis of the results of using that fake data we claim that our model is accurate enough to forecast a century out into the future? We do that “all the time”?”
It’s done routinely. I’ll give you a simple example.
I had to build a flight model of an airplane where we did not have all the subsystem data. In particular we had no data on the accelerometer used in the actual aircraft.
That data was required to build a model of the flight control system.
What we did have was flight test data: how the plane actually flew. So, we start by building a model with a black box for the accelerometer. We have no data for it. What do we do?
Well, a couple of approaches:
1. Somebody guesses.
2. Somebody bounds the problem using back-of-the-envelope estimates.
3. Somebody builds a model of the accelerometer.
4. Somebody researches existing accelerometers and guesses from that, using a model of technology improvement.
Then you put in your best guess and compare your model, with its made-up historical data about the accuracy of the accelerometer, to the actual flight test data.
Done all the time in certain fields. All the time.
Logically, as I explained, you are simply testing the hypothesis that the unknown historical data is equal to your guess.
Hypothesis: if we set unknown variable to X, then model output will match the historical record.
Done all the time. As I said, in these cases the historical data functions logically as your hypothesis. Operationally it looks like your input, but epistemically it’s the hypothesis.
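A toy sketch of that procedure in Python. The one-line “flight model,” the unknown noise parameter, and all the numbers are invented for illustration, not any real aircraft:

    import numpy as np

    # Toy version of the exercise described above. The "flight model" is a
    # single damped oscillation, and the unknown accelerometer parameter is
    # its noise level; both are invented for illustration.
    t = np.linspace(0.0, 10.0, 500)

    def simulate(noise_sigma, seed):
        rng = np.random.default_rng(seed)
        # stand-in airframe response: damped pitch oscillation plus sensor noise
        return np.exp(-0.3 * t) * np.sin(2.0 * t) + rng.normal(0, noise_sigma, t.size)

    flight_test = simulate(0.05, seed=42)   # the "real" flight-test data

    # sweep guesses; keep the one whose simulated trace has the same
    # high-frequency roughness as the measured trace
    target = np.std(np.diff(flight_test))
    guesses = np.linspace(0.01, 0.20, 20)
    errors = [abs(np.std(np.diff(simulate(g, seed=0))) - target) for g in guesses]
    print(f"best guess for the sensor noise: {guesses[np.argmin(errors)]:.3f}")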

May 10, 2011 3:35 pm

“So, the output of one computer simulation based on no actual measured data, is used as input to a subsequent computer simulation to imagine where ozone concentrations are heading. ”
That’s not how it works.
For historical data where you have no measurements, you have to estimate the value. For the recent instrumented period (1978 on) you have “observation” data, which is heavily processed sensor readings, not ozone measures. For future ozone you have scenarios: what-ifs.
1. What if ozone is constant?
2. What if it goes up?
3. What if it goes down?
You’re familiar with this kind of analysis. What happens to your IRA if the rates go up, down, sideways, etc.?
All very normal. Done all the time. You live by this stuff.
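The IRA analogy as a one-liner sketch in Python; the balance and rates are made up:

    # same projection, three rate scenarios; the balance and rates are made up
    balance = 100_000.0
    for name, rate in [("rates up", 0.07), ("sideways", 0.05), ("rates down", 0.03)]:
        print(f"{name:10s}: ${balance * (1 + rate) ** 20:,.0f} after 20 years")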

Jim Clarke
May 10, 2011 3:56 pm

(Willis) “Third, if we don’t have information, we don’t have it. We can’t just take our best guess and use it as though it were an observation. Or I guess we can, but the output isn’t science without error estimates … and without observations that’s tough.”
(Steve) “Actually we do this all the time.”
(Willis) “We do? We make up fake historical “data” to feed a model, and on the basis of the results of using that fake data we claim that our model is accurate enough to forecast a century out into the future? We do that “all the time”?”
(Me) Yes, Willis, they do that all the time, but that is not the only thing ‘we’ do. If there is data that appears viable but does not support the desired theory, that data is ignored or rewritten, as in the infamous Hockey Stick. The long-accepted natural climate fluctuations of the last 2,000 years were simply dismissed by one poorly done ‘study’. All evidence to the contrary was abandoned and ignored. On the other hand, if there is actual data that we know for a fact is inaccurate, but supports the desired theory, it is accepted without question. This was the case with the studies that linked AGW to increasing hurricanes in particular, and severe weather in general. We know for a fact that historical hurricanes were undersampled, both in number and intensity, as were floods, tornadoes, severe thunderstorms, lightning strikes, hail, snowstorms and even earthquakes. In AGW logic, none of these things actually existed before people began recording the events. It is the ‘Schrödinger’s Cat’ hypothesis of the human impact on climate.
So here is how data is handled in almost all climate science that points to a significant human influence on global climate:
1. If it supports a significant human influence on global climate, but is obviously flawed, use it and defend it as is.
2. If it does not support a significant human influence on global climate, but is robust, ignore it or discredit it with less robust data.
3. If it does not exist…make it up in such a way to support a significant human influence on global climate.
It has been like this now for 20 years at least. There is no climate ‘crisis’ science that would get a passing grade at a middle school science fair! NONE! All such studies ‘cook’ the data!

May 10, 2011 4:08 pm

“Because to me, it looks like the output of a bozo-simple AND INCORRECT computer model. Unless you think the historical record actually does look like Fig. 1.
A ‘yes’ or ‘no’ to my question will be sufficient, although of course an explanation is always welcome. Once I know which model you’re talking about, this might all be clearer.”
I’m not convinced of the following.
1. that you’ve downloaded the data properly.
2. that you’ve plotted it properly.
3. that the model actually uses the data.
Once I had looked into all those matters and the actual provenance and the model, and read the papers, then I’d hazard an opinion on the data. I’d also contact the PI.
Since I’ve seen people be wrong about the simplest things, I wouldn’t say something looked wrong. It would make me curious, not incredulous.
There are two forms of skepticism:
1. X is wrong.
2. I don’t know if X is wrong.
I always adopt #2. Folks should know that by now.

May 10, 2011 4:38 pm

Wilde.
“A thermometer doesnt measure temperature, it models it. In other cases, like ozone, the models are complex.”
I think there is a level of complexity at which simple measuring segues into modelling and so the higher the level of complexity the less reliable the outcome.
#######
Philosophically, there is no sharp line. Pragmatically we do act differently toward certain types of “observations,” but that pragmatic difference is not really underpinned by any epistemic feature. Now of course, your assumption that the higher the level of complexity, the less reliable the result, IS a testable hypothesis. What we would find is that this theory about complexity and reliability is most likely confirmed by our experience. But it’s not a necessary truth. It’s a good rule of thumb. What you see here is that the methods and assumptions that we actually USE in science are shot through with theory, rules of thumb, and pragmatic choices.
The point of this is to be very cautious when saying “that’s not science,” or “science works only this way.” The idea that there is a model over here and data over there is an ideal. An ideal that upon close philosophical inspection is not real.
None of this means that the model which produced the ozone is correct or incorrect. But it does bear investigation. There is also the ironic issue that all satellite data relies on the very physics at the heart of the AGW debate. So that if you want to reject AGW physics in TOTAL, you’d best ask yourself why you rely on satellite data … or your GPS, for that matter. Some fellows around here, for example, questioned general relativity. BUT they use a GPS. That device uses GR to perform its function. It relies on GR being true. Part of what I’m doing is pointing out to folks that some of the data they rely on relies on the physics of AGW, and they don’t even know it. So you use something that you don’t believe in. Weird.

May 10, 2011 8:34 pm

This is the first AGW discussion I’ve ever seen that devolved to The Allegory of the Cave.

Stephen Wilde
May 10, 2011 11:13 pm

” There is also the ironic issue that all satellite data relies on the very physics at the heart of the AGW debate. So that if you want to reject AGW physics in TOTAL, you’d best ask yourself why you rely on satellite data … or your GPS for that matter.”
That’s a neat sidestep, Steve, but I’m not sure it is good enough.
The thing is that predictability out in the real world is key. If one actually measures a temperature then on the basis of well established general physical principles there are measurable and predictable outcomes in any well defined situation.
If one merely assumes a temperature (or an ozone quantity) on the basis of a model then the same does not follow and the test of the validity of the model is a comparison with separate real world measurements of any type that are verifiable well enough to constitute a meaningful comparator.
As far as I can see ALL climate science is in the latter category and the ONLY available comparator is what we can measure separately in the global environment so the further one goes towards a modelled scenario the more likely that it will diverge from any identifiable independent reality.
So then we come to the difference between our respective approaches.
I think that the reliance by you and many others on modelling and assumptions rather than measurements and logical conceptual constructs based on first principles is silly whereas you think that my reliance on measurements and logical constructs based on first principles rather than modelling is silly.
In reality there is merit in both approaches but each of us needs to accept the limitations of our differing methods. Both are faithful to the scientific method within reason but my approach is age old whereas the modelling approach is very recent.
There is an overlap in that in a sense my logical constructs are also primitive models but I think they are far more flexible by reason of their simplicity and closeness to established physical laws. The more complex a model becomes the more it adds contentious assumptions to those established physical laws and the more it will diverge from reality unless it is constantly challenged and adjusted to reduce that divergence.
There is currently a remarkable tendency on the part of AGW proponents to resist the implications of developing divergences.
Anyway my point here is to assert that my approach is as valid as anyone else’s and perhaps more so (certainly more in line with historical scientific endeavour over the ages before sophisticated models) because there are numerous bits of data that could come in and many climate events that could occur to invalidate, or require adjustment of, the propositions that I have put forward here and elsewhere.
Just don’t expect me to make any concessions to mere models or so-called reconstructions. Climate science has been so influenced by wishful thinking for over 25 years now that I think EVERY reconstruction has been skewed by the biases of those who created them. I keep coming across reconstructions, each from reputable sources, that say the opposite of each other. A particular example arose in the BCP thread, where it became clear that one reconstruction suggests El Nino getting stronger for the last 500 years and another suggests El Nino getting weaker for the last 500 years.
So all in all I would like to establish cordial relations with you but would appreciate some expression of regret for the thoughtless insults you directed at me on the BCP thread.

Alexander K
May 11, 2011 4:06 am

Thanks, Willis, your pricking the bubbles of scientific pomposity and unexamined self-righteousness that arise from the swamp that science has become always makes me smile in delight.
On the other hand, Mosh’s wild justifications for making stuff up are not funny at all.

nandheeswaran jothi
May 11, 2011 9:51 am

steven mosher says:
May 10, 2011 at 4:38 pm
There is also the ironic issue that all satellite data relies on the very physics at the heart of the AGW debate. So that if you want to reject AGW physics in TOTAL, you’d best ask yourself why you rely on satellite data … or your GPS for that matter.
That statement of yours is quite a non sequitur.
It is possible to start theories/hypotheses from the same principles of physics, but introduce an error or fake data into one of them, and come up with two separate, unrelated results. Then, if I say one of the results was wrong, you can’t claim that both results have to be wrong.
If the same principles of physics were used in developing a temperature model based on what a satellite measures and in the AGW mumbo-jumbo, you cannot say I must accept both or neither. We can use measurements from other techniques to validate satellite temps.
When we take the same approach to all the AGW predictions, they fail consistently. You cannot equate the science used in temperature models generated from sat measurements and AGW models.

Stephen Wilde
May 11, 2011 3:03 pm

Phew, that’s a relief.
Mosh getting a hammering after giving me a bit of anxiety on another thread just because of his ‘big’ name.
Naturally everyone puts a lot of themselves into their opinions so the big hitters are just as vulnerable as the rest of us.
So, whoever you are, don’t give in to intellectual bullying.

EllisM
May 12, 2011 7:26 pm

Quick question, Willis – have you looked at the other levels in that data? Did you properly area-weight the values, or just sum them? Have you thought about looking at the geographic distributions of both the stratospheric and trop ozone?
A vertical integral over the relevant sigma ranges might be interesting.
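For reference, here is a minimal sketch of the area-weighting in question, in Python: grid boxes shrink toward the poles, so each latitude row gets a cos(latitude) weight in the global mean. The 2.5-degree grid here is hypothetical, not necessarily the CCSM3 grid:

    import numpy as np

    # Grid boxes shrink toward the poles, so a global mean must weight each
    # latitude row by cos(latitude). The 2.5-degree grid here is hypothetical.
    lat = np.linspace(-88.75, 88.75, 72)                 # latitude band centers
    field = np.random.default_rng(7).random((72, 144))   # one time step (lat, lon)

    simple_mean = field.mean()                           # over-counts the poles
    w = np.cos(np.deg2rad(lat))[:, None]
    weighted_mean = (field * w).sum() / (w.sum() * field.shape[1])
    print(simple_mean, weighted_mean)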

EllisM
May 15, 2011 4:51 pm

Are you sure you did area-weighting? I grabbed the file too and couldn’t replicate your plot with it. A simple sum of the values over all the grid boxes for the ozone data, then dividing by the total number of boxes did replicate your plot. Not that it really changes anything, but it’s more correct.