"People underestimate the power of models. Observational evidence is not very useful."

Guest post by Alec Rawls

[Image: NASA cosmic rays illustration]

Andrew Orlowski at the UK Register has an anecdotal account of Downing College’s skeptics-vs-believers mash-up. Ace of Spades pulled the juiciest bit:

In short, the day lined up Phil Jones, oceanographer Andrew Watson, and physicist Mike Lockwood, the latter to argue that the sun couldn’t possibly have caused recent warming. He was followed by the most impressive presentation from Henrik Svensmark, whose presentation stood out head and shoulders above anyone else. Why? For two reasons. The correlations he shows are remarkable, and don’t need curve fitting, or funky statistical tricks. And he has advanced a mechanism, using empirical science [image above], to explain them.

At the other end of the scale, by way of contrast, the Met's principal research scientist John Mitchell told us: “People underestimate the power of models. Observational evidence is not very useful,” adding, “Our approach is not entirely empirical.”

Yes, you could say that.

Lockwood’s failed argument against a solar explanation

Orlowski on Lockwood:

The strongest argument, according to Lockwood, for the sun not being a driver in recent climatic activity is that “it has been going in the wrong direction for 30 years.”

Hmmm. So as soon as solar magnetic activity passed its peak, when it was still at some of the highest levels ever recorded, these very high levels of solar activity could no longer have caused warming?

As I have noted a number of times, this argument depends on an unstated assumption that, by 30 years ago (by 1980 or so), ocean temperatures had equilibrated to whatever forcing effect the 20th century’s high level of solar activity might be having. Otherwise the continued high level of forcing would continue to create warming until equilibrium was reached, regardless of whether solar activity had peaked yet. (The actual peak seems to have been solar cycle 22, which ran from 1986 to 1996, not around 1980 as Lockwood’s “30 years” implies.)

When I pressed Lockwood on his implicit equilibrium assumption, he justified it by citing evidence that the ocean temperature response to solar activity peters out (as measured by decorrelation) within a few years:

Almost all estimates have been in the 1-10 year range.

But decorrelation between surface temperatures and solar activity is very different from equilibrium. All that decorrelation measures is the rapid temperature response of the upper ocean layer when solar activity rises or falls. That rapid response indicates that the sun is indeed a powerful driver of global temperature, but it says next to nothing about how long it takes for heat to work its way into and out of the deeper ocean layers.

This point was brought out by AGW believers like Gavin Schmidt, who are concerned about the energy-balance implications of equilibration speed: in a simple energy balance model, rapid equilibration implies (other things being equal) that climate sensitivity must be low. Since belief in dangerous warming depends on high climate sensitivity, the rapid-equilibration claim cited by Lockwood had to be shot down, which was managed quite successfully (ibid).
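For readers who want the algebra behind that last point, here is a minimal sketch of the standard zero-dimensional energy-balance argument, assuming a single well-mixed heat reservoir of effective capacity C responding to a constant step forcing F with equilibrium sensitivity λ (my illustration, not anything presented at the conference):

```latex
% Zero-dimensional energy balance model: one well-mixed reservoir of
% effective heat capacity C, equilibrium sensitivity \lambda, constant forcing F.
\[
  C\,\frac{dT}{dt} = F - \frac{T}{\lambda}
  \quad\Longrightarrow\quad
  T(t) = \lambda F\bigl(1 - e^{-t/\tau}\bigr),
  \qquad \tau = C\,\lambda .
\]
```

For a given heat capacity C, the equilibration time τ and the sensitivity λ rise and fall together, so a system that genuinely equilibrated within a few years would, in this simple picture, also have to be a low-sensitivity system.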

In sum, Lockwood’s rapid-equilibrium assumption is dead and buried, leaving him no grounds for dismissing a solar explanation for post-1970s warming. I’ll keep an eye out for video of Lockwood’s presentation, but I doubt he mentioned the rapid-equilibrium assumption on which his argument depends.

More punk students

Remember these graduate student “climate scientists,” going all Clockwork Orange for the planet or something:

Sounds like they made an appearance at Downing College too:

The audience had been good enough to heed Howard’s opening advice that “if anybody mentions Climategate, they’ll be evicted”. Nobody ambushed the CRU crew all day – it was all very polite. I noted that the skeptics made a point of listening politely to the warmists, and applauding them all. A group of students and a few others, however, simply giggled and mocked the skeptics from start to finish. One of their tutors (I presume) was in hysterics all day.

Give ’em an A. They learned their “observational evidence is not very useful” lesson well.

132 Comments
RexAlan
May 17, 2011 3:24 am

Well said Cassandra: my thoughts entirely!
Never before have so few words described so perfectly the demise of a mass delusion.

Theo Goodwin
May 17, 2011 3:37 am

John A. Fleming says:
May 16, 2011 at 6:26 pm
“The actual observational data is just another instance of those climate experiments. All you have to do is show that your model could have generated the historical measurements, if only you had had enough butterfly wings to initialize the model with.”
I guess your subconscious is rational and brilliant. This is the heart of the matter. Notice that the big problem here is a confusion of science (my model produced these results, and I selected them as the model’s product) with scientific methodology (your model could have generated those results, though it didn’t). It is not enough that models are open to investigation through scientific method; they must also produce results that become reasonably well-confirmed in experience.

Steve from Rockwood
May 17, 2011 3:42 am

Laurie says:
May 16, 2011 at 8:40 pm
[snip]… There are many words …[snip] should I worry my pretty little head …[snip] Are you smart enough to write clearly…[snip]?
Laurie, I am not a great writer. My boss once enrolled me in a writing class where I was asked to submit a sample of my writing. The teacher wrote “wordy” at the top and nothing else. The sarcasm burned as I looked around and saw that everyone else had written only a single page. I had written three.
But let’s put your pretty little head in charge of climate science. You run the show. Judging from your earlier posts, probably the first thing you would do is tell scientists to stop making shrill comments about the end of the world. Next you would tell them to stop blaming extreme weather on global warming. Then you would move on to detaching environmental causes and political motivation from climate science. You’ve done a great job on your first day. Now what? You really want to advance climate science, not just add to the noise.
So you look to the observational record, which has large swings in both directions from El Niños, volcanic eruptions and the like, plus a definite warming trend starting in the 1880s that was interrupted twice by cooling trends. At the same time, humans have been adding CO2 to the atmosphere by burning fossil fuels. And you know that humans do have an impact on temperatures at a local level, which you measure as the Urban Heat Island effect.
People are pressing you for predictions about the future. You remind them that climate is something that plays out over several decades. You can’t point to 1998 and compare it to 1934 as the warmest year any more than the last ten years show an end to a long-term, multi-decadal, century-plus warming trend.
So you look to the models. What do they say? Well if you add aerosols to the atmosphere it tends to have a cooling effect. If you introduce more CO2 to the atmosphere it tends to have a warming effect.
Then some guy (on your team) shows up at a conference and, in a brief fit of honesty, concedes that the observational data is almost useless, less useful than the models.
So Laurie, as you are now head of climate science, I’ll let you finish the story because I’m not very good at endings.

Theo Goodwin
May 17, 2011 3:44 am

Steve from Rockwood says:
May 16, 2011 at 7:32 pm
“My gut feeling is that the variation in output of these models is far below the observational variance (that was really the point of my earlier post).”
To readers of this forum, you are likely to come across as unaware of the relevant context. Most everyone here will probably agree that there is no connection between models used by climate scientists and observation.

cedarhill
May 17, 2011 3:50 am

Now we know who writes all those lottery prediction apps that have been floating around the internet for years – John Mitchell and Phil Jones. Actually, those apps have a success rate an order of magnitude greater than their climate models.

1DandyTroll
May 17, 2011 4:02 am

So, essentially, nobody can trust his observations about either models or observations.
Does the ever-so-LSD-colorful output of their models have anything to do with their putting their trust in the “great and all-knowing model-god”?

old44
May 17, 2011 4:25 am

There goes 2,000 years of science down the gurgler: no useful evidence until they started using computers in the ’50s.

Brian H
May 17, 2011 4:39 am

Since so many are complaining about that one quote, I fixed it:
“People underestimate the ~~power~~ profitability of models. Observational evidence is not very ~~useful~~ convenient,” adding, “Our approach is not ~~entirely~~ noticeably ~~empirical~~ scientific.”

Allan M
May 17, 2011 4:42 am

John Mitchell told us: “People underestimate the power of models.”
These people are “power” mad.
——
Mike McMillan says:
May 16, 2011 at 5:58 pm
“I am a climate scientist”
“I am the walrus”
“I Am the Very Model of a Modern Major-General”
Beatles and Gilbert & Sullivan got ‘em beat, homey.

You left out one of the best:
http://en.wikipedia.org/wiki/Michael_Flanders
I’m a gnu. I’m a gnu.
The g-nicest work of g-nature in the zoo.

MarkW
May 17, 2011 5:26 am

Over at NRO, I recently had a warmista tell me that not only have the models perfectly hindcasted past climates, but that models predicted the existence of oceanic cycles such as the PDO years before they were recognized by other scientists.

Nicola Scafetta
May 17, 2011 5:45 am

The arguments advanced by Lockwood are questionable in several respects.
In this peer-reviewed paper of mine I explicitly show the several limitations of Lockwood’s assumptions and methodology.
N. Scafetta, “Empirical analysis of the solar contribution to global mean air surface temperature change,” Journal of Atmospheric and Solar-Terrestrial Physics 71 1916–1923 (2009), doi:10.1016/j.jastp.2009.07.007.
http://www.fel.duke.edu/~scafetta/pdf/ATP2998.pdf
See also:
http://wattsupwiththat.com/2009/08/18/scafetta-on-tsi-and-surface-temperature/
This is the comment on Lockwood contained in my paper:
“The above wide range strongly contrasts with some recent estimates such as those found by Lockwood (2008), who calculated that the solar contribution to global warming is negligible since 1980: the sun could have caused from a -3.6% using PMOD to a +3+1% using ACRIM. In fact, Lockwood’s model is approximately reproduced by the ESS1 curve that refers to the solar signature on climate as produced only by those processes characterized with a short time response to a forcing. Indeed, the characteristic time constants that Lockwood found with his complicated nonlinear multiregression analysis are all smaller than one year (see his table 1) and the climate sensitivity to TSI that he found is essentially equal to my k_{1S}! Likely, Lockwood’s model was unable to detect the climate sensitivity to solar changes induced by those climate mechanisms that have a decadal characteristic time response to solar forcing: mechanisms that must be present in nature for physical reasons. As proven above, these mechanisms are fundamental to properly model the decadal and secular trends of the temperature because they yield high climate sensitivities to solar changes.”
A similar response also applies to Lean and Rind’s approach. Again from my paper:
“Analogously, my findings contrast with Lean and Rind (2008), who estimated that the sun has caused less than 10% of the observed warming since 1900. The model used by Lean and Rind, like Lockwood’s model, is not appropriate to evaluate the multidecadal solar effect on climate. In fact, Lean and Rind do not use any EBM to generate the waveforms they use in their regression analysis. These authors assume that the temperature is just the linear superposition of the forcing functions with some fixed time-lags. They also ignore ACRIM TSI satellite composite. While Lean and Rind’s method may be sufficiently appropriate for determining the 11-year solar cycle signature on the temperature records there used, the same method is not appropriate on multidecadal scales because climate science predicts that time-lag and the climate sensitivity to a forcing is frequency dependent. Consequently, as Lockwood’s model, Lean and Rind’s model too misses the larger sensitivity that the climate system is expected to present to solar changes at the decadal and secular scales.

I have shown that the processes with a long time response to climate forcing are fundamental to correctly understanding the decadal and secular solar effect on climate (see ESS2 curve). With simple calculations it is possible to determine that if the climate parameters (such as the albedo and the emissivity, etc.) change slowly with the temperature, the climate sensitivity to solar changes is largely amplified as shown in Eq. (10).”

Nicola Scafetta
May 17, 2011 5:58 am

Moreover, on the issue of whether Lockwood’s or Lean’s methodology agrees with the climate models:
It is important to note that both Lockwood’s and Lean’s methodologies apparently agree with the climate models in claiming that the sun is a small driver of the climate. However, the agreement is only apparent.
In fact, Lockwood’s and Lean’s methodologies assume that there exists only a very fast characteristic climate time response to solar variations, which implies a very small heat capacity of the system. Lockwood uses a characteristic time response of T < 1 year, and Lean, with her linear regression model, assumes essentially T = 0 years!
The climate models instead have at least a decadal climate time response because they more properly model the heat capacity of the ocean.
Thus the apparent agreement between Lockwood’s and Lean’s methodologies and the climate models is due to the fact that, on one side, Lockwood and Lean use methodologies that imply only very fast time responses (that is, very small heat capacities) to solar-related forcings, while on the other side the climate models do not contain many of the alternative solar-climate mechanisms, such as the sun–cosmic-ray–cloud system.
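A minimal numerical sketch of the point being made here (illustrative only: the time constants and sensitivities below are made-up numbers, not values from Scafetta, Lean, or Lockwood): simulate a climate with a fast and a slow response box driven by an idealized 11-year cycle, then fit a zero-lag regression of temperature on forcing.

```python
# Illustrative sketch only -- not Scafetta's, Lean's, or Lockwood's actual model.
# Two-box climate: a fast (sub-annual) and a slow (decadal) response to an
# idealized 11-year solar cycle. A zero-lag linear regression of temperature
# on forcing recovers mostly the fast sensitivity and misses the slow one.
# All time constants and sensitivities are hypothetical numbers.
import numpy as np

dt = 0.1                                        # time step, years
t = np.arange(0.0, 300.0, dt)                   # 300-year run
forcing = 0.5 * np.sin(2 * np.pi * t / 11.0)    # idealized 11-year cycle

def box_response(F, tau, k, dt):
    """Single box obeying tau * dT/dt = k*F - T, integrated by explicit Euler."""
    T = np.zeros_like(F)
    for i in range(1, len(F)):
        T[i] = T[i - 1] + dt * (k * F[i - 1] - T[i - 1]) / tau
    return T

T_fast = box_response(forcing, tau=0.8,  k=0.2, dt=dt)   # fast, low sensitivity
T_slow = box_response(forcing, tau=12.0, k=0.6, dt=dt)   # slow, higher sensitivity
T_total = T_fast + T_slow

# Zero-lag regression of temperature on forcing (the Lean-style fit as
# characterized above): the fitted slope is the "sensitivity" such a method sees.
k_fit = np.polyfit(forcing, T_total, 1)[0]
print(f"true equilibrium sensitivity : {0.2 + 0.6:.2f}")
print(f"zero-lag regression estimate : {k_fit:.2f}")
```

With these made-up numbers the regression comes out near 0.2 rather than the true 0.8: the slow, high-heat-capacity component is strongly attenuated and phase-lagged at the 11-year period, which is the sense in which a zero-lag fit “sees” only a very small effective heat capacity.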

Carbonicus
May 17, 2011 6:01 am

“People underestimate the power of models. Observational evidence is not very useful,” adding, “Our approach is not entirely empirical.”
Funny how “observational (empirical) evidence” is “very useful” when it nicely fits your predetermined outcome. Witness Mann, Briffa, et al.
When your paleo reconstruction hits a point in time where the extrapolations no longer suit your predetermined outcome, and you splice in an “observational” (empirical) instrumental record to help overcome the problem, it’s funny how “useful” the empirical becomes.
The abrogation of scientific method is unabashed and blatant.
And the best disinfectant for this deadly virus is sunlight. The more of it these “scientists” are exposed to, the quicker we can dispense with the Thermageddon politics.

May 17, 2011 6:06 am

To think that a scientist could be this delusional is not only sad; that he is accepted as sane by his peers is madness. The belief system these people carry is a two-edged sword with no handle: no matter which way it falls, it will hurt. Time and tide belong to no man, and I would suggest neither does the weather. That we can control the weather by taxing CO2 has to be the greatest scam in the history of the world. Good grief!!

May 17, 2011 6:16 am

Steve from Rockwood says: “I’ll let you finish the story because I’m not very good at endings.” How about, “Gee whiz, we don’t know squat.”

Francisco
May 17, 2011 6:19 am

Here is another load of quack science and fakery dumped on us from above.
[…] TSA faked its safety data on its X-ray airport scanners. […] We now live in an age where the federal government simply fakes whatever documents, news or evidence it wants people to believe, then releases that information as if it were fact.
http://www.naturalnews.com/032425_airport_scanners_radiation.html
[…]
The evidence of the TSA’s fakery is now obvious thanks to the revelations of a letter signed by five professors from the University of California, San Francisco and Arizona State University. You can view the full text of the letter at: http://www.propublica.org/documents
[…]
From the letter we learn:
• To this day, there has been no credible scientific testing of the TSA’s naked body scanners. The claimed “safety” of the technology by the TSA is based on rigged tests.
• The testing that did take place was done on a custom combination of spare parts rigged by the manufacturer of the machines (Rapidscan) and didn’t even use the actual machines installed in airports. In other words, the testing was rigged.
• The names of the researchers who conducted the radiation tests at Rapidscan have been kept secret! This means the researchers are not available for scientific questioning of any kind, and there has been no opportunity to even ask whether they are qualified to conduct such tests
[…]
• The final testing report produced from this fabricated testing scenario has been so heavily redacted that “there is no way to repeat any of these measurements,”
[…]
• The dose rates of X-rays being emitted by the Rapidscan machines are actually quite high — comparable to that of CT scans, say the professors. Yes, the dose duration is significantly lower than a CT scan, but the dose intensity is much higher than what you might think. And as anyone who knows a bit about physics and biology will tell you, the real danger from radiation is a high-intensity, short-duration exposure. That’s exactly what the TSA’s backscatter machines produce.
• The radiation detection device used by Rapidscan to measure the output of the machines — an ion chamber — is incapable of accurately measuring the high-intensity burst of radiation produced by the TSA’s naked body scanners, say the professors.
• At the same time, the radiation field measurement device used by the TSA — a Fluke 451 instrument — is incapable of measuring the high dose rates emitted by backscatter machines. The measurement devices, in effect, “max out” and cannot measure the full intensity of the exposure. Thus, the TSA’s claims of “low radiation” are actually fraudulent.
[…]
• The amount of electrical current applied to the X-ray tubes has been redacted by the TSA (working with Johns Hopkins). This makes it impossible for third-party scientists to accurately calculate the actual radiation exposure, and it hints at yet more evidence of a total TSA cover-up. As explained by the professors:
…the X-ray dose is proportional to the current through the X-ray tube. Not having access to the current used in the JHU test, or in the field application of the scanner means that the measurements at JHU are irrelevant to the dose at the airport. There is also no data on the pixel size and overscanning ratio, which also bear directly on the dose delivered to subjects. The statement in the HHS letter that the fluence is not a relevant quantity ignores fundamental physics.
[…]
There shall be no independent testing whatsoever.
The TSA adamantly refuses to allow independent testing of the radiation levels being emitted by the machines.
[…]
Actual radiation emitted by the machines is far higher than what the TSA claims.
John Sedat, a professor emeritus in biochemistry and biophysics at UCSF and the primary author of the letter says, “..the best guess of the dose is much, much higher than certainly what the public thinks.” This indicates the public has been deeply misled by the actual amount of radiation emitted by the machines.
• Peter Rez, the physics professor from Arizona State, says that the high-quality images described by the TSA could not be produced with the low levels of radiation being claimed by the TSA. The images, in other words, don’t match up with the TSA’s cover story. Rez estimates the actual radiation exposure is 45 times higher than what we’ve previously been told.
• The TSA machines are capable of firing even higher levels of radiation into a “region of interest” (such as your anus or scrotum, in which the TSA seems to be taking great interest these days), thereby exposing that region to even higher levels of radiation than the rest of your body.

Andy Wehrle
May 17, 2011 6:29 am

Is there a link to the presentation made by Henrik Svensmark?

G. Karst
May 17, 2011 6:29 am

I OBSERVE… THEREFORE I AM

Wayne
May 17, 2011 6:40 am

The key here is that a model is something that he created, while observational data is only something that he observed. Ego trip.
Or, if we are being charitable, perhaps he was trying to quote “Without models there is no learning” (can’t find a reference, but I’ve heard it somewhere and in context I believe it’s true).

May 17, 2011 6:45 am

Francisco,
Fascinating comments, thanks. The TSA has plenty to hide. They’re not protecting us, they are endangering us. El-Al airlines doesn’t play these x-ray games, and when is the last time an Israeli airliner was hijacked or flown into a building? The only countermeasure that really works is profiling.

izen
May 17, 2011 6:47 am

@- He was followed by the most impressive presentation from Henrik Svensmark, whose presentation stood out head and shoulders above anyone else. Why? For two reasons. The correlations he shows are remarkable, and don’t need curve fitting, or funky statistical tricks. And he has advanced a mechanism, using empirical science [image above], to explain them.
I suspect this comment is intended to be ironic?
After all Svensmark is notorious for dubious statistical methods to obtain a correlation in the graphs first used to justify the CERN experiments, and clearly no correlation is possible between the last ~50 years of stable or falling solar output and GCR flux.
And while the mechanism may be ’empirical’, as yet there is no empirical evidence of it, and substantial evidence that cloud nucleation is provided by other empirical processes.
The claim advanced by some that a consistently rising temperature trend is compatible with a stable or falling solar output/GCR flux because the higher absolute output, and lower resultant cloud cover is still acting to warm the oceans that have not reached equilibrium is unsupported by any empirical evidence, and must therefore be an example of where empirical evidence is inferior to modeling arguments.
But it has the implication that if the additional energy from the reduced cloud cover during the high solar activity period is STILL warming the planet because the oceans have not reached equilibrium then the additional energy from the DLR from the increased CO2 will ALSO continue to warm the atmosphere for ~30 years after it has ceased to increase when emissions are reduced.

May 17, 2011 7:03 am

izen says:
“The claim advanced by some that a consistently rising temperature trend is compatible with a stable or falling solar output/GCR flux because the higher absolute output, and lower resultant cloud cover is still acting to warm the oceans that have not reached equilibrium is unsupported by any empirical evidence, and must therefore be an example of where empirical evidence is inferior to modeling arguments.”
Empirical [real world] evidence is never inferior to modeling arguments.

JasonS
May 17, 2011 7:13 am

I luv the rapid equilibrium finger-trap Lockwood got himself caught in! Hilarious.

Frank
May 17, 2011 7:42 am

Maybe we should all model our income for the IRS and get away from misleading observational evidence of earnings.

Jeremy
May 17, 2011 7:48 am

“People underestimate the power of models. Observational evidence is not very useful,” — John Mitchell
Indeed, John, that’s why we spend hundreds of billions on satellites and barely a hundred million or so on computational power. Observational evidence is obviously not very useful when you consider that we continue to add to long chains of orbiting satellites pointing back at earth to gather this “observational evidence”.
I agree that the human eye can be fooled by evidence, but it’s the human part of that equation that does the fooling.