Models All The Way Down

Guest Post by Willis Eschenbach

A learned man was arguing with a rube named Nasruddin. The learned man asked “What holds up the Earth?” Nasruddin said “It sits on the back of a giant turtle.” The learned man knew he had Nasruddin then. The learned man asked “But what holds up the turtle”, expecting Nasruddin to be flustered by the question. Nasruddin simply smiled. “Sure, and as your worship must know being a learned man, it’s turtles all the way down …”

I’ve written before of the dangers of mistaking the results of the ERA-40 and other “re-analysis” computer models for observations or data. If we just compare models to models and not to data, then it’s “models all the way down,” not resting on real world data anywhere.

I was wondering where on the planet I could demonstrate the problems with ERA-40. I happened to download the list of stations used in the CRUTEM3 analysis, and the first one was Jan Mayen Island. “Perfect”, I thought. Middle of nowhere, tiny dot, no other stations for many gridcells in any direction.

Figure 1. Location of Jan Mayen Island, 70.9°N, 8.7°W. White area in the upper left is Greenland. Gridpoints for the ERA-40 analysis shown as red diamonds. Center gridpoint data used for comparisons.

How does the ERA-40 reanalysis data stack up against the Jan Mayen ground data?

Figure 2. Actual temperature data for Jan Mayen Island and ERA-40 nearest gridpoint reanalysis “data”. NCAR data from KNMI. Jan Mayen data from GISS.

It’s not pretty. The ERA-40 simulated data runs consistently warmer than the observations in both the summer and the winter. The 95% confidence intervals of the two means (averages) don’t overlap, meaning that they come from distinct populations. Often the ERA-40 data is two or more degrees warmer in the winter. But occasionally and unpredictably, ERA-40 is 3 to 5 degrees cooler in winter. Jan Mayen’s year-round average is below freezing. The average of the ERA-40 is above freezing. The annual cycle of the two, as shown in Figure 3 below, is also revealing.

Figure 3. Two annual cycles (Jan-Dec) of the ERA-40 synthetic data and Jan Mayen temperature. Photo Source

The ERA-40 synthetic data runs warmer than the observations in every single month of the year. On average, it is 1.3°C warmer. In addition, the distinctive winter signature of Jan Mayen (February averages warmer than either January or March) is not captured at all in the ERA-40 synthetic data.
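
For anyone who wants to reproduce this kind of month-by-month comparison, here is a minimal sketch of the climatology calculation. It is not the code behind the figures above, and the numbers in it are placeholders; in practice the two tables would be filled from the KNMI and GISS downloads.

```python
# Minimal sketch: monthly climatology of a station series vs. a reanalysis
# grid-point series. All numbers below are placeholders, not real data.
import pandas as pd

obs = pd.DataFrame({
    "year":   [2000, 2000, 2000, 2001, 2001, 2001],
    "month":  [1, 2, 3, 1, 2, 3],
    "temp_c": [-5.8, -5.2, -6.1, -6.4, -5.9, -6.6],   # station (placeholder)
})
model = pd.DataFrame({
    "year":   [2000, 2000, 2000, 2001, 2001, 2001],
    "month":  [1, 2, 3, 1, 2, 3],
    "temp_c": [-4.1, -3.8, -4.6, -4.9, -4.2, -5.0],   # grid point (placeholder)
})

# Keep only months present in both series, then average by calendar month
both = obs.merge(model, on=["year", "month"], suffixes=("_obs", "_model"))
clim = both.groupby("month")[["temp_c_obs", "temp_c_model"]].mean()
clim["bias"] = clim["temp_c_model"] - clim["temp_c_obs"]

print(clim.round(2))
print(f"Mean bias (model minus observations): {clim['bias'].mean():+.2f} °C")
```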

So that’s why I say, don’t be fooled by people talking about “reanalysis data”. It is a reanalysis model, and from first indications not all that good a reanalysis model. If you want to understand the actual winter weather in Jan Mayen, you’d be well-advised to avoid the ERA-40, or February will bite you in the ice.

The use of “reanalysis data” has some advantages. Because the reanalysis data is gridded, it can be compared directly to model outputs. It is mathematically more challenging to compare the model outputs to point data.

But that should be a stimulus to develop better mathematical comparison methods. It shouldn’t be a reason to interpose a second model in between the first model and the data. All that can do is increase the uncertainty.
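
One such method is to bring the gridded field to the station rather than the station to the grid: interpolate the model output to the station's coordinates and compare there. The sketch below illustrates the idea on a made-up grid; it is a generic illustration only, not how ERA-40 or any other reanalysis actually handles the problem.

```python
# Sketch: bilinear interpolation of a gridded temperature field to a station
# location. The grid and its values are invented for illustration.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

lats = np.arange(68.0, 74.1, 2.0)             # grid latitudes, deg N
lons = np.arange(-12.0, -3.9, 2.0)            # grid longitudes, deg E (negative = W)
field = np.random.default_rng(0).uniform(-8.0, 2.0, (lats.size, lons.size))  # dummy °C

interp = RegularGridInterpolator((lats, lons), field, method="linear")

station = np.array([[70.9, -8.7]])            # Jan Mayen coordinates from Figure 1
t_at_station = float(interp(station)[0])
print(f"Grid field interpolated to the station: {t_at_station:.2f} °C")
```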

In addition, because both models involved (various GCMs and the ERA-40) are conceptually related, both being current-generation climate models, we would expect the correlations to be artificially high. In other words, a model’s output is likely to fit another related model’s output better than it fits observational data. Data is ugly and has sudden jumps and changes. Computer model output is smooth and continuous. Which will fit better?
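
A toy illustration of that last point, with entirely synthetic numbers: two smooth series built on the same underlying trend correlate far better with each other than either does with a jumpy “observed” series.

```python
# Toy demonstration: smooth "model-like" series correlate better with each
# other than with a noisy "data-like" series. Synthetic numbers only.
import numpy as np

rng = np.random.default_rng(42)
t = np.linspace(0, 10, 200)
trend = 0.1 * t

model_a = trend + 0.2 * np.sin(t)                    # smooth model output
model_b = trend + 0.2 * np.sin(t + 0.3)              # smooth output of a related model
observations = trend + rng.normal(0, 0.5, t.size)    # ugly, jumpy real-world data

print("corr(model_a, model_b):      ", round(np.corrcoef(model_a, model_b)[0, 1], 2))
print("corr(model_a, observations): ", round(np.corrcoef(model_a, observations)[0, 1], 2))
```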

My conclusion? The ERA-40 is unsuited for the purpose of validating model results. Compare model results to real data, not to the ERA-40. Comparing models to models is a non-starter.

Regards to everyone,

w.

[UPDATE] Several people have asked about the sea surface temperatures in the area. Here they are:

Figure 4. As in Figure 2, but including HadSST sea surface temperature (SST) data for the gridcell containing Jan Mayen. SST data from KNMI

Figure 5. As in Figure 3, but including HadSST sea surface temperature (SST) data for the gridcell containing Jan Mayen. SST data from KNMI

Note that SST is always higher than the Jan Mayen temperature. This is not true for the ERA-40 reconstruction model output.

John V. Wright
March 8, 2011 2:33 am

Willis, if you did not exist we would have to invent you.

Jim Greig
March 8, 2011 2:41 am

“If your experiment needs statistics, you ought to have done a better
experiment.”
– Lord Ernest Rutherford

P Gosselin
March 8, 2011 2:46 am

Post-normal science involves comparing models to expectations, and not real measured data. Willis, keep bringing us the “normal” science.

P Gosselin
March 8, 2011 2:59 am

Some info and photo galleries here: http://www.jan-mayen.no/
Click on: “Meteorological station – The station”, left menu and you can see the Stevenson screen.

Pytlozvejk
March 8, 2011 3:14 am

@ John V. Wright. You are right, there is only one Willis, and the world would be a lot poorer without him.
Mind you, I’m a bit suspicious of that particular Nasruddin story. I’m a Nasruddin fan from way back, and I don’t recall that one. Which may just prove my ignorance, of course.
Just as an aside, if you walk down Swanston St Melbourne (VIC), you will find a Chinese Muslim dumpling place. Its English name is something bland like Swanston St Dumpling Restaurant, but its name in Chinese characters is Afanti. That’s the Chinese name for Nasruddin, derived from the Turkish word Effendi. I first made that correspondence in the northern winter of 1985/86, when I was in China studying to improve my Chinese. The teacher told us a few anecdotes about Afanti – looking for the lost key under the street-light, that sort of thing. A couple of us had delved into Sufism and comparative religion and stuff, so we asked “Isn’t Afanti just a variant of Nasruddin?” And eventually, after lots of discussion, we agreed that the answer was “yes”. I mention that mainly to show the enormous geographic and cultural spread of the Nasruddin stories. That very spread, in itself, pretty much negates my claim to know what might be, and what might not be, an authentic parable.

pesadia
March 8, 2011 3:25 am

Another very interesting offering Willis, grounded in common sense. There would appear to be no shortage of pieces which demonstrate the fragility of the science used to promote AGW.
Having said that, none of these revealing articles and explanations seem to be making any inroads on policy decisions.
I think that the science has now become secondary to the precautionary principle and would dearly love to see somebody, yourself or Judith Curry, look into this aspect of the debate.

Erik
March 8, 2011 3:35 am

John V. Wright says:
March 8, 2011 at 2:33 am
“Willis, if you did not exist we would have to invent you.”
————————————————————–
Yes but could We make him as fast and strong? – We don’t have the money, We don’t have the technology

Alan the Brit
March 8, 2011 3:36 am

Well done Willis!
When I had my two pints of bitter with my fellow chorister back in March ’10, who works at the Wet Office on climate science & puter modelling (including the notorious ash cloud models), to convince me of my evil ways about not believing in the new godless faith of CAGW, he could not refute one jot of the science I had quoted in a letter to the Parish magazine! He proceeded to describe the “abilities” of the models to reproduce past climates, (allegedly), & they produce the same temperature graph of the last 150 years that one could produce by hand if necessary, as if this was some kind of mystical magic. He seemed quite perturbed & disturbed that I pointed out that a computer programme has to be programmed to produce certain effects, by humans prone to error & misjudgement & bias, & that these models cannot produce these effects or likenesses of climate without being made to by their programmers, based on uncertainties & assumptions of how what interacts with what in the climate. He couldn’t grasp that simple fact at all. Just look for example at every attempt by scientists to establish a real link with Homeopathic medicines (no offence, I know they work for some). Every effort has stalled at the final hurdle because they produced the result they wanted, not what actually occurred! These were good people, not fools, but they fell for the bias trap by accident! AND they were picked up by statisticians to boot!

March 8, 2011 3:43 am

“Compare model results to real data, not to the ERA-40.”
Aww – ‘real data’, nooo, that can’t be done! Modelers would actually have to leave their well-ventilated computer rooms and go outside where there’s the chance they could meet real weather – far too dangerous!
And anyway, real data have this nasty habit of not agreeing with the shiny, smooth models. Best get rid of real data, no?
/very heavy sarc!
“Comparing models to models is a non-starter.”
Oh no it’s not!
Not, that is, if one’s a climate ‘scientist’, working one’s butt off to get the next tranche of funding, while pal-reviewing papers and getting one’s own through to publication in Nature!
/more heavy sarc …
One would like to know why climate scientists seem to be so unwilling to leave their labs. There are loads of proper, high tech clothes for surviving in the deep freeze which they surely can afford? I mean, if amateurs manage to climb Everest, they ought to be able to visit Jan Mayen, which is at sea level, no? Or are they scared of meeting some irate poley bears?
As soon as I read the title of this post, I knew it would be by you, Willis!
😉

jheath
March 8, 2011 3:44 am

Of course, to judge by my eyeballing, a cynic might wonder why winter temperatures under ERA-40 are only cooler prior to 1980 and the impact of CO2 hypothesis requires a warming trend from that time.

amicus curiae
March 8, 2011 3:45 am

if the result doesnt match their theory, they rewrite the data to make it so…
science? what science?

stan
March 8, 2011 3:47 am

Willis,
You are very inconvenient for those credentialed experts who are your betters.

SteveE
March 8, 2011 3:47 am

How big is the grid cell you are comparing with the point data for Jan Mayen Island?
As it’s an island would you expect the sea surrounding the island to be warmer or cooler than the island itself?
I’d expect it to be warmer and as a result I’d expect the averged value for the whole grid cell to be warmer than the point data for the island.
I’d be interested to hear your thoughts on this matter.

Beth Cooper
March 8, 2011 3:52 am

As someone else said somewhere(?) ‘This is so pre post-normal.’
Willis, thanks once again for highlighting the relationship of theory to its data.

slow to follow
March 8, 2011 3:52 am

“It’s turtles all the way down” has the ring of a catch phrase about it – sort of like “It’s worse than we thought”…
Will it find its way into the lexicon in the same way?! 🙂

John Johnston
March 8, 2011 4:12 am

To echo a comment in the earlier post about the forthcoming Spectator Global Warming debate, they should have invited you, Willis.
The only way to develop useful predictive models is to look for ways to falsify their output. That is, to validate their output by comparing it to real data. It seems that is the very last thing climate modellers want to do.

Admin
March 8, 2011 4:13 am

This famous email: http://www.eastangliaemails.com/emails.php?eid=419
Has an interesting exchange that is relevant here.
From: Phil Jones

To: “Michael E. Mann”
Subject: HIGHLY CONFIDENTIAL
Date: Thu Jul 8 16:30:16 2004
Mike,
Only have it in the pdf form. FYI ONLY – don’t pass on. Relevant paras are the last 2 in section 4 on p13. As I said it is worded carefully due to Adrian knowing Eugenia for years. He knows the’re wrong, but he succumbed to her almost pleading with him to tone it down as it might affect her proposals in the future ! I didn’t say any of this, so be careful how you use it – if at all. Keep quiet also that you have the pdf.
The attachment is a very good paper – I’ve been pushing Adrian over the last weeks to get it submitted to JGR or J. Climate. The main results are great for CRU and also for ERA-40. The basic message is clear – you have to put enough surface and sonde
obs into a model to produce Reanalyses. The jumps when the data input change stand out so clearly. NCEP does many odd things also around sea ice and over snow and ice.
The other paper by MM is just garbage – as you knew. De Freitas again. Pielke is also losing all credibility as well by replying to the mad Finn as well – frequently as I see it.
I can’t see either of these papers being in the next IPCC report. Kevin and I will keep them out somehow – even if we have to redefine what the peer-review literature is !
Cheers
Phil
Mike,
For your interest, there is an ECMWF ERA-40 Report coming out soon, which shows that Kalnay and Cai are wrong. It isn’t that strongly worded as the first author is a personal friend of Eugenia. The result is rather hidden in the middle of the report.
It isn’t peer review, but a slimmed down version will go to a journal. KC are wrong because the difference between NCEP and real surface temps (CRU) over eastern N. America doesn’t happen with ERA-40. ERA-40 assimilates surface temps (which NCEP didn’t) and doing this makes the agreement with CRU better. Also ERA-40’s trends in the lower atmosphere are all physically consistent where NCEP’s are not – over eastern US.

Domenic
March 8, 2011 4:13 am

Hide the Nitrogen and Oxygen!
That’s the game they are playing.
I awoke this morning and suddenly realized that AGW warmist scientist believers in general, are playing the same game Mann, et al, engaged in with their ‘Hide the Decline!”
I posted this on Ira’s “Visualizing ‘Greenhouse effect’…” topic a few days ago, and watched to see how various readers here would respond to it.
——————————–
Domenic says:
March 2, 2011 at 7:22 am
Are the data in HITRAN observed or calculated?
The parameters in HITRAN are sometimes direct observations, but often calculated. These calculations are the result of various quantum-mechanical solutions. The goal of HITRAN is to have a theoretically self-consistent set of parameters, while at the same time attempting to maximize the accuracy. References for the source are included for the most important parameters on each line of the database.
http://www.cfa.harvard.edu/hitran/
The whole basis for the AGW argument rests on their misunderstanding of the true ‘greenhouse effect’. N2 and O2 are indeed among the ‘greenhouse gases’. ALL component gases of the atmosphere contribute to the ‘greenhouse effect’.
It is because the AGW proponents ignore that fact that the minuscule effects of a trace gas like CO2 are blown way out of proportion and they make absurd assumptions, claims, and predictions.
For example:
http://www.realclimate.org/index.php/archives/2005/04/water-vapour-feedback-or-forcing/
Gavin Schmidt’s article here shows the complete fiction, fabrications of people completely deluded and ignorant of the basic sciences.
Gavin noted his puzzlement over Lindzen’s observation here:
“So where does the oft quoted “98%” number come from? This proves to be a little difficult to track down. Richard Lindzen quoted it from the IPCC (1990) report in a 1991 QJRMS review* as being the effect of water vapour and stratiform clouds alone, with CO2 being less than 2%. However, after some fruitless searching I cannot find anything in the report to justify that (anyone?). The calculations here (and from other investigators) do not support such a large number and I find it particularly odd that Lindzen’s estimate does not appear to allow for any overlap.”
If Gavin had even a tiny background in heat transfer, and thermal properties, he would have recognized that the 98% H2O, 2% CO2 numbers come from their relative specific heat capacities times their respective amounts in the atmosphere.
I actually think Lindzen was being generous regarding CO2 effects here. To me, the more accurate number would be
CO2 0.05%
ALL other ‘greenhouse gases 99.95%.
CO2 literally can’t absorb enough heat to do much of anything even if it doubles or triples or quadruples….”
———————————-
The whole argument of the AGW scientists rests on ignoring the largest, by far, components of the atmosphere, thus the largest components of the ‘greenhouse effect’.
This needs to be pointed out.
Most people now recognize Mann’s “Hide the Decline!”…
Now it is time to point out their other delusion: “Hide the N2 and O2!”
A call for papers and articles….
I think from this point of view we can capture the imagination of readers and drive the debate towards a real truth.

Dave in Delaware
March 8, 2011 4:43 am

I have heard the comment that the models can’t be independently evaluated against historical data, because they have ‘used up’ all the data in tuning the model.
But – it seems they didn’t – they used up the homogenized ‘cheese spread’ synthetic data. The REAL data is still out there waiting to be used for comparison.
Comparing model results to real data can’t be that hard. Back in grad school days, we would establish comparison points to determine goodness of fit, Delta = (Calculated – Actual), and sum the Deltas over all comparison points. To avoid the ‘overs’ from canceling the ‘unders’, it is common to compute the square of the difference, (Calculated – Actual) squared, then add up the totals for the squared Deltas across all comparison points. For a climate model, the comparison points would be the weather station locations, compared to the model ‘grid result’ at that location. The goal, of course, is to get that total difference as low as possible.
Some cells may not have actual station data, but that is not critical to the goodness of fit evaluation; comparison where there is real station data is the key. This type of evaluation would also open up comparisons like Model vs Rural stations, or Model vs Airports, or Model vs all Northern Hemisphere, and even Model versus other Models. If climate modelers have not yet done this, why not? If they have (and I am betting someone has), where are those results published? Seems to me this is (or should be) standard practice for Model evaluation.
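
A bare-bones sketch of the sum-of-squared-deltas measure Dave describes is below. The station names and numbers are placeholders, and the observed and modelled series are assumed to have been aligned in time already.

```python
# Sketch of the (Calculated - Actual)^2 goodness-of-fit measure described above.
# `stations` maps a station name to (observed, modelled) series that are already
# aligned in time; everything here is placeholder data.
import numpy as np

stations = {
    "Jan Mayen":  (np.array([-5.1, -6.0, -6.2]), np.array([-3.8, -4.1, -4.9])),
    "Station B":  (np.array([ 2.3,  3.1,  1.8]), np.array([ 2.0,  3.4,  2.2])),
}

total_sq_error = 0.0
for name, (obs, model) in stations.items():
    delta = model - obs
    sse = float(np.sum(delta ** 2))   # squared deltas, so overs don't cancel unders
    total_sq_error += sse
    print(f"{name:12s} mean delta {delta.mean():+.2f} °C   sum of squares {sse:.2f}")

print(f"Total squared error over all comparison points: {total_sq_error:.2f}")
```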

Pete H
March 8, 2011 5:01 am

SteveE says:
March 8, 2011 at 3:47 am
“I’d expect it to be warmer and as a result I’d expect the averged (sic) value for the whole grid cell to be warmer than the point data for the island.”
Yep Steve, god forbid we should use an actual thermometer reading without playing around with the reading! Please explain to me how you extrapolate to adjacent grid cells and the ones adjacent to them when you have no idea of the local sea or air temperature! Have they been out there and checked the areas in the adjacent cells before to get a figure to work from?
I am not being facetious, this thing still does not sink in for me! In my mind it is pure guesswork, and that is without a human getting the figure wrong when inputting data or some sensor going tits up!

SteveE
March 8, 2011 5:11 am

Dave in Delaware
Grid cell data is not point data and so a direct comparison isn’t an easy thing as this article shows.
If you compare one point on land to a grid cell that includes 10s of square km of ocean you’d expect there to be a difference, as the graphs clearly show. The grid cell is an average of the whole area and so won’t reflect the exact value for the point data.
The averaging will remove the small-scale heterogeneity in an area like this, however the larger-scale picture will still be valid. The key is looking at the scale you are modelling to try and keep the heterogeneity that is important. In this case the tiny island of Jan Mayen isn’t important when trying to model global temperatures.

Alberta Slim
March 8, 2011 5:18 am

@Anthony Watts says:
March 8, 2011 at 4:13 am
Thanks Anthony.
The proverbial smoking gun e-mails, yet whitewashed at any inquiry.
The CAGW fanatics will just ignore things like this.
Pointman has a very good comment on “fanatics”
see: http://thepointman.wordpress.com/2011/03/04/some-thoughts-on-fanatics-and-how-to-fight-them/

Admin
March 8, 2011 5:22 am

I’ve located a photo of the weather station, this building appears to be the Meteorological Office:

Note the tall garage doors for filling then release of weather balloons.
Here’s the actual instruments. Note the albedo of the ground cover and the two cinder block heat sinks to keep the step from blowing away in the summer:

More here: http://www.jan-mayen.no/met-stn.htm
I did note that they say: “The Meteorological Station or the “Met” is located 3 km. north of Olonkin City. ” which is a good thing since it gets it away from the mini UHI there.
The weather station(s) of Jan Mayen have quite a colorful history according to Wikipedia, including Nazis, shipwrecks, and volcanoes. It reads like a movie plot:
==========================================================
The League of Nations gave Norway jurisdiction over the island, and in 1921 Norway opened the first meteorological station.[9] The Norwegian Meteorological Institute annexed the island for Norway in 1922. On 27 February 1930, the island was made de jure a part of the Kingdom of Norway.
During World War II, continental Norway was invaded and occupied by Germany in spring 1940. The four man team on Jan Mayen stayed at their posts and in an act of defiance began sending their weather reports to Great Britain instead of Norway. The British codenamed Jan Mayen Island X and attempted to reinforce it with troops to counteract any German attack. The Norwegian gunboat Fridtjof Nansen ran aground on one of the islands’ many uncharted lava reefs and the 68 man crew abandoned ship and joined the Norwegian team on shore. The British expedition commander, prompted by the loss of the gunboat, decided to abandon Jan Mayen until the following spring and radioed for a rescue ship. Within a few days a ship arrived and evacuated the four Norwegians and their would-be reinforcements after demolishing the weather station to prevent it from falling into German hands. The Germans attempted to land a weather team on the island on 16 November 1940. The German naval trawler carrying the team crashed on the rocks just off Jan Mayen after a patrolling British destroyer had picked them up on radar. Most of the crew struggled ashore and were taken prisoner by a landing party from the destroyer.[9]
The Allies returned to the island on 10 March 1941, when the Norwegian ship Veslekari, escorted by the patrol boat Honningsvaag, dropped 12 Norwegian weathermen on the island. The team’s radio transmissions soon betrayed its presence to the Axis, and German planes from Norway began to bomb and strafe Jan Mayen whenever weather would permit it, though they did little damage. Soon supplies and reinforcements arrived and even some antiaircraft guns, giving the island a garrison of a few dozen weathermen and soldiers. By 1941, Germany had given up hope of evicting the Allies from the island and the constant air raids stopped.
On 7 August 1942, a German Focke-Wulf Fw 200 “Condor”, probably on a mission to bomb the station, smashed into the nearby mountainside of Danielsenkrateret in fog, killing all 9 crewmembers.[10] In 1950, the wreck of another German plane with 4 crew members was discovered on the southwest side of the island.[11] In 1943, the Americans established a radio locating station named Atlantic City in the north to try to locate German radio bases in Greenland.
After the war, the meteorological station was located at Atlantic City, but moved in 1949 to a new location. Radio Jan Mayen also served as an important radio station for ship traffic in the Arctic Ocean. In 1959, NATO decided to build the LORAN-C network in the Atlantic Ocean, and one of the transmitters had to be on Jan Mayen. By 1961, the new military installations, including a new airfield, were operational.
For some time, scientists doubted if there could be any activity in the Beerenberg volcano, but in 1970 the volcano erupted, and added another three square kilometres (1.2 sq mi) of land mass to the island during the three to four weeks it lasted. It had more eruptions in 1973 and 1985.
During an eruption, the sea temperature around the island may increase from just above freezing to about 30 degrees Celsius (86°F).
========================================================
Wow, volcanic temperature spikes!

SteveE
March 8, 2011 5:33 am

Anthony Watts
Thanks for that Anthony.
As the last paragraph says; “the sea temperature around the island may increase from just above freezing” which is what the average for the modelled data is, +0.3C.
The point data for the island that is covered in snow and ice is unlikely to be representative of the whole grid cell, which is modelled as composed mostly of ocean.

pablo an ex pat
March 8, 2011 5:41 am

“There are three kinds of lies. Lies, d*mn lies and statistics”
Benjamin Disraeli

1DandyTroll
March 8, 2011 5:42 am

John V. Wright
“Willis, if you did not exist we would have to invent you.”
Wouldn’t that be rather counterintuitive, for he’d just be artificial, much like a computer model, so he’d be constantly off the charts and therefore much in line with the rest of the computer models out there, you know, groupthink and all that. :p

Editor
March 8, 2011 5:50 am

Erik says:
March 8, 2011 at 3:35 am
John V. Wright says:
March 8, 2011 at 2:33 am
“Willis, if you did not exist we would have to invent you.”
————————————————————–
Yes but could We make him as fast and strong? …

Perhaps we could just write a model.

March 8, 2011 5:54 am

Simple explanations that make a scientific truth self-evident for everyone are the best ones. Feynman was the master and Willis is in the same league. “If you can’t explain something to a first year student, then you haven’t really understood it.” Also, note his famous Space Shuttle demonstration with ice water and the O-ring. Willis always seems to clear out all the obfuscation, smoke and mirrors and flimflam when he explains something to us. Perhaps that should be ‘smoke and mainframes’…
“Turtles all the way down” reminded me of reading Ken Wilber, which features the following version:
Ken Wilber, from A Brief History of Everything:
There’s an old joke about a king who goes to a Wiseperson and asks how it is that the world does not fall down.
The Wiseperson replies, “The Earth is resting on a lion.” “On what, then, is the lion resting?” “The lion is resting on an elephant.” “On what is the elephant resting?” “The elephant is resting on a turtle.” “On what is the turtle resting?” “You can stop right there, Your Majesty. It’s turtles all the way down.”
Turtles all the way down, holons all the way down. No matter how far down we go, we find holons resting on holons resting on holons. Even subatomic particles disappear into a virtual cloud of bubbles within bubbles, holons within holons, in an infinity of probability waves. Holons all the way down.

Tenuc
March 8, 2011 6:08 am

Thanks Willis for a telling post. Current climate scientists would rather rely on models than the horribly spiky and uncooperative real data that is observed. So using one or more sets of model data to confirm the output of the final model gives results more in line with theory. The fact that observational data refute this is seen as a travesty of our inability to do real world observation right.
Bit like modern physics really, where one has to be able to suspend all you know from your senses to even start to understand what the current ‘dream of the day’ really means, but of course it must be correct because a convoluted bunch of maths proves it.
Science has got itself into a really bad state!

March 8, 2011 6:19 am

Reanalysis. Homogenisation. Why don’t they call it for what it is? Fraudulent!

March 8, 2011 6:21 am

Willis, nice one! Crystal clear and succinct, as usual. The NZ Maori version of Nasruddin is Maui, a sort of Polynesian imp/everyman who features in all the important myths there.
I suspect Maui may have had a sneaky hand in helping the NZ Climate Coalition to best NIWA’s UEA-related myths about how the temperatures had mysteriously risen over the last century in NZ while nobody was looking.

March 8, 2011 6:27 am

SteveE says: UMMM Steve, from what I read in that last paragraph the volcano heats the surrounding water to 30C. You might want to re-read that.

Chris Smith
March 8, 2011 6:32 am

How easy is it to write a program to check the reanalysis against each station data to try to locate and rank the worst offenders for consistent +/- bias [I suppose in terms of the mean difference as a fraction of the standard deviation of the measured data?]?
The purpose of the reanalysis is to interpolate the data onto a uniform grid. Are there bound to be these types of differences at the measured points, or are there interpolation methods which can make sure that the interpolation field values converge to the data at the locations of the measurements? I wonder how easy it is to try to do that.
To be fair to the re-analysers, I suspect it is not an easy task at all – and they cannot force AGW proponents to emphasise the technical issues when the data is used as though it were observational data. I reckon that the documentation/publications/presentations from the guys who do the re-analyses might mention the technical difficulties and biases that come out from their imperfect and developing methods.
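
A minimal sketch of the check Chris suggests, scoring each station's reanalysis bias as the mean difference expressed as a fraction of the standard deviation of the measured data and ranking the worst offenders; the station names and numbers here are placeholders, not real data.

```python
# Sketch: rank stations by reanalysis bias, measured as
# mean(model - obs) / std(obs). Placeholder data only.
import numpy as np

def bias_score(obs, model):
    """Mean (model - obs) as a fraction of the observed standard deviation."""
    obs, model = np.asarray(obs, float), np.asarray(model, float)
    return (model - obs).mean() / obs.std(ddof=1)

stations = {
    "Jan Mayen": ([-5.1, -6.0, -4.2, -3.9], [-3.8, -4.1, -3.0, -2.5]),
    "Station B": ([ 1.0,  2.0,  1.5,  1.2], [ 1.1,  1.9,  1.6,  1.3]),
}

ranked = sorted(stations.items(),
                key=lambda kv: abs(bias_score(*kv[1])), reverse=True)
for name, (obs, model) in ranked:
    print(f"{name:12s} bias = {bias_score(obs, model):+.2f} standard deviations")
```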

Grant Hillemeyer
March 8, 2011 6:36 am

SteveE says..,
The water temperature will be different from the air temps. Anthony states sea water temps as just above freezing, not air temps. Is the air at the station on the island warmer than the air over the water surrounding it? I don’t think you can assume it is.

Steinar Midtskogen
March 8, 2011 6:37 am

I suspect that the Jan Mayen weather is difficult to model. The surrounding ocean sometimes freezes, but the ice might be absent for many years, like in recent years. The climate is harsh. Wind speeds up to 70 m/s have been recorded. Instruments get packed with snow and even sand. The surrounding mountains cause local wind and Föhn effects. The weather station has been moved 9 times since 1921.
There are descriptions and some metadata for the station in “Stasjonshistorie for norske meteorologiske målinger i arktis” by E. Steffensen, P.Ø. Nordli, I. Hanssen-Bauer. I can translate excerpts if anyone is interested. I can be reached at steinar@latinitas.org.

SteveE
March 8, 2011 6:42 am

paulID says:
March 8, 2011 at 6:27 am
SteveE says: UMMM Steve from what I read on that last paragraph the volcano heats the surrounding water to 30C you might want to re-read that.
————-
It says “During an eruption, the sea temperature around the island may increase from just above freezing to about 30 degrees Celsius (86°F).”
In other words the water temp is just above freezing before the eruption. Therefore it’s not surprising that the point data is not representative of the grid cell data. The grid cell includes large amounts of ocean, which would increase the average temperature of the grid cell in the winter, which is exactly what is observed on the graph.
Willis’ analysis is flawed as he’s comparing point data and averaged grid cell data and surprised that they don’t match.
Big FAIL.

Jit
March 8, 2011 6:46 am

@ SteveE
So you think the model better fits sea temperatures around Jan Mayen? Possibly, but I have doubts.
For example, the modeled temps hit -15 which is impossible (unless Jan Mayen becomes ice-bound). The averages hit -4 each year which means the sea freezes over every year (maybe it does, but I don’t think so). Also the range in temperatures is 10 C, which seems large.
Of course we don’t want to confuse sea temperatures with air temperatures – but I would like to see real air temps over the sea near Jan Mayen as well before making a conclusion. It could be that the model fits better to Jan Mayen than the surrounding ocean.

March 8, 2011 6:59 am

There are certain things you can model on a computer, as long as you have a known and acceptable error margin and there’s some way to validate what it’s predicting against real world data. Climate is very definitely not one of those things.
http://thepointman.wordpress.com/2011/01/21/the-seductiveness-of-models/
Pointman

Pamela Gray
March 8, 2011 7:01 am

Love the “cheese spread” comment. This Christmas my man and I tried eating the IMITATION “cheese product”. Yes, you read that right. It was an imitation of a cheese product. It didn’t taste like cheese and it didn’t melt like Velveeta. Sounds like the computer product that was developed from Hansenized Jone-ish imitation data and then used to check the model output. When compared to real data, you find yourself wondering exactly which part of the cow that product came from. For sure, it had none of mother’s milk in it.

SteveE
March 8, 2011 7:02 am

Grant Hillemeyer says:
March 8, 2011 at 6:36 am
http://wattsupwiththat.com/reference-pages/atmosphere/
Have a look at the Global – Two Meter Temperature, particularly around islands such as Iceland and Svalbard and tell me which is warmer, the area over the sea or the area over the land.

tty
March 8, 2011 7:03 am

The weather station is 10 m. a. s. l. and quite close to the coastline:
http://dokipy.met.no/projects/iaoos-norway/janmayen-stasjon2.jpg
I would expect the temperature to be very close to the SST in the area. If anything it should be slightly higher in summer since the area is dark lava which absorbs sunlight.
Admittedly the station is close to the airstrip, but since it is a gravel strip with similar albedo to its surroundings and there are only 8 flights a year to the island, this is hardly a major concern.

Richard Sharpe
March 8, 2011 7:12 am

Willis, this:

“Sure, and as your worship must know being a learned man, it’s turtles all the way down …”

reminds me of Joel Shore. Here is why. Joel is always claiming that sceptics are creationists.
A prominent scientist is said to have been giving a rebuttal to normal creationist arguments and was expounding on the creation of life and perhaps the earth. An old man stood up in the back of the audience to ask a question, and when his turn came, he said: “I don’t believe this malarkey! Don’t you know that the world is held up by a giant turtle?”
The scientist responded with: “So, what is holding up the turtle?”
The old man then said: “You are very clever, young man, but it’s turtles all the way down!”
I suspect that this is one of those apocryphal stories that goes around. It has a certain appeal.

SteveE
March 8, 2011 7:19 am

Hi Jit,
I’m suggesting that the model has a better fit for the whole grid cell rather than just the land based measurements that are shown in the graphic. The air over the ocean would be warmer than that over the land, as you can see from the link below; you can see the difference along the Aleutian islands off of Alaska. Over the sea the temps are warmer than over the land. When you average this out over the whole grid cell you get a warming bias when compared to purely land based measurements.

Paul Linsay
March 8, 2011 7:24 am

They could generate temperature maps of the lower troposphere and compare directly to the satellite measurements which are gridded, but for some reason they don’t.

March 8, 2011 7:25 am

SteveE wrote:
“Grid cell data is not point data and so a direct comparison isn’t an easy thing as this article shows.
If you compare one point on land to a grid cell that includes 10s of square km of ocean you’d expect there to be a difference, as the graphs clearly show. The grid cell is an average of the whole area and so won’t reflect the exact value for the point data.
The averaging will remove the small-scale heterogeneity in an area like this, however the larger-scale picture will still be valid. The key is looking at the scale you are modelling to try and keep the heterogeneity that is important. In this case the tiny island of Jan Mayen isn’t important when trying to model global temperatures.”
and
“As the last paragraph says; “the sea temperature around the island may increase from just above freezing” which is what the average for the modelled data is, +0.3C.
The point data for the island that is covered in snow and ice is unlikely to be representative of the whole grid cell, which is modelled as composed mostly of ocean.”
This is a valid point with regard to Jan Mayen island, which sparked my curiosity about the actual grid size of the ERA-40 grid. The link to this paper
http://goo.gl/0Jk4G
suggests that the ERA-40 grid size is 100km x 100km.
So I think that the key question is how well is the ocean temperature known to within such a grid size as the oceans constitute about 71% of the earth’s surface?
The completed ARGO ocean temperature/salinity measurement system
Wiki: http://goo.gl/7v2Kx
has a nominal gridding of about 300km,
Wiki: http://goo.gl/sED5J
the actual distribution being quasi-random as the sensors are floating about and are carried by ocean currents.
Wiki: http://goo.gl/JeOx8
So one question that arises is what are the systematic errors in interpolating down from a 300 km grid to a 100 km grid, a not insignificant factor of nine in grid area.
As the ARGO project was completed in Nov 2007, it’s unlikely that the ERA-40 model has much merit, if any, the further back in time one goes before this completion date, as it involves interpolating over 1000’s of km of ocean without taking effects of local variation in ocean temperature (ocean currents, interactions with the atmosphere, etc) into account.
From the above two points, I suspect that much of ERA-40 is GIGO.

SteveE
March 8, 2011 7:28 am

http://www.coaps.fsu.edu/~maue/extreme/gfs/current/x_t2m_012.png
Forgot to include the link on my previous post.

Dave in Delaware
March 8, 2011 7:35 am

SteveE says: March 8, 2011 at 5:11 am
Grid cell data is not point data and so a direct comparison isn’t an easy thing as this article shows. … The grid cell is an average of the whole area and so won’t reflect the exact value for the point data.
True enough, but my observation was intended to be general, not specific to this island. Where did the ERA-40 reanalysis data come from? It was “homogenized up” from the point data, yes? So it is legitimate to estimate grid area from point data, but not to re-check climate model grid calculations back to that point data?
I am proposing that we close the loop:
1) Point data -> Area grid
2) Grid values -> Climate Model
3) Climate Model calculated value -> Point Data
The Harry_Read_Me file should have convinced you that there may be significant errors introduced even at step 1). Poor Harry was not even able to re-create the previous version from his predecessors’ programs and data.
At Step 3), there must be some reason to expect the point and calculated grid values to be at least somewhat related (::chuckle::). After all, GISS extrapolates Arctic data out 1200 km. Surely it is less of a stretch to compare inward within a grid cell? There may even be value in comparing to each point value where there are multiple points within a grid. The beauty of the multi-point comparison is, some may be higher, some lower, some even out of phase, but the overall comparison gives you a numerical yardstick for ‘how am I doing’.

Pascvaks
March 8, 2011 7:38 am

The world of humans is becoming confused with numerous paradigms. It’s as if no one actually speaks the same language anywhere on the globe, or even on the ISS. Have we reached too high? Is someone up there mad at us? Or.. perhaps… maybe we have exceeded our own limits, and no one even noticed we were in the Land of Babel once again. One day I fear we will walk away from all that we have built, dazed, confused, dejected. And the sorry part was we were soooooo close. (Well… closer than the last time anyway;-)
PS: I know! You can’t understand what I’m talking about.

Scott Covert
March 8, 2011 7:49 am

Since the Thor of Norse mythology and the Thor of Marvel Comics have strong correlation, Thor exists.
We have metadata from two sources. The Marvel data only goes back to 1962 but the Norse Data spans back to pre-industrial times.

izen
March 8, 2011 7:50 am

As others have observed a comparison is being made between the model results for a grid square and measurements from a small island within that grid square.
It is not likely that the grid square REAL data match the island data; a better comparison may have been with the satellite data for the grid square and the model.
The range of annual variation seems to be quite closely matched, 11degC in the real data, and a 0.3deg average; there are actually several averages for different periods around for the Jan Mayen island data, and the early measurements by the Austrians in the 1800s are by far the lowest. The actual data certainly confirms the warming, and its magnitude over the global increase, as predicted from AGW theory that higher latitudes will show greater effect.
The ‘Elephants all the way down’ story is usually attributed to Bertrand Russell and a little old lady theosophist of limited rationality but great certainty.

DJ
March 8, 2011 7:53 am

The obvious…..
This is one perfect example of the concerns of why validation of data with resulting claims is so important. And this is just one example.
With a few more independent surveys of this quality done, there should be sufficient quality control to use to measure the results from the BEST in Berkeley against. We should be able to verify their verification. We must be able to find out where the turtles stop.
We need to be able to see that it’s not just one turtle in between 2 mirrors.

Juice
March 8, 2011 8:09 am

Jim Greig says:
March 8, 2011 at 2:41 am
“If your experiment needs statistics, you ought to have done a better
experiment.”
– Lord Ernest Rutherford

Rutherford said a lot of dumb things in his life. This is one of them.

Rod Everson
March 8, 2011 8:14 am

I think Steve E raises a valid point. The air temp over the land mass is being compared to (modeled) air temp over water, so there’s some comparing of “apples to oranges” going on here.
What I’m curious about is what the annual pattern of water temps looks like around the island. Because if it’s relatively stable (being a huge heat sink) then you’d expect the air temp over the island to be warmer than the air over the ocean during summer months and then cooler in winter months. But if the ocean warms significantly beyond “just above freezing” then the heat sink argument fails. So it would be nice to have data on the water temps too.
I don’t think Steve’s point should be dismissed though. It appears to me to be a valid one.

SteveE
March 8, 2011 8:16 am

Dave in Delaware says:
March 8, 2011 at 7:35 am
SteveE says: March 8, 2011 at 5:11 am
Grid cell data is not point data and so a direct comparison isn’t an easy thing as this article shows. … The grid cell is an average of the whole area and so won’t reflect the exact value for the point data.
True enough, but my observation was intended to be general, not specific to this island. Where did the ERA-40 reanalysis data come from? It was “homogenized up” from the point data, yes? So it is legitimate to estimate grid area from point data, but not to re-check climate model grid calculations back to that point data?
I am proposing that we close the loop:
1) Point data -> Area grid
2) Grid values -> Climate Model
3) Climate Model calculated value -> Point Data
—————-
I’m not sure how you could back calculate point data from an averaged grid cell. If I gave you an average of ten numbers and then asked you to tell me what the 3rd one was how could you do that?
The method I usually use is to compare the averaged distribution with the point data distribution.
I’d be interested to know if there is a better way of doing it though.
Cheers
Steve
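
One generic way to compare an averaged (grid-cell) distribution against a point-data distribution, though not necessarily the method Steve has in mind, is a two-sample Kolmogorov–Smirnov test. A minimal sketch with placeholder series:

```python
# Sketch: two-sample Kolmogorov-Smirnov test comparing a point-data distribution
# with a grid-cell (averaged) distribution. Placeholder series only.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
point_data = rng.normal(loc=-1.3, scale=4.0, size=500)   # placeholder station series
grid_data  = rng.normal(loc=0.3, scale=3.0, size=500)    # placeholder grid-cell series

stat, p = ks_2samp(point_data, grid_data)
print(f"KS statistic = {stat:.3f}, p-value = {p:.3g}")
# A small p-value suggests the two samples do not come from the same distribution.
```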

March 8, 2011 8:17 am

Great work and most interesting. Model calculations that do not properly honor the nearest data points are only slightly more useful than useless, unless the precision and accuracy required is so loose as to render the thing meaningless. I only have one bone to pick. “Synthetic data”, numbers from any source pretending to be real measured values, i.e. data, of anything, is an oxymoron. It is just something that bugs me about how we science people communicate between ourselves and with others. I wrote about it in one of my essays, “Thoughts About Data” (http://retreadresources.com/blog/?p=16).

John F. Hultquist
March 8, 2011 8:23 am

Jim Sorenson says:
March 8, 2011 at 5:54 am
Re: Richard Feynman, o-rings, and ice water

I saw a video years ago in which Dr. Feynman explained he was tipped off to the o-ring demonstration by an employee of Morton-Thiokol, a contractor for the shuttle motor. I don’t find that video on the web at the moment and may not remember it exactly. Nevertheless, I think he would think it strange to find his mental abilities so entangled with this o-ring and ice episode.
For example, the issue of the cold o-rings was known:
http://www.samizdata.net/blog/archives/2004/02/reflections_on_nasas_grim_anni.html
Engineers at Morton Thiokol, the Utah-based firm that made the solid rocket modules, had been concerned about the cold-weather performance of the seals, so much so that they took the unprecedented step of issuing a “no-launch warning” to NASA the day before the doomed flight.

John T
March 8, 2011 8:23 am

“It is mathematically more challenging to compare the model outputs to point data.”
For some reason that made me think of the line about how an engineer calculates the impact of a charging bull on a matador.
1) To simplify the problem, assume the bull is a sphere…

March 8, 2011 8:25 am

Seconded. I agree that SteveE has a valid point that this is an “apples to oranges” comparison.
Also I’d be interested in reading SteveE’s response to my ocean gridding question of
March 8, 2011 at 7:25 am above.

DJ
March 8, 2011 8:26 am

To the land v. water temps question, how do the ARGO data compare to remote island temp records…and to comparable grid output?

March 8, 2011 8:37 am

Thanks, Willis.

bob
March 8, 2011 8:42 am

Anthony Watts:
Thanks for the information on Jan Mayen. Your history of the island is interesting in that it shows the relative importance of weather stations. Who but a weather man would forsake home and hearth to go to such a God forsaken place?
In my Ham Radio life, I have contacted Jan Mayen, and that was in CW (Morse Code). The Jan Mayen operator almost had to be one of those weather guys.

March 8, 2011 8:48 am

Don’t know much about statistics.
Don’t know much about Jan Mayen isle.
But today at least for a while.
I do know that one and one are two.
And I know that it’s quite a trick.
To bend an isle into a Hockey Stick.
* * *
The real story here is boring. Arguments back and forth about statistics are silly to me since just looking at the GISS chart is enough to convince any rational person that there is no upswing in the natural warming trend. Here is a dirt simple plot of the GISS data to squish down the ridiculously inflated and thus noise-instead-of-signal-emphasizing Y axis that hides the obvious linearity of the trend: http://oi53.tinypic.com/1qqt6w.jpg
This island is near Iceland and Greenland, north of the UK. The UK also shows a boringly linear trend, and it’s in good company throughout Europe and even North America, going back not 90 years but over 300 (!). Those I have in a single glance here: http://i49.tinypic.com/rc93fa.jpg
Not even the global average as presented by the NOAA on their Climate.org web site shows any sign of divergence from a linear trend, as I plot here minus the usual deceptive chartsmanship tricks here: http://i49.tinypic.com/2mpg0tz.jpg

juanslayton
March 8, 2011 8:58 am

“turtles all the way down”
Surely someone should mention William James….

Frank Perdicaro
March 8, 2011 9:00 am

Mr. Wright’s comment is correct, but has been used in many other contexts.
The original seems to be Voltaire commenting on god.
The most famous (infamous?) is Adolph Hitler commenting on Jews. That
particular discussion has a long write-up by Eric Hoffer in “The True Believer”.
A good book in many respects, and one that can be read with AGW in mind.
Bob Dylan said it about himself.

bob
March 8, 2011 9:01 am

Willis:
You state: “The 95% confidence intervals of the two means (averages) don’t overlap, meaning that they come from distinct populations. “
That’s an accurate statement. But, even if the CI’s did overlap, does that necessarily mean that the two means were from the same population?
On Lucia’s Blackboard I believe she had a similar discussion when she was setting up her tests for the IPCC projected temperatures. I don’t remember her conclusion, though.
Would it be better to compare the mean of the model outputs to the CI of the data?
Also, Steve (I think) made the statement that your analysis fails because you are comparing point data with gridded calculations. Is it not the purpose of the gridding process to massage the data within a single cell to make it comparable to a single point?
Thanks for the article. I apologize if my questions are too elementary
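
On the confidence-interval question: non-overlapping 95% intervals do imply a significant difference, but overlapping intervals do not by themselves show that two means come from the same population; a direct two-sample test is the cleaner check. A minimal sketch with placeholder numbers, using Welch's t-test (which does not assume equal variances):

```python
# Sketch: Welch's t-test on two series as a more direct check than eyeballing
# confidence-interval overlap. Placeholder numbers only.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
obs   = rng.normal(loc=-1.3, scale=1.0, size=120)   # placeholder station series
model = rng.normal(loc=0.0,  scale=1.0, size=120)   # placeholder reanalysis series

t, p = ttest_ind(model, obs, equal_var=False)        # Welch's test
print(f"t = {t:.2f}, p = {p:.3g}")
# Small p: the two means are unlikely to come from the same population.
```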

Mattias
March 8, 2011 9:05 am

Interesting. But it would be great to see a comparison for more than just one single place. Comparison for several places, both over continents, in coastal areas, on islands and at different latitudes, would be interesting. I know it would mean more work, but the result would be more useful.

Billy Liar
March 8, 2011 9:10 am

SteveE says:
March 8, 2011 at 5:33 am
…As the last paragraph says; “the sea temperature around the island may increase from just above freezing” which is what the average for the modelled data is, +0.3C…
That’s 2.1C above freezing for seawater, -0.9C would be ‘just above freezing’ for seawater.
The point data for the island that is covered in snow and ice is unlikely to be representative of the whole grid cell, which is modelled as composed mostly of ocean.
Your point about temperature over the ocean: I’ve just got one word for you – advection.
The 2m temperature over the ocean depends on where the air mass came from and how long it’s been in place.

Jeff Carlson
March 8, 2011 9:14 am

I have a theory …
models are not science … they are not the observation of nature in search of an answer as to how it works … the modelers stopped observing nature a long time ago …

Ron Cram
March 8, 2011 9:22 am

Willis,
Out of curiosity, have you seen this paper?
http://www.scirp.org/Journal/PaperInformation.aspx?paperID=3438

Doug Proctor
March 8, 2011 9:32 am

The last 20 years of temperature records show – using the Hansen scenarios of 1988 as the culmination of all these computer models – the temperature history to be significantly lower than any of the “real world” considerations of the models. If no red rocket hits the planet by 2015, the projections will be so far off the mark the case for CO2 warming as the dominant factor will be untenable. Personally, I’d say the models have already failed.
What is it that still keeps the models in fashion? The temperature rise, the lack of mid-tropospheric heating, the humidity decline, the lack of accelerated storms, and the “missing heat” of the oceans seem – to me – significantly at odds with the models’ predictions. So what keeps them healthy?
To kill the credibility of a Mann or a Jones is one thing, as all you need to do is throw doubt about, or find a failing in one area to tar the rest (not a pun, but it could be). To kill the credibility of model requires at least a lack of correlation to reality. Which we seem to have. Regardless of individual stations – and I understand that 1.5C differences are germane to a “danger” measured in points of a degree – what are the models apparently “getting right”?

Phil.
March 8, 2011 9:32 am

So that’s why I say, don’t be fooled by people talking about “reanalysis data”. It is a reanalysis model, and from first indications not all that good a reanalysis model. If you want to understand the actual winter weather in Jan Mayen, you’d be well-advised to avoid the ERA-40, or February will bite you in the ice.
Should be borne in mind when there’s discussion here about the DMI 80ºN Arctic temperature.

March 8, 2011 9:34 am

Just a problem, Willis.
NCAR and ERA40 have no link.
ERA40 is an old reanalysis by ECMWF, i.e. it’s European.
NCAR and NCEP have their own reanalysis, which is American.

Editor
March 8, 2011 9:46 am

Willis, you cannot compare reanalysis data to station data and expect it to match up one-to-one. Your post is a straw man argument. Furthermore, you are comparing against first generation reanalysis models, which have been significantly improved since the 1990s when NCEP Reanalysis and the ERA-40 were implemented. The spectral resolution of the models does not come close to resolving the very complicated topography (tiny islands) at the station you chose to prove your point.
You are proving nothing with your analysis. I work with operational and reanalysis numerical weather prediction models on a daily basis — and what you just showed is a model bias in the ERA-40, which is a function of the model itself. Try comparing the station data to the newest ECMWF T1279 operational grids, which are readily available from the TIGGE archive.

SteveE
March 8, 2011 10:01 am

Willis Eschenbach says:
March 8, 2011 at 9:24 am
Hi Willis,
I apologise for accusing you of a big fail; you are correct in your comment.
However your new graph does add weight to my point. The summer SSTs are higher than the point data temps measured on land and also warmer than the winter temps.
Surely this explains why the grid cell data is higher than the point data measured on the island.

tty
March 8, 2011 10:15 am

Ryan Maue says:
“The spectral resolution of the models does not come close to resolving the very complicated topography (tiny islands) at the station you chose to prove your point.”
A single tiny island in the middle of a big ocean is “a very complicated topography”? I would think that is about as simple as topography can possibly be.
In that case, just how bad is ERA40 in areas with complicated topography, like e. g. Central Asia or Europe?

eadler
March 8, 2011 10:39 am

As I understand it, the objective of the modeling to fill in data is to produce an estimate of the temperature anomaly in the area. That is a different thing from trying to determine the exact temperature. A consistent warm or cold bias doesn’t make a difference under those circumstances.
I doubt that the people doing the modeling expect their results to be spot on. As Anthony himself points out, the local environment can affect the station temperature.
If you could show that the temperature anomaly was significantly affected, you might have an argument that a significant error is created.
Of course, the alternative is to omit the temperature from a grid when there is no data. In fact, the HADCRUT data does that, and seems to underestimate the global anomaly increase because it leaves out a lot of the Arctic region.
It is pretty clear that the temperature anomaly of an island surrounded by hundreds of miles of ocean would not be representative of the anomaly of the territory inside the grid cell.
It has also been pointed out, that there are better models out there than the one you used and found to be less accurate than you would like to see.
So I don’t consider your analysis very telling at all, despite the applause you have gotten from so many posters.

Editor
March 8, 2011 10:47 am

tty: yes, a little piece of land surrounded by ocean is a difficult analysis or forecasting situation for a model where a grid cell is 100 km x 100 km.
ERA-40 is not meant to represent every square kilometer of the Earth at street-level resolution. It is a large-scale model, just like the climate models. No one should attempt to compare station data to a grid point and expect the issue of representativeness to automatically disappear.
Similarly, when a forecast model is run for the next 7-days, it is verified afterward often against radiosondes at the given locations. However, it is not individual radiosondes, but usually a composite or collection of them to determine any vertical biases in the model analysis and forecast.

Kev-in-Uk
March 8, 2011 10:47 am

I am in total agreement with Willis’ apparent main argument – that modelling based on ‘other’ model data is essentially unrealistic (my summation!).
As I see it – and I am sure if I am wrong, someone will correct me – a bunch of station data is averaged (in a given grid); if there are limited (or no) stations, the grid is extrapolated from adjacent grids (?); then the grids are averaged together (so we have an average of an average of an extrapolation) to give us a gridded dataset ‘summed’ together to give us a potential ‘global’ (or regional) anomaly… is that right?
Then some comedian decides to reanalyse this, but instead of comparing the reanalysis model outputs to actual recorded station data, they compare output to the gridded average? If the model doesn’t fit, either the gridded averaging is wrong or the model is wrong, so they ‘tweak’ either (or both?)…
Whichever way you cut it, I cannot see that as making good science or even sense. It strikes me that more and more, we find the use of ‘actual’ (as in REAL observed) data further and further removed from the methodology…

Smokey
March 8, 2011 10:50 am

NikFromNYC,
Good graphs, I wonder if Izen looked at them? Izen commented:
“The actual data certainly confirms the warming and its magnitude over the global increase as predicted from AGW theory (sic) that higher latitudes will show greater effect.”
The basis for AGW is models. [The trend is simply emergence from the Little Ice Age.] But lots of folks still believe that runaway global warming is right around the corner. Convincing them otherwise is going to take time.
Men, it has been well said, think in herds; it will be seen that they go mad in herds, while they only recover their senses slowly, and one by one.
~ Charles Mackay, Extraordinary Popular Delusions and the Madness of Crowds

March 8, 2011 10:53 am

>> SteveE says:
March 8, 2011 at 10:01 am
However your new graph does add weight to my point. The Summer SST are higher than the point data temps measured on land and also warmer than the winter temps.
Surely this explains why the grid cell data is higher than the point data measured on the island. <<
I understand your concern about comparing apples and oranges. However, looking at the island data and the sea surface data, it appears that the grid cell was made from just the island data; otherwise the average would be much closer to the sea surface temperatures.
This then begs the original question: what transformed the only data used for that grid cell to make it result in a warmer, and warming, grid cell?

Eduardo Ferreyra
March 8, 2011 11:00 am

Hi, Willis, good post, though Ryan Maue has a point there.
But what I wanted to tell you is about Nasruddin, a “holy man” in the Persian dervish tradition. He can be at the same time an extremely wise man and a very dumb one. One story I like, because it depicts what the whole AGW scare means, is:
A man saw Nasruddin throwing bread crumbs around his garden and asked him why. Nasruddin said: “It keeps tigers away from my house”.
-“But there are no tigers in this region,”
the man said.
“So, you see, it works!”– said Nasruddin.

Editor
March 8, 2011 11:17 am

John F. Hultquist says:
March 8, 2011 at 8:23 am
Jim Sorenson says:
March 8, 2011 at 5:54 am
Re: Richard Feynman, o-rings, and ice water
> I saw a video years ago in which Dr. Feynman explained he was tipped off to the o-ring demonstration….
I read that in one of Feynman’s books. It confirmed my suspicion that it was staged, and that Feynman did it in part to use his standing as an authority figure to make an impression on people who might have ignored the same demonstration from a Morton-Thiokol engineer.
It kinda confirms to me that understanding and using office politics is important. I was also surprised that it seems to have boosted Feynman’s stature in the eyes of the beholders. I’m sure Feynman wasn’t looking for that, but it was something that the non-technical people apparently had never seen before but could understand.

March 8, 2011 11:20 am

If I wanted to pick a place to live I would pick the warmest of the three temperature profiles, but I can’t. One is water which I cannot live on and the other is make believe. So I am stuck with the coldest. Where people actually live matters.

Steven Mosher
March 8, 2011 11:25 am

Folks should note this.
1. DMI, the Arctic temps some people like to quote? They’re based on NCEP reanalysis.
2. Even “ground truth” “data” is filled with theory. Does anyone think that a thermometer records the physical property known as temperature?
It’s theory all the way down. All data is theory-laden. At the bottom, the theory that infuses the data is very, very hard to give up.

jim hogg
March 8, 2011 11:35 am

Seems to me that a little knowledge of statistics drags us away from the reality we’re attempting to identify and explain, and a lot of statistical knowledge too often (I didn’t say always!) serves only to obscure matters more.
I’m still hoping that someone with access to the necessary data will identify a few hundred stations – or more or less – around the world which have not been moved and not been subjected to environmental changes, and whose equipment over a lengthy period is consistent/has not been changed and can reasonably be assumed to be accurate.
Then the plan would be to plot the average (the only arithmetical processing) of the data – the raw data and only the raw data – for as long a period as is feasible – given such conditions – to see what the modern temperature record really looks like – so far as it’s possible to get an accurate representation. Only when we have that can we attempt to explain it and perhaps reach conclusions that are reliable, such as possibly – yes, it’s getting warmer, or surprisingly, no, it’s getting colder, or whatever, but we don’t know why exactly.
But that would be boring of course. It wouldn’t be sophisticated or adequately intellectual, and wouldn’t need its own arcane language. It would be easily within the grasp of the proles. And wouldn’t generate research funds. And would puncture one more pointless political football. But might help to restore some respect to the field of climate science. I’ve been waiting awhile, and I’m not optimistic.

greg holmes
March 8, 2011 11:43 am

The common sense approach shown here is quite breathtaking and I applaud it.
Here in the UK we have in the past been known for a common-sense approach; I know it scares the hell out of the “great and the good” (sarc) who rule over us. The solid explanation above is brilliant and cannot reasonably be denied.
Many thanks Willis.

izen
March 8, 2011 12:55 pm

@-Smokey says:
March 8, 2011 at 10:50 am
“Good graphs, I wonder if Izen looked at them? ….”
Yes, very pretty. It rather backs up the point that comparing station data with a model reanalysis of a grid cell is of limited value. And now it appears that the ERA-40 reanalysis may be neither recent enough nor of sufficient resolution to be relevant, and has been superseded by better re-analyses.
“The basis for AGW is models. [The trend is simply emergence from the Little Ice Age.]
No, the basis for AGW theory is measured physical quantities in the LWIR back radiation and outgoing spectra and the known thermodynamics of the constituents of the atmosphere.
Whenever people claim that the observed trend is “simply emergence from the Little Ice Age” it surely begs the question; WHY are we emerging from the LIA? Why did it not continue, or get even colder as it has around this point in the last 3 interglacial periods?
“But lots of folks still believe that runaway global warming is right around the corner. Convincing them otherwise is going to take time.”
Only the scientifically ignorant would think that RUNAWAY global warming is possible. The S-B relationship – energy emitted is proportional to temperature raised to the fourth power – means that any factor amplifying a warming effect would have to outpace that fourth-power growth in emission to achieve a ‘runaway’ effect. As anyone with a modicum of knowledge in this field will be aware, CO2 increases have a logarithmic influence, so they are NEVER going to generate a runaway effect.
Of course the fact that a runaway effect is virtually impossible does not preclude the probability of a measurable rise in global surface temperature from the measured rise in CO2.
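A rough, back-of-the-envelope sketch of that scaling argument in Python, using the standard simplified CO2 forcing expression and an assumed effective emission temperature of about 255 K; the numbers are illustrative only and are not taken from the post or from ERA-40:

    import math

    SIGMA = 5.67e-8      # Stefan-Boltzmann constant, W/m^2/K^4
    T_EFF = 255.0        # rough effective emission temperature of the Earth, K

    # Planck response: extra emission per degree of warming, d(sigma*T^4)/dT = 4*sigma*T^3
    planck_response = 4 * SIGMA * T_EFF ** 3        # ~3.8 W/m^2 per K

    # Simplified expression for the forcing from a CO2 doubling: dF = 5.35 * ln(2)
    forcing_per_doubling = 5.35 * math.log(2.0)     # ~3.7 W/m^2

    print(f"Planck (T^4) response:    {planck_response:.2f} W/m^2 per K")
    print(f"Forcing per CO2 doubling: {forcing_per_doubling:.2f} W/m^2")
    # Each successive doubling adds roughly the same ~3.7 W/m^2, while emission to
    # space rises with the fourth power of temperature, which is why CO2 forcing
    # by itself cannot produce a runaway.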

Kev-in-Uk
March 8, 2011 12:57 pm

Steven Mosher says:
March 8, 2011 at 11:25 am
valid points – but just because a thermometer is an imperfect way of measuring something does not make it useless. (In real terms, taking a physical temperature measurement automatically affects the temperature of the medium being measured.) Indeed, the whole basis of any physical measurement is that it is compared against another one, so it is a comparable measurement. This is definitely not the case with modelled or ‘adjusted’ data; there, one is comparing processed data with other processed data – in other words, neither has a ‘fixed’ point of physical reference.
Even if you regard historical records as ‘inaccurate’ to some degree (no pun intended), they are still actual measurements – not predicted/modelled/made-up figures. They may be erroneous for various reasons, but the error is a physical one, not a data-processing one!

Kev-in-Uk
March 8, 2011 1:09 pm

Am I missing something?
Either we are dealing with real measurements, or we aren’t.
If I measure, say, the size of an orange, and then average x million oranges, I will get an average size. But if I take only a few hundred oranges and try to extrapolate/interpolate a ‘Gaussian’-type distribution, will it be correct? To make it more correct (or realistic, if you prefer) you need more data points – but if I used the model curve from my few hundred samples to generate ‘new’ data points, I’d be stupid, because I’d be using processed data to produce/verify more processed data. The only REAL way to get more points is to bloody measure more damned oranges(?)
Isn’t that the basic process that Willis is referring/objecting to?

Dave in Delaware
March 8, 2011 1:10 pm

SteveE says: March 8, 2011 at 8:16 am
I’m not sure how you could back calculate point data from an averaged grid cell.
————–
I am not suggesting that one should ‘back calculate point data’ from the climate model output.
The model output gives an averaged grid cell value at a point in time. Compare the model averaged value directly with the point data at that place and time. Don’t make it more difficult than that. I guarantee that the climate modelers are doing this kind of comparison of climate model output to the grid cell synthetic data; that is how they tweak the model parameters. Take it the next step and also compare the model output to the original measured data in that time window. If there are 5 measured values within the grid cell, then do the comparison 5 times.
We are not all that far apart. I believe the point you make is that it won’t be exactly right. And I say: it doesn’t matter. What matters is, if I do this comparison for 1000 or more point measurements, and compute a total of all the Delta-squared values, I now have a yardstick measure of how well that particular climate model run performed. If the model inputs are tweaked and re-run, does my yardstick get better or not? If a different climate model is run, and the yardstick computed against the same 1000+ points, is the yardstick value better in model A or model B? Does a particular climate model have ‘tendencies’ to run hot in summer or cold in winter, or vice versa? That would impact the yardstick value. Is the model stable – that is, are the year-over-year yardstick values about the same, getting bigger, getting smaller?
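A minimal sketch of that yardstick idea in Python; the station readings and matching model values below are invented for illustration, not drawn from any actual model run:

    # Hypothetical observed station temperatures and the model's gridcell
    # values for the same places and times (degrees C).
    observed = [2.1, -3.4, 15.0, 7.2, -0.5]
    modelled = [2.8, -1.9, 14.1, 8.0, 0.3]

    # Yardstick: total of squared differences (smaller is better).
    yardstick = sum((m - o) ** 2 for o, m in zip(observed, modelled))
    print(f"sum of delta-squared = {yardstick:.2f}")

    # Tweak the model (or run model B), recompute against the same
    # observations, and compare the two yardstick totals.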
And to the point made by Willis – if there is a big problem (like UHI) in the homogenization process that created the synthetic grid data, then the climate model has no chance, but unless there is a comparison back to the measured data, how would you know?
I’m off my soapbox now. Thanks for listening. And thanks once again Willis, for thought provoking articles.

John Johnston
March 8, 2011 1:24 pm

@Juice, your supposed Lord Rutherford of Nelson quote is a misquote:
“If your experiment needs statistics, you ought to have done a better experiment.”
The actual quote was: “If your result needs a statistician then you should design a better experiment”
There is a huge difference. Rutherford did say the odd dumb thing (who the hell does not?), but this was definitely not one of them. Lord Rutherford was the New Zealand chemist and physicist who laid the groundwork for the development of nuclear physics by investigating radioactivity, and who first split the atom. He collaborated with Bohr in describing atomic structure, and won the Nobel Prize in 1908, when that award still meant something.
He knew that statistics could be used to show correlation and thus point the way to formulating theories and identifying subjects worthy of investigation. He also knew that they could never be used to prove the theories. Only data will serve as results.

Stephen Brown
March 8, 2011 2:00 pm
Christian
March 8, 2011 2:06 pm

Put very (too) simply:
1. You collect the data from various sampling points.
2. You perform statistics on it to model it together with its neighbours, in an attempt to create a ‘regional’ model.
3. You generate values for areal subsets that represent averages, including a ‘global’ average for the entire dataset.
4. You use these values to model ‘regional’ patterns.
The problem is, the ‘regional’ average generated for a subarea can differ markedly from the original point value. There are many possible reasons, but the main one is that the errors are large because there are not enough sample points.
It’s all about the errors. ‘Regional’ air temperatures are also flawed e.g. because the sampling is not optimised for ‘regional’ purposes, the sampling points are too few and irregularly distributed, one can go on and on.
The key observation is that, for climate science and even modelling to be useful, the land temperature data and the sampling density and protocols need to be optimised. I have never seen these critical issues discussed in any study by the policy makers, but they emerge from studies like http://www.surfacestations.org.
In the meantime, we can do our best with the satellite and Argos datasets which are still young but hold out the hope that they will produce reliable regional and global datasets.

George E. Smith
March 8, 2011 2:43 pm

Presumably the “Gridded Data” that you mentioned is actual measurements from grid points on the earth, so the models can recreate the data measured at those actual points? Or am I looking at this too simplistically?
When people talk about gridded data, I visualize an Electrolytic Tank, used to model some electron optics setup, with EO electrodes, immersed in a conducting fluid, so you can apply Voltages to each electrode, and then map the electric field with a probe dipped into the electrolyte, at different “grid Points”.
Is this along the same lines as your gridded data measurements?

George E. Smith
March 8, 2011 2:54 pm

“”””” John Johnston says:
March 8, 2011 at 1:24 pm
@Juice, your supposed Lord Rutherford of Nelson quote is a misquote:
“If your experiment needs statistics, you ought to have done a better experiment.”
The actual quote was: “If your result needs a statistician then you should design a better experiment”
There is a huge difference. Rutherford did say the odd dumb thing (who the hell does not?) but this was definitely not one of them. Lord Rutherford was the New Zealand Chemist and Physicist who laid the groundwork for the development of nuclear physics by investigating radioactivity. and who first split the atom. He collaborated with Bohr in describing atomic structure, and won the Nobel Prize in 1908, when that award still meant something. “””””
Well, we Kiwis are quite proud of our Lord Rutherford; but I don’t remember him splitting the atom; but I might have been doing something else that day.
What he did do, I believe, is fire alpha particles at very thin sheets of mica and observe the scattering angles on the other side of the sheet. He had calculated the expected scattering angles from the then-current plum pudding model of the atom, which had the charges spread over the atomic volume, and the expected deflection of the doubly ionised helium could be calculated from how far from the CG of the charge it passed.
He observed, quite unexpectedly, that some alphas scattered over very large angles, over 90 degrees (back scatter), and he concluded that there must be something very dense and localised in the middle of his plum pudding for the alphas to ricochet off.
Thus was born the nuclear atom. Maybe it was Fermi who first “split the atom”, I don’t remember that either, or maybe he was the first to observe a chain reaction in nuclear fission. But I doubt that Rutherford ever split an atom, so that anybody noticed.

George E. Smith
March 8, 2011 3:03 pm

Well, it is apparently quite urban mythology that Rutherford split the atom. Maybe Cockcroft and Walton did, in the Cavendish Laboratory, when Rutherford was running the lab; but that was more than a decade after he got his Nobel prize, which was NOT for splitting any atoms.
We actually had a 600 keV Cockcroft-Walton accelerator in our Physics Department, which was used to fire deuterons at heavy-ice targets to make beams of polarized neutrons (14 MeV, I believe), and grad students did double scattering experiments on those polarized neutron beams. I built a very efficient neutron scintillation detector to count those neutron beams, so they didn’t have to run the accelerator for weeks to get good statistics.

Michael Larkin
March 8, 2011 3:15 pm

From Jorge Luis Borges’ story “On Exactitude in Science”:
“. . . In that Empire, the Art of Cartography attained such Perfection that the map of a single Province occupied the entirety of a City, and the map of the Empire, the entirety of a Province. In time, those Unconscionable Maps no longer satisfied, and the Cartographers Guilds struck a Map of the Empire whose size was that of the Empire, and which coincided point for point with it. The following Generations, who were not so fond of the Study of Cartography as their Forebears had been, saw that that vast Map was Useless, and not without some Pitilessness was it, that they delivered it up to the Inclemencies of Sun and Winters. In the Deserts of the West, still today, there are Tattered Ruins of that Map, inhabited by Animals and Beggars; in all the Land there is no other Relic of the Disciplines of Geography.”
In climate science, the “map”, which is confused with the territory itself, is huge and of dubious utility. Borges’ work is reputed, incidentally, to have had linkages with Sufism and, if only indirectly, with Nasrudin tales, both of which have interested me for nearly forty years. Somehow, it doesn’t come as a surprise that some others here are interested, too.
Nasrudin is great for inoculating the reflective mind against rote conditioning.

ThinkingScientist
March 8, 2011 4:47 pm

I am looking at the graphs that Willis has posted. Clearly the ERA-40 is slightly biased warmer than the observations at the station. Also, as expected, the SSTs have a lower variance and are generally warmer.
Note that you can scale up from a point measurement to a grid cell measurement but you cannot “downscale” from a grid cell to a point measurement – if the latter were possible you could measure real data at a lower resolution and then “magically” recover information at a higher resolution.
However, we can say something about the change of properties under upscaling. The mean is generally unchanged and the variance goes down as we upscale. In the case shown here by Willis we have a very surprising result. This is a tiny island in a large grid cell dominated by water. What I find surprising is that the ERA-40 appears to have the dynamic range of the MET station result, when it should have a much smaller variance and look like the SST curve, because the spatial average of the station data and the SST data would look very much like the SST-only curve – the station data influence should be very small, as it’s a tiny rock in a very large ocean grid cell.
Or am I misunderstanding the information being presented on the graphs?
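A sketch of that area-weighting argument, using the rough figures quoted elsewhere in this thread (an island of ~350 km2 in a ~10,000 km2 cell) and invented monthly values; it only illustrates what a true area-weighted block average would look like, not what ERA-40 actually does:

    # Hypothetical monthly means (degrees C) for the land station and the
    # surrounding sea surface; the numbers are invented for illustration.
    station = [-5.0, -6.0, -4.0, -1.0, 2.0, 5.0, 7.0, 7.0, 4.0, 1.0, -2.0, -4.0]
    sea     = [ 1.0,  1.0,  1.5,  2.0, 3.0, 5.0, 7.0, 8.0, 6.5, 4.5,  3.0,  2.0]

    land_fraction = 350.0 / 10_000.0   # ~3.5% of the cell is island

    # An area-weighted block average weights each surface by its share of the
    # cell, so the cell value sits almost on top of the sea series.
    cell = [land_fraction * t_land + (1.0 - land_fraction) * t_sea
            for t_land, t_sea in zip(station, sea)]

    for month, values in enumerate(zip(station, sea, cell), start=1):
        print("month %2d: station %5.1f  sea %4.1f  cell %5.2f" % ((month,) + values))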

Dave Springer
March 8, 2011 5:06 pm

“Models all the way down” isn’t neccessarily a bad thing:
http://tinyurl.com/4uzcyh2

March 8, 2011 5:09 pm

Isn’t there a Dr. Nasruddin who works for the IPCC?

HankHenry
March 8, 2011 5:15 pm

Hey guys. This “turtles all the way down” thing was funny the first time I heard it.
http://www.cafepress.com/turtleswaydown

Philip Peake (aka PJP)
March 8, 2011 5:27 pm

ThinkingScientist says:
March 8, 2011 at 4:47 pm

I don’t think you are misinterpreting the graphs at all.
Either the results represent a grid square, in which case the variation should be much lower, or they represent a point, in which case they are too high.

eadler
March 8, 2011 5:29 pm

Willis Eschenbach says:
March 8, 2011 at 12:23 pm
eadler says:
March 8, 2011 at 10:39 am
As I understand it, the objective of the modeling to fill in data is to produce an estimate of the temperature anomaly in the area. That is a different thing from trying to determine the exact temperature. A consistent warm or cold bias doesn’t make a difference under those circumstances.
Thanks, eadler. The problem arises when (as is often the case) we don’t have any data, or only scarce data, for a gridcell. I say if we want to analyze GCM model results, if we have no data for that gridcell, we don’t compare it to anything.
Instead, people use the ERA-40 climate model to manufacture imaginary, synthetic data for that gridcell. Then they compare the GCM results to the imaginary, synthetic data and TA-DA!. They announce that their model matches the observations. Which is what the Nature flood folks said, their model matched the observations.
But they weren’t observations at all. They were just the results of another model. You end up comparing two sets of synthetic temperatures. I don’t see the value in that when you have real temperatures to compare to. Nor do I see the value in that when (as is often the case) you have no real temperatures to compare to.
Finally, whether a warm or cold bias makes no difference as you claim depends on what you are analyzing and what the bias looks like. If the bias is not constant year-round, for example, it may not be a problem for some kinds of annual analyses but it would be a problem for most seasonal analyses.
Regards,
w.

It does not seem to me as big an issue as you are making it. In fact the Hadcrut and GISS data are not so different except for the regions where there is no data, where GISS interpolates and Hadcrut leaves it out. It turns out that climate change is the fastest in the Arctic, and Hadcrut in leaving out data, underestimates the amount of climate change as a result.
In the first place, the only way to run a global model is to start with data at all grid points. An undetermined initial condition at a number of your grid points is a nonstarter. Approximate data achieved by interpolation using some kind of model based on actual nearby data is superior to no data at all.
Looking at the evolution of climate in time takes a different model than using a model which interpolates the data to determine the temperature from neighboring grid points at each time step. Your analogy of turtles on turtles is way overstated. Most of the structure holding up the models is temperature data. There are a few holes in the data, which are plugged using model interpolation. For past data, it is the only thing we can do.
Regarding the Island example, would you prefer that it be used as the data representing the entire grid in which it lies, which is predominantly ocean?
If not, then I don’t see the basis for your criticism.

Smokey
March 8, 2011 5:38 pm

Dave Springer,
I like your models better than climatologists’ models.
* * *
Izen says:
“Only the scientifically ignorant would think that RUNAWAY global warming is possible.”
Where were you when Algore and Michael Mann were scaring the ignoratii with tales of runaway global warming? We could have used your help. Their alarming charts with a vertical line showing temperatures increasing exponentially were the cause of the taxpayers getting fleeced.
“Runaway global warming” was the operative phrase up until recently, when it became clear that Ms Gaia wasn’t cooperating. So the new Orwellian phrase became “climate change.”
But you know what? I’m sticking with “runaway global warming” when the opportunity arises, to remind folks that that was the purported reason for Cap & Tax, carbon credits, wind farms, and just about every other bad, expensive idea.
I’m holding their feet to the fire on “runaway global warming.” They need to start refunding all the wasted loot, and admit that they were wrong.
And BTW, AGW was a hypothesis, not a theory. It’s never been a theory. Theories must be able to make consistent, validated predictions. Since the AGW hypothesis has been so wrong [and CAGW has always been wrong, as you admit], the AGW hypothesis must be regarded as only a conjecture at this point. You can learn about the differences here.

eadler
March 8, 2011 6:05 pm

ThinkingScientist says:
March 8, 2011 at 4:47 pm
However, we can say something about the change of properties under upscaling. The mean is generally unchanged and the variance goes down as we upscale. In the case shown here by Willis we have a very surprising result. This is a tiny island in a large grid cell dominated by water. What I find surprising is that the ERA-40 appears to have the dynamic range of the MET station result, when it should have a much smaller variance and look like the SST curve, because the spatial averaging of the station data and the SST data would look very much like the SST only curve – the station data influence should be very small as its a tiny rock in a very large ocean grid cell.
Or am I misunderstanding the information being presented on the graphs?

I am also puzzled. The synthesized data referred to by Eschenbach seems to be ERA gridpoint data from an online database. Why is it so close to the data from a specific island? The grid points shown on the map are over the ocean, not on land.
I had assumed that the ERA model was run specially to get a prediction for a point on the map, but reading carefully, he says the data came from an online database: http://climexp.knmi.nl/data/iera40_t2m_-9E_70.5N_n_su.dat
But the location of Jan Mayen Island is given by him as:
Figure 1. Location of Jan Mayen Island, 70.9°N, 8.7°W. White area in the upper left is Greenland. Gridpoints for the ERA-40 analysis shown as red diamonds. Center gridpoint data used for comparisons.
It doesn’t make sense to demand the sort of accuracy Eschenbach is asking for under the circumstances.

Roger Knights
March 8, 2011 8:55 pm

Smokey says:
March 8, 2011 at 5:38 pm
Where were you when Algore and Michael Mann were scaring the ignoratii with tales of runaway global warming? We could have used your help. Their alarming charts with a vertical line showing temperatures increasing exponentially were the cause of the taxpayers getting fleeced.
“Runaway global warming” was the operative phrase up until recently, when it became clear that Ms Gaia wasn’t cooperating. So the new Orwellian phrase became “climate change.”
But you know what? I’m sticking with “runaway global warming” when the opportunity arises, to remind folks that that was the purported reason for Cap & Tax, carbon credits, wind farms, and just about every other bad, expensive idea.
I’m holding their feet to the fire on “runaway global warming.” They need to start refunding all the wasted loot, and admit that they were wrong.

Good idea. “Runaway” is much more pointed than “catastrophic,” because it keeps the focus on Gore’s discredited movie, the discredited hockey stick, and the alarmists’ attempt to panic the public.

Roger Knights
March 8, 2011 8:57 pm

PS: “Runaway” also keeps the focus on the alarmists’ unjustified reliance on presumed positive feedbacks.

March 8, 2011 9:06 pm

So far, no one has contradicted my understanding of the graphs, so I will state again what I find surprising: if the ERA-40 is an upscaled large grid cell temperature value, then it is very surprising that the ERA-40 looks so like the station data. It shouldn’t: it should look like the SST curve, because the station data area contribution is trivial in the upscaling to this grid cell. The SST response should dominate.
The basics of upscaling like this are well known in mining and reservoir engineering, so why does the ERA-40 result behave like this and apparently reproduce the station curve (in terms of variance, albeit biased to a slightly higher temperature) when it should actually reproduce something closer to the SST curve? To me this points to a fundamental problem in the way the gridding is done in models – it suggests they are simply gridding point values instead of upscaling to block averages weighted by land/sea area within the target grid cell.

Alan S. Blue
March 8, 2011 9:45 pm

The very first solid piece of work merging the point-data and the gridcell data should be the complete cross-calibration of the satellite data with the ground data at as many stations as possible. Hell, one would be nice. Not correlations with anomalies – but cross-calibrations that would allow the actual calculation of the error inherent in using our point-source instruments to estimate global gridcells.

TomVonk
March 9, 2011 3:15 am

Same point as ThinkingScientist .
The grid value should look like the SST because the grid is ocean.
Nobody knows what the real SST average is, but the variance should be close to the point SST measured at Jan Mayen.
It is not. The grid average looks like a land station, not a sea station – it is cool instead of warm and varies a lot instead of a little.
If the red curve really is some “reanalysis” leading to a grid average where sea is dominating, then it is clearly garbage.
But Willis makes another valuable and much finer point.
Knowing that:
a) the real spatial grid average around the real-world Jan Mayen is unknown,
b) the point measurement at the real-world Jan Mayen is known but can’t be compared to the grid average,
c) models only produce numerical spatial grid averages,
what allows us to validate/compare the virtual numerical grid averages?
The data we have can’t be used, and the data that should be used we don’t have.
So Willis is right, it’s completely circular – models produce grid averages which are compared to grid averages produced by other models. It is indeed turtles all the way down.
To make it fit, nothing easier – just change something in one or several models.
No pesky real world can be allowed to get in the way.
Somebody would like to compare the numerical games to real, good old-fashioned temperature measurements that we have?
Too bad, they are irrelevant 🙂

Vidar
March 9, 2011 3:21 am

First, I would like to stress the importance of validating models against observations. However, Willis here displays a very poor validation, which would never have passed any peer review (or critical eye of any researcher working on the matter). What Willis has actually done, is to compare a single point to a single model gridcell, and conclude that the model is a failure. If that isn’t cherry-picking, nothing is!
And honestly, Willis, I don’t buy your argument that you chose Jan Mayen because it was such an easy area to compare – because Jan Mayen is a lonely island in the middle of nowhere. In fact, you have performed a test that the model by no means is expected to pass. Either you didn’t know, or you did it on purpose. If the first, you should have known, else you have very little knowledge of modeling. If the latter, you are only trying to score cheap points, and should be left with little trustworthiness. Sorry. Therefore, it is puzzling to see how many thumbs up you get…
As I see it, there are 3 points explaining why the model failed your test, and at least one of them has already been mentioned:
1. The resolution of ERA40 is 100*100 km, which means that each gridcell represents an area of 10,000 km2. For comparison, Jan Mayen is ~50 km long and ~6 km wide, a total of ~350 km2 (look it up in Wikipedia). How do you expect Jan Mayen to be represented in ERA40? Let me tell you: It’s not! The model does not “see” Jan Mayen, except that recordings of temperature and pressure are assimilated into it. Hence, picking a lonely, small island for your validation is a plain stupid thing to do! UNLESS you explicitly state that you in fact expect a bias in the model due to the lack of proper representation of the island. Nobody would expect any close fit between model and observations in such a validation – well, nobody with any knowledge of modelling, that is.
2. Jan Mayen is located in the vicinity of the Arctic Front, that is, the oceanic front between the cold polar/arctic water masses in the Greenland Sea and the relatively warm Atlantic water masses in the Norwegian Sea. Thus, being located in an area with large horizontal temperature gradients, you would expect locally large deviations between model and observations because 1) the model is coarse and will smooth out the gradients, and 2) the model may displace the front somewhat. Again, not the smartest area in which to validate a coarse-resolution, global model…
However, I don’t expect you to be all that familiar with the oceanic conditions within the Nordic Seas. But if your analysis was expected to be a good one, you should definitely have some background knowledge of the area. Being rude, I could even suspect that you chose an area to which your AUDIENCE is unfamiliar…
3. A re-analysis model is dependent on observations to be realistic. That is, with more observations the model will be kept on track; with fewer observations the model will tend to “live its own life” and perhaps move away from reality. You chose an area with exceptionally low abundance of observations…
Again, you chose an area where nobody would expect the model to do a good job. It’s OK to perform a tough test, but running a close-to-impossible test will leave you with as few answers as performing a too-easy test.
4. I also have to throw in a fourth point, although it is partly related to #1. Having a resolution of 100 x 100 km, ERA40 hardly even resolves cold-air outbreaks from the sea ice in the Greenland Sea, which will heavily influence the temperature at Jan Mayen at each occurrence, and will certainly contribute to the model being biased high. Not to mention more local effects at Jan Mayen itself…
Last, a small comment on the discussion of SST vs. ground temperature, and the fact that the model agrees more with the ground temperature than the SST. The model gives SAT (Surface Air Temperature), not SST (Sea Surface Temperature), and therefore it is no wonder that the ERA40 temperature plunges below the freezing temperature of seawater in winter, even though the temperature in ERA40 reflects temperature above the sea surface, and NOT temperature on Jan Mayen.
As I first said, validation of models is very important, and robust validation techniques are needed. But unfortunately, as I see it, Willis here demonstrates how NOT to validate a model. It could have been an interesting exercise, but he fails to recognize the limitations of his study, and he also reveals a severe lack of understanding when designing his analysis. And I am a bit worried when I see all the praise he gets for his (failure of an) attempt. This is NOT the kind of analysis I would like to use to replace what is being done at institutes around the world. But obviously, the “crowd” of climate sceptics would more than welcome such “science”. Am I right?

Skeptical Chymist
March 9, 2011 3:52 am

Re: Jorge Luis Borges’ story “On Exactitude in Science”
From ‘Sylvie and Bruno Concluded’ by Lewis Carroll (1889):
Mein Herr looked so thoroughly bewildered that I thought it best to change the subject. “What a useful thing a pocket-map is!” I remarked.
“That’s another thing we’ve learned from your Nation,” said Mein Herr, “map-making. But we’ve carried it much further than you. What do you consider the largest map that would be really useful?”
“About six inches to the mile.”
“Only six inches!” exclaimed Mein Herr. “We very soon got to six yards to the mile. Then we tried a hundred yards to the mile. And then came the grandest idea of all! We actually made a map of the country, on the scale of a mile to the mile!”
“Have you used it much?” I enquired.
“It has never been spread out, yet,” said Mein Herr: “the farmers objected: they said it would cover the whole country, and shut out the sunlight! So we now use the country itself, as its own map, and I assure you it does nearly as well. Now let me ask you another question. What is the smallest world you would care to inhabit?”
Was it Picasso that said “All artists copy. Great artists steal!”?

March 9, 2011 4:32 am

Vidar says:
“The model gives SAT (Surface Air Temperature), and not SST (Sea Surface Temperature), and therefore, it is no wonder why the ERA40 temperature plunges below freezing temperature of seawater in winter, although the temperature in ERA40 reflects temperature above the sea surface, and NOT temperature on Jan Mayen”
Thank you for addressing one of my points above: I was misreading the information in the temperature curves. I was expecting a largely ocean grid cell to look like the SST curve due to upscaling, but it looks more like the MET station curve because what ERA-40 shows is air temperature. But this then begs the question: what can ERA-40 be validated against? It suggests it can only be compared to station data on land, and preferably where the station data is dense, as this allows the consequence of upscaling to be examined – comparing upscaled grid cells to point data tells us very little. Unless, perhaps, the upscaling argument is irrelevant at Jan Mayen, i.e. the air temperature from ERA-40 should follow the station data even for a large grid cell, because the local air mass is spatially very smooth and homogeneous?

tallbloke
March 9, 2011 5:08 am

Willis says:
“The ERA-40 synthetic data runs warmer than the observations in every single month of the year. On average, it is 1.3°C warmer”

So this represents an error of around half a percent on the absolute (kelvin) scale – approximately equal to the global temperature change in the last 200 years. Not bad for a computer model.
I’m liking NCEP reanalysis data more and more. Whether it is useful or not all depends on what purpose you put it to.

Bernd Felsche
March 9, 2011 6:17 am

Me = Pedant
Air temperatures by themselves are meaningless unless one also knows the moisture content (via e.g. the wet-bulb temperature). Only then can the heat content of the air be determined.
“Warming” may simply be due to drier air. The wetter the air, the more heat (energy) it takes to raise its temperature.
Enthalpy is the light side; entropy the dark. 😉
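A rough sketch of that point, using a standard psychrometric approximation for the enthalpy of moist air (the temperatures and humidity ratios below are illustrative only, not measurements):

    def moist_air_enthalpy(t_c, w):
        """Approximate enthalpy of moist air, kJ per kg of dry air.

        t_c : dry-bulb temperature in degrees C
        w   : humidity ratio (kg water vapour per kg dry air)
        Uses the common approximation h = 1.006*t + w*(2501 + 1.86*t).
        """
        return 1.006 * t_c + w * (2501.0 + 1.86 * t_c)

    # Same air temperature, very different heat content depending on moisture:
    print(moist_air_enthalpy(20.0, 0.002))   # fairly dry air, ~25 kJ/kg
    print(moist_air_enthalpy(20.0, 0.010))   # moist air,      ~46 kJ/kg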

Colonel Sun
March 9, 2011 6:45 am

Vidar wrote:
“1. The resolution of ERA40 is 100*100 km, which means that each gridcell represent an area of 10.000 km^2.”
“5. Last, a small comment on the discussion on the SST vs. ground temperature, and the fact that the model agrees more with the ground temperature than the SST. The model gives SAT (Surface Air Temperature), and not SST (Sea Surface Temperature), and therefore, it is no wonder why the ERA40 temperature plunges below freezing temperature of seawater in winter, although the temperature in ERA40 reflects temperature above the sea surface, and NOT temperature on Jan Mayen.”
While I agree with your comments, I think that they raise more questions than answers:
1. Both SteveE and you have raised valid points with regard to Jan Mayen island, which sparked my curiosity about the actual grid size of the ERA-40 grid. The link to this paper
http://goo.gl/0Jk4G
suggests that the ERA-40 grid size is indeed 100km x 100km = 10,000km^2.
So I think that the key question is: how well is the ocean temperature known within such a grid size, given that the oceans constitute about 71% of the earth’s surface?
The completed ARGO ocean temperature/salinity measurement system
Wiki: http://goo.gl/7v2Kx
has a nominal gridding of about 300km x 300km = 90,000km^2,
Wiki: http://goo.gl/sED5J
the actual distribution being quasi-random as the sensors are floating about and are carried by ocean currents.
Wiki: http://goo.gl/JeOx8
So one question that arises is: what are the systematic errors in interpolating down to 10,000km^2 from 90,000km^2, which is a not insignificant factor of 9 in grid area? Thus ERA-40 claims to give SAT [Surface Atmospheric Temperature] values over 71% of the planet’s surface, the oceans, at nearly an order of magnitude higher resolution than the best current SST [Sea Surface Temperature] measurements. How can this model then be validated?
As the ARGO array was only completed in Nov 2007, it’s unlikely that the ERA-40 model has much merit, if any, the further back in time one goes before that completion date, since it involves interpolating over millions of km^2 of ocean without taking the effects of local variation in ocean temperature (ocean currents, interactions with the atmosphere, etc.) into account.
Of course, the situation is even worse, as SSTs are only a proxy measurement for SATs.
2. As you pointed out, ERA-40 gives SATs, not SSTs. Now unless SATs have been measured at 100km x 100km resolution over the ocean surface over a period of time, how can such a model be validated? Some might argue that sampling a subset of grid cells is sufficient. However, this would only be true if SATs were slowly varying across space and relatively constant in time. Neither is the case, as you yourself have referred to the Arctic Front and its temperature gradients. If there are no measurements of SATs at 100km x 100km resolution against which to validate over the time span of the model, then how are its outputs anything more than guesswork?
Based on my above two points, I’d be interested in an explanation as to why the ERA-40 model is not pure GIGO, or at best a crude model requiring much, much more observational empirical data as input. Certainly not a model with any predictive skill on which to base economic decisions.
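A toy illustration of why interpolating a coarse field onto a finer grid adds no real information (purely synthetic one-dimensional numbers; nothing here comes from ARGO or ERA-40):

    import math
    import random

    random.seed(0)

    # A "true" temperature field at 100 km spacing over 900 km: a smooth
    # large-scale gradient plus small-scale structure.
    true_fine = [10.0 + 0.5 * i + 1.5 * math.sin(2.0 * i) + random.gauss(0, 0.5)
                 for i in range(9)]

    # Coarse observation: one block mean per 300 km (three fine cells per block).
    coarse = [sum(true_fine[i:i + 3]) / 3.0 for i in range(0, 9, 3)]

    # "Downscale" by giving each fine cell its block mean -- the best a naive
    # interpolation can do without independent fine-scale information.
    downscaled = [coarse[i // 3] for i in range(9)]

    rmse = math.sqrt(sum((d - t) ** 2 for d, t in zip(downscaled, true_fine)) / 9.0)
    print(f"RMS error of the downscaled field: {rmse:.2f} degC")  # the sub-grid detail is gone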

izen
March 9, 2011 7:47 am

Smokey says:
March 8, 2011 at 5:38 pm
“Where were you when Algore and Michael Mann were scaring the ignoratii with tales of runaway global warming? We could have used your help. Their alarming charts with a vertical line showing temperatures increasing exponentially were the cause of the taxpayers getting fleeced. ”
AlGore is a politician, so it is inevitable that he would spout nonsense, but I would like to see a link to a quote of Michael Mann using the ‘runaway global warming’ meme in the sense of unlimited and accelerating warming from CO2, or of temperatures increasing exponentially. I think you just attributed this to him when it was written by some media hack to sensationalize a story; it is NOT a scientific claim and never has been.
If you google ‘runaway global warming’ the vast majority of the time it is used by ‘skeptics’ and the shallow end of the media pool.
“And BTW, AGW was a hypothesis, not a theory. It’s never been a theory. Theories must be able to make consistent, validated predictions. ”
It was a hypothesis when proposed by Arrhenius and Fourier.
By the time of Callendar it was a theory, but with very little experimental support.
By the late 50s, when Revelle, Keeling and Plass had established the rising CO2, the radiative transfer functions and the inability of the oceans to stabilise CO2 levels in less than geologic timescales, it was a well-supported theory.
Nowadays it is a theory with strong support from direct observation, physical measurement and confirmed theoretical predictions.

Rod Everson
March 9, 2011 8:15 am

Vidar wrote:
3. A re-analysis model is dependent on observations to be realistic. That is, more observations: the model will be kept on track; fewer observations: the model will tend to “live its own life” and perhaps move away from reality. You chose an area with exceptionally low abundance of observations… Again, you chose an area where nobody would expect the model to do a good job.
But the fact that the model will “live its own life” is exactly the point of all this. 2010 (or 2009?) was supposedly one of the warmest years on record, but this was due to the model showing the Arctic to be exceptionally warm, in spite of the lack of factual observations, which necessitated relying instead upon the model results.
Given the obvious bias in the research community toward global warming being a reality (due, I’m convinced, to the massive infusion of our tax money as long as one toes the party line), I’m very much afraid that this was another case of the model, as you put it, living its own life. The problem is that the “own life” of the model is heavily determined by the modeler and his/her own perspective (see the Hockey Stick for illustration).
It seems to me that temps in the summer, over land, should rise above the SST temps, due to the ocean’s mixing and its huge heat-sink capabilities, and that in summer the SAT over the ocean should fall below the SAT over Jan Mayen. But I don’t know. Just curious. Besides, I’m skeptical of numbers displayed to the order of tenths of a degree anyway. All of the numbers are probably suspect, including the data points on Jan Mayen. After all, that’s what the Surface Stations project illustrated. We can’t trust most of the data.
But what the hey, let’s devote another few $trillion to stopping a problem we can’t measure close enough to even know if we have it, much less whether we’re fixing it.

ThomasU
March 9, 2011 8:18 am

Vidar says:
March 9, 2011 at 3:21 am
As I first said, validation of models is very important, and robust validation techniques are needed. But unfortunately, as I see it, Willis here demonstrates how NOT to validate a model. It could have been an interesting exercise, but he fails to recognize the limitations of his study, and he also reveals a severe lack of understanding when designing his analysis. And I am a bit worried when I see all the praise he gets for his (failure of an) attempt. This is NOT the kind of analysis I would like to use to replace what is being done at institutes around the world. But obviously, the “crowd” of climate sceptics would more than welcome such “science”. Am I right?
No, Vidar, you are wrong! The “crowd” of people who are sceptical of climatism wants facts, real observations, conclusions drawn from carefully relating observations and assessing the outcome – in short: THE SCEPTICAL CROWD WANTS REAL SCIENCE! And not untestable models which use highly questionable input figures instead of data, as Willis rightly pointed out.
I want to thank Colonel Sun (March 9, 2011 @ 6:45 am) for his effort in sourcing the information on the Argo resolution! Has it ever occurred to you, Vidar, to question the ERA-40 resolution? Did you ever attempt what Willis did – to question the model and try to validate it? If so, it would be nice to share your findings. If you didn’t, then it might be time to begin. Go for it, question the models as well as you can, help all of us – not just the “crowd” of sceptics – to better understand these issues. AND TRY TO BEHAVE LIKE A TRUE SCIENTIST: look for the facts, try as hard as you can to find facts which invalidate your models and theories.
As I see it, ERA-40 cannot be validated at all! If this is true, then why is it used? Is there no scientist around who has the courage to stand up for science, for sound and solid labor? Eisenhower was just all too right!

March 9, 2011 8:27 am

Izen says:
“I would like to see a link to a quote of Micheal Mann using the ‘runaway global warming’ meme in the sense of unlimited and accelerating warming from CO2 or temperatures increasing exponentially.”
Here you go.
And obviously you didn’t read [or maybe didn’t comprehend] the link I provided with the definitions of a Conjecture, a Hypothesis and a Theory. There is no possible way that the CAGW conjecture could fit the definition of a theory.
Your argument is simply a consensus argument, with no factual evidence. Arrhenius recanted his 1896 paper with a newer 1906 paper – which reduced climate sensitivity to less than the lower end of the IPCC’s current guesstimate. The claim that Arrhenius formulated a “theory” [which by definition must be able to make reliable and reasonably accurate predictions] has been debunked.
If you have a problem with the definition of a theory, Dr Glassman has a link at the end of his article. You can ask him directly to explain the differences to you. Words matter, and in this case you are using incorrect words.

Edim
March 9, 2011 8:46 am

Izen says:
“Only the scientifically ignorant would think that RUNAWAY global warming is possible.”
Is it possible on VENUS?
I think the scientifically ignorant (IPCC, consensus…) are being exposed.

wsbriggs
March 9, 2011 9:13 am

With all the discussion/counter-discussion of Willis’ blurb, the one point that seems to be missing from the discussion is that the resolution is too poor from a Nyquist sampling point of view. All the fun things they’re doing to upscale the data just mean they’re trying to get around the lack of resolution. Peanut-buttering data across wide swaths of the planet doesn’t produce anything but useless, garbage, numerical collections – I won’t dignify the results as datasets, they’re not. As a starting point for GIGO, they are, however, perfect.
There is no substitute for measured data with error bands and a knowledge of the time sequence of the samples. When we get measurements with decent temporal resolution, and reasonable error bands, then we can start seriously discussing how the data can help provide insight into what’s going on with the climate.
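A quick one-dimensional sketch of the Nyquist point: with samples every 100 km, any feature shorter than 200 km is aliased onto a spuriously smooth, larger-scale pattern (the wavelengths here are illustrative, not tied to any real station network):

    import math

    spacing_km = 100.0       # sample spacing; Nyquist wavelength is 2 * spacing = 200 km
    true_wavelength = 150.0  # a real feature shorter than the Nyquist limit
    alias_wavelength = 300.0 # what those samples look like instead

    for n in range(7):
        x = n * spacing_km
        true_val = math.cos(2 * math.pi * x / true_wavelength)
        alias_val = math.cos(2 * math.pi * x / alias_wavelength)
        print(f"x = {x:5.0f} km   150-km wave: {true_val:6.3f}   300-km wave: {alias_val:6.3f}")

    # The two columns are identical at every sample point: the under-sampled
    # 150 km feature cannot be distinguished from a 300 km one.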

March 9, 2011 11:26 am

wsbriggs wrote:
March 9, 2011 at 9:13 am
“With all the discussion/counter-discussion of Willis’ blurb, the one point that seems to be missing from the discussion is the resolution is too poor from a Nyquist sampling point of view. All the fun things they’re doing to upscale the data, just means they’re trying to get around the lack of resolution. Peanut buttering data across wide swaths of the planet doesn’t get anything but useless, garbage, numerical collections – I won’t dignify the results as datasets, they’re not. As a starting point for GIGO, they are, however, perfect.
There is no substitute for measured data with error bands and a knowledge of the time sequence of the samples. When we get measurements with decent temporal resolution, and reasonable error bands, then we can start seriously discussing how the data can help provide insight into what’s going on with the climate.”
Well said.
Thank you for succinctly making the point I was belabouring.

March 9, 2011 1:19 pm

I see much angst in some of the comments regarding the temps recorded on Jan Mayen vs the temps over the ocean. Being a simplistic guy, I tend to think, well, simplistically.
We are measuring AIR temps, here, right? Well, the isle is 34 miles long (please forgive the units. Even after decades in research, I still prefer the mile to km) and VERY narrow. A quick search shows that the AVERAGE wind speed on the isle is over 14 mph. So, on average, the island air is replaced with air from over the ocean in less than 4 seconds, max (more likely less than one second unless the wind is in the exact direction of the isle’s length), unless you believe the wind is simply rotating only over the isle (that would be a neat trick for nature!).
So why wouldn’t the AIR temps of the isle be exactly the same as the ocean AIR temps? I would welcome a reasoned response as to why the air temps should differ considering the size of the isle and the wind speed.

March 9, 2011 2:03 pm

Alas, in rereading my post I see that my fingers got ahead of my brain. Those seconds should read hours, and to be more precise, at average wind speeds, the maximum time any given air stays over the isle ranges from about twenty minutes to 2.5 hours, with 20 minutes being closer to reality than hours (look at the shape of the island).
The gist of the argument remains. How much of a temperature change will the air pick up while over the isle? Also consider where the temperature is being measured. If the temperatures are measured at a location in the center of the isle, the air would have been over the land for about 10 minutes to an hour and a quarter. I would posit that the island/ocean air is too well mixed for there to be any significant temperature difference between them.
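A worked version of that arithmetic (taking the island as roughly 34 miles long and about 4 miles across at its widest, close to the ~6 km width quoted earlier in the thread, with the stated 14 mph average wind; all figures are approximate):

    island_length_miles = 34.0   # along the long axis
    island_width_miles = 4.0     # roughly, across the narrow axis
    wind_mph = 14.0              # quoted average wind speed

    longest_hours = island_length_miles / wind_mph            # wind blowing along the island
    shortest_minutes = island_width_miles / wind_mph * 60.0   # wind blowing across it

    print(f"longest residence time:  {longest_hours:.1f} hours")      # ~2.4 hours
    print(f"shortest residence time: {shortest_minutes:.0f} minutes") # ~17 minutes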

Mark W
March 9, 2011 6:51 pm

Hi Willis,
I’m just trying to get my head around all of this and learn a bit of basic statistics along the way, and I wonder if you could clarify something?
I don’t understand how you computed your 95% confidence intervals on the data. I was under the impression that the 95% CI for normally distributed data (which I am guessing you assume) is 1.96*SD (where SD is the standard deviation). For both the data sets you present the SD is fairly large, order 4 degrees I think. So how did you possibly get such tiny numbers as your “error” on the mean of the datasets? I expected it to be much larger from looking at the scatter in the data. Please could you clarify?
many thanks!!
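One likely resolution of that puzzle (a sketch only, not a statement of how the intervals in the post were actually computed): 1.96*SD describes the spread of individual observations, while a 95% interval for the mean shrinks by the square root of the sample size:

    import math
    import random

    random.seed(1)
    data = [random.gauss(0.0, 4.0) for _ in range(500)]   # synthetic series with SD ~ 4 degC

    n = len(data)
    mean = sum(data) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))

    spread_of_observations = 1.96 * sd            # ~8 degC: where single values fall
    ci_of_the_mean = 1.96 * sd / math.sqrt(n)     # ~0.35 degC: uncertainty of the mean itself

    print(f"SD = {sd:.2f}, 1.96*SD = {spread_of_observations:.2f}, "
          f"1.96*SD/sqrt(n) = {ci_of_the_mean:.2f}")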

TomVonk
March 10, 2011 5:40 am

So why wouldn’t the AIR temps of the isle be exactly the same as the ocean AIR temps? I would welcome a reasoned response as to why the air temps should differ considering the size of the isle and the wind speed.
Indeed, why?
And why should the rest of the air, which is only in contact with the ocean (hundreds of thousands of km² of it), differ significantly from the temperature of the ocean, considering its size and the wind speeds?
There is only one answer:
we have no clue what it is anywhere other than at exactly 1 point, which happens to be the station on Jan Mayen.
For me THIS is the point Willis made: a “reconstruction” of an average air temperature over 10,000 km² of ocean where only 1 point is measured, and that point happens NOT to be above the ocean, is just a numerical artefact which might be about anything you want.
If you feel like it, you can say that it follows the SST closely. If you don’t, then you can say that it follows the Jan Mayen data closely. Or anything in between.
Whatever you choose can’t be falsified by real measurements anyway, pretty much by definition.
That’s what wsbriggs called GIGO.