Guest Post by Willis Eschenbach
A learned man was arguing with a rube named Nasruddin. The learned man asked, “What holds up the Earth?” Nasruddin said, “It sits on the back of a giant turtle.” The learned man knew he had Nasruddin then. He asked, “But what holds up the turtle?”, expecting Nasruddin to be flustered by the question. Nasruddin simply smiled. “Sure, and as your worship must know, being a learned man, it’s turtles all the way down …”
I’ve written before of the dangers of mistaking the results of the ERA-40 and other “re-analysis” computer models for observations or data. If we just compare models to models and not to data, then it’s “models all the way down,” not resting on real world data anywhere.
I was wondering where on the planet I could demonstrate the problems with ERA-40. I happened to download the list of stations used in the CRUTEM3 analysis, and the first one was Jan Mayen Island. “Perfect”, I thought. Middle of nowhere, tiny dot, no other stations for many gridcells in any direction.
Figure 1. Location of Jan Mayen Island, 70.9°N, 8.7°W. White area in the upper left is Greenland. Gridpoints for the ERA-40 analysis shown as red diamonds. Center gridpoint data used for comparisons.
How does the ERA-40 reanalysis data stack up against the Jan Mayen ground data?
Figure 2. Actual temperature data for Jan Mayen Island and ERA-40 nearest gridpoint reanalysis “data”. ERA-40 data from KNMI. Jan Mayen data from GISS.
It’s not pretty. The ERA-40 simulated data runs consistently warmer than the observations in both summer and winter. The 95% confidence intervals of the two means (averages) don’t overlap, indicating that they come from distinct populations. Often the ERA-40 data is two or more degrees warmer in the winter, but occasionally and unpredictably it is 3 to 5 degrees cooler. Jan Mayen’s year-round average is below freezing; the ERA-40 average is above freezing. The annual cycle of the two, shown in Figure 3 below, is also revealing.
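For readers who want to reproduce this kind of check, here is a minimal sketch of the confidence-interval comparison in Python. The two series are random placeholders, not the actual Jan Mayen or ERA-40 numbers; substitute the GISS station series and the KNMI gridpoint series.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
station = rng.normal(-1.0, 4.0, 240)     # placeholder: 20 years of monthly means, deg C
reanalysis = rng.normal(0.3, 4.0, 240)   # placeholder: ERA-40 gridpoint series

def ci95(x):
    """95% confidence interval of the mean, via the t distribution.
    Note: monthly temperatures are autocorrelated, so this naive CI is
    somewhat too narrow; this is only a sketch of the comparison."""
    half = stats.t.ppf(0.975, len(x) - 1) * x.std(ddof=1) / np.sqrt(len(x))
    return x.mean() - half, x.mean() + half

lo_s, hi_s = ci95(station)
lo_r, hi_r = ci95(reanalysis)
print(f"station CI:    [{lo_s:.2f}, {hi_s:.2f}] C")
print(f"reanalysis CI: [{lo_r:.2f}, {hi_r:.2f}] C")
print("intervals overlap:", not (hi_s < lo_r or hi_r < lo_s))
```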
Figure 3. Two annual cycles (Jan-Dec) of the ERA-40 synthetic data and Jan Mayen temperature.
The ERA-40 synthetic data runs warmer than the observations in every single month of the year. On average, it is 1.3°C warmer. In addition, the distinctive winter signature of Jan Mayen (February averages warmer than either January or March) is not captured at all in the ERA-40 synthetic data.
So that’s why I say, don’t be fooled by people talking about “reanalysis data”. It is a reanalysis model, and from first indications not all that good a reanalysis model. If you want to understand the actual winter weather in Jan Mayen, you’d be well-advised to avoid the ERA-40, or February will bite you in the ice.
The use of “reanalysis data” has some advantages. Because the reanalysis data is gridded, it can be compared directly to model outputs. It is mathematically more challenging to compare the model outputs to point data.
But that should be a stimulus to develop better mathematical comparison methods. It shouldn’t be a reason to interpose a second model in between the first model and the data. All that can do is increase the uncertainty.
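One such method is to interpolate the gridded model field to the station’s coordinates, rather than inflating the station to a gridcell. A minimal sketch, with an invented coarse grid bracketing Jan Mayen (the temperatures are made up for illustration, not ERA-40 values):

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical coarse grid bracketing Jan Mayen; temperatures invented.
lats = np.array([67.5, 70.0, 72.5])
lons = np.array([-12.5, -10.0, -7.5, -5.0])
temps = np.array([[ 0.5,  0.7,  1.0,  1.2],
                  [-0.2,  0.1,  0.4,  0.6],
                  [-1.1, -0.8, -0.5, -0.3]])  # deg C, indexed (lat, lon)

# Bilinear interpolation of the gridded field to the station location.
interp = RegularGridInterpolator((lats, lons), temps, method="linear")
at_station = interp(np.array([[70.9, -8.7]]))[0]  # Jan Mayen: 70.9N, 8.7W
print(f"gridded field interpolated to the station: {at_station:.2f} C")
```

This compares the model to the point data directly, without interposing a second model.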
In addition, because both models involved (the various GCMs and the ERA-40) are conceptually related, both being current-generation climate models, we would expect the correlations to be artificially high. In other words, a model’s output is likely to fit another related model’s output better than it fits observational data. Data is ugly and has sudden jumps and changes. Computer model output is smooth and continuous. Which will fit better?
My conclusion? The ERA-40 is unsuited for the purpose of validating model results. Compare model results to real data, not to the ERA-40. Comparing models to models is a non-starter.
Regards to everyone,
w.
[UPDATE] Several people have asked about the sea surface temperatures in the area. Here they are:
Figure 4. As in Figure 2, but including HadSST sea surface temperature (SST) data for the gridcell containing Jan Mayen. SST data from KNMI.
Figure 5. As in Figure 3, but including HadSST sea surface temperature (SST) data for the gridcell containing Jan Mayen. SST data from KNMI.
Note that SST is always higher than the Jan Mayen temperature. This is not true for the ERA-40 reconstruction model output.
Presumably the “Gridded Data” that you mentioned is actual measurements from grid points on the earth, so the models can recreate the data measured at those actual points? Or am I looking at this too simplistically?
When people talk about gridded data, I visualize an electrolytic tank used to model some electron-optics setup, with EO electrodes immersed in a conducting fluid, so you can apply voltages to each electrode and then map the electric field with a probe dipped into the electrolyte at different “grid points”.
Is this along the same lines as your gridded data measurements ?
“”””” John Johnston says:
March 8, 2011 at 1:24 pm
@Juice, your supposed Lord Rutherford of Nelson quote is a misquote:
“If your experiment needs statistics, you ought to have done a better experiment.”
The actual quote was: “If your result needs a statistician then you should design a better experiment”
There is a huge difference. Rutherford did say the odd dumb thing (who the hell does not?) but this was definitely not one of them. Lord Rutherford was the New Zealand Chemist and Physicist who laid the groundwork for the development of nuclear physics by investigating radioactivity. and who first split the atom. He collaborated with Bohr in describing atomic structure, and won the Nobel Prize in 1908, when that award still meant something. “””””
Well, we Kiwis are quite proud of our Lord Rutherford, but I don’t remember him splitting the atom; then again, I might have been doing something else that day.
What he did do, I believe, is fire alpha particles at very thin sheets of mica and observe the scattering angles on the other side of the sheet. He had calculated the expected scattering angles from the then-current plum pudding model of the atom, which had the charges spread over the atomic volume; the expected deflection of the doubly ionised helium could be calculated from how far from the CG of the charge it passed.
He observed, quite unexpectedly, that some alphas scattered over very large angles, over 90 degrees (backscatter), and he concluded that there must be something very dense and localised in the middle of his plum pudding for the alphas to ricochet off.
Thus was born the nuclear atom. Maybe it was Fermi who first “split the atom”; I don’t remember that either. Or maybe he was the first to observe a chain reaction in nuclear fission. But I doubt that Rutherford ever split an atom, at least not so that anybody noticed.
Well, it is apparently quite urban mythology that Rutherford split the atom. Maybe Cockcroft and Walton did, in the Cavendish Laboratory while Rutherford was running the lab; but that was more than a decade after he got his Nobel Prize, which was NOT for splitting any atoms.
We actually had a 600 keV Cockcroft-Walton accelerator in our physics department, which was used to fire deuterons at heavy-ice targets to make beams of polarized neutrons (14 MeV, I believe), and grad students did double-scattering experiments on those polarized neutron beams. I built a very efficient neutron scintillation detector to count those neutron beams, so they didn’t have to run the accelerator for weeks to get good statistics.
From Jorge Luis Borges’ story “On Exactitude in Science”:
“. . . In that Empire, the Art of Cartography attained such Perfection that the map of a single Province occupied the entirety of a City, and the map of the Empire, the entirety of a Province. In time, those Unconscionable Maps no longer satisfied, and the Cartographers Guilds struck a Map of the Empire whose size was that of the Empire, and which coincided point for point with it. The following Generations, who were not so fond of the Study of Cartography as their Forebears had been, saw that that vast Map was Useless, and not without some Pitilessness was it, that they delivered it up to the Inclemencies of Sun and Winters. In the Deserts of the West, still today, there are Tattered Ruins of that Map, inhabited by Animals and Beggars; in all the Land there is no other Relic of the Disciplines of Geography.”
In climate science, the “map”, which is confused with the territory itself, is huge and of dubious utility. Borges’ work is reputed, incidentally, to have had linkages with Sufism and, if only indirectly, with Nasrudin tales, both of which have interested me for nearly forty years. Somehow, it doesn’t come as a surprise that some others here are interested, too.
Nasrudin is great for inoculating the reflective mind against rote conditioning.
I am looking at the graphs that Willis has posted. Clearly the ERA-40 is slightly biased warmer than the observations at the station. Also, as expected, the SSTs have a lower variance and are generally warmer.
Note that you can scale up from a point measurement to a grid cell measurement, but you cannot “downscale” from a grid cell to a point measurement – if the latter were possible, you could measure real data at a lower resolution and then “magically” recover information at a higher resolution.
However, we can say something about the change of properties under upscaling: the mean is generally unchanged and the variance goes down as we upscale. In the case shown here by Willis we have a very surprising result. This is a tiny island in a large grid cell dominated by water. What I find surprising is that the ERA-40 appears to have the dynamic range of the MET station result, when it should have a much smaller variance and look like the SST curve, because the spatial average of the station data and the SST data would look very much like the SST-only curve – the station data influence should be very small, as it’s a tiny rock in a very large ocean grid cell.
Or am I misunderstanding the information being presented on the graphs?
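ThinkingScientist’s upscaling point is easy to illustrate numerically. A sketch with purely synthetic data (not actual temperatures): block-averaging preserves the mean but collapses the variance, which is why a mostly-ocean gridcell average should look smoother than a point series.

```python
import numpy as np

rng = np.random.default_rng(1)
fine = rng.normal(0.0, 3.0, (100, 100))  # synthetic fine-scale field within one cell

# Upscale by block-averaging 10 x 10 blocks into a 10 x 10 coarse field.
coarse = fine.reshape(10, 10, 10, 10).mean(axis=(1, 3))

print(f"fine:   mean {fine.mean():+.3f}, variance {fine.var():.3f}")
print(f"coarse: mean {coarse.mean():+.3f}, variance {coarse.var():.3f}")
```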
“Models all the way down” isn’t necessarily a bad thing:
http://tinyurl.com/4uzcyh2
Isn’t there a Dr. Nasruddin who works for the IPCC?
Hey guys. This “turtles all the way down” thing was funny the first time I heard it.
http://www.cafepress.com/turtleswaydown
ThinkingScientist says:
March 8, 2011 at 4:47 pm
I don’t think you are misinterpreting the graphs at all.
Either the results represent a grid square, in which case the variation should be much lower, or they represent a point, in which case they are too high.
Willis Eschenbach says:
March 8, 2011 at 12:23 pm
eadler says:
March 8, 2011 at 10:39 am
As I understand it, the objective of the modeling used to fill in data is to produce an estimate of the temperature anomaly in the area. That is a different thing from trying to determine the exact temperature. A consistent warm or cold bias doesn’t make a difference under those circumstances.
Thanks, eadler. The problem arises when (as is often the case) we don’t have any data, or only scarce data, for a gridcell. I say that if we want to analyze GCM model results and we have no data for that gridcell, we don’t compare it to anything.
Instead, people use the ERA-40 climate model to manufacture imaginary, synthetic data for that gridcell. Then they compare the GCM results to the imaginary, synthetic data and TA-DA! They announce that their model matches the observations. That is what the Nature flood folks said: their model matched the observations.
But they weren’t observations at all. They were just the results of another model. You end up comparing two sets of synthetic temperatures. I don’t see the value in that when you have real temperatures to compare to. Nor do I see the value in that when (as is often the case) you have no real temperatures to compare to.
Finally, whether a warm or cold bias makes a difference depends on what you are analyzing and what the bias looks like. If the bias is not constant year-round, for example, it may not be a problem for some kinds of annual analyses, but it would be a problem for most seasonal analyses.
Regards,
w.
It does not seem to me as big an issue as you are making it. In fact the Hadcrut and GISS data are not so different, except for the regions where there is no data, where GISS interpolates and Hadcrut leaves it out. It turns out that climate change is fastest in the Arctic, and Hadcrut, in leaving out data, underestimates the amount of climate change as a result.
In the first place, the only way to run a global model is to start with data at all grid points. An undetermined initial condition at a number of your grid points is a nonstarter. Approximate data achieved by interpolation using some kind of model based on actual nearby data is superior to no data at all.
Modeling the evolution of climate in time is a different task from interpolating the temperature from neighboring grid points at each time step. Your analogy of turtles on turtles is way overstated. Most of the structure holding up the models is temperature data. There are a few holes in the data, which are plugged using model interpolation. For past data, it is the only thing we can do.
Regarding the Island example, would you prefer that it be used as the data representing the entire gridcell in which it lies, which is predominantly ocean?
If not, then I don’t see the basis for your criticism.
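For the kind of gap-filling eadler describes above (estimating an empty gridcell from actual nearby data), a common minimal approach is inverse-distance weighting. A sketch, with invented station locations and temperatures:

```python
import numpy as np

def idw_fill(target, stations, temps, power=2.0):
    """Inverse-distance-weighted estimate at `target` from nearby stations.
    Distances are naive Euclidean degrees, acceptable only for a small,
    illustrative region; a real analysis would use great-circle distance."""
    d = np.linalg.norm(np.asarray(stations, float) - np.asarray(target, float), axis=1)
    w = 1.0 / d ** power
    return float(np.sum(w * np.asarray(temps)) / np.sum(w))

# Invented nearby observations as (lat, lon) and deg C:
stations = [(69.7, -18.9), (78.2, 15.6), (65.7, -18.1)]
temps = [-2.1, -8.4, 0.3]
print(f"IDW estimate for the empty cell: {idw_fill((70.9, -8.7), stations, temps):.2f} C")
```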
Dave Springer,
I like your models better than climatologists’ models.
* * *
Izen says:
“Only the scientifically ignorant would think that RUNAWAY global warming is possible.”
Where were you when Algore and Michael Mann were scaring the ignoratii with tales of runaway global warming? We could have used your help. Their alarming charts with a vertical line showing temperatures increasing exponentially were the cause of the taxpayers getting fleeced.
“Runaway global warming” was the operative phrase up until recently, when it became clear that Ms Gaia wasn’t cooperating. So the new Orwellian phrase became “climate change.”
But you know what? I’m sticking with “runaway global warming” when the opportunity arises, to remind folks that that was the purported reason for Cap & Tax, carbon credits, wind farms, and just about every other bad, expensive idea.
I’m holding their feet to the fire on “runaway global warming.” They need to start refunding all the wasted loot, and admit that they were wrong.
And BTW, AGW was a hypothesis, not a theory. It’s never been a theory. Theories must be able to make consistent, validated predictions. Since the AGW hypothesis has been so wrong [and CAGW has always been wrong, as you admit], the AGW hypothesis must be regarded as only a conjecture at this point. You can learn about the differences here.
ThinkingScientist says:
March 8, 2011 at 4:47 pm
However, we can say something about the change of properties under upscaling: the mean is generally unchanged and the variance goes down as we upscale. In the case shown here by Willis we have a very surprising result. This is a tiny island in a large grid cell dominated by water. What I find surprising is that the ERA-40 appears to have the dynamic range of the MET station result, when it should have a much smaller variance and look like the SST curve, because the spatial average of the station data and the SST data would look very much like the SST-only curve – the station data influence should be very small, as it’s a tiny rock in a very large ocean grid cell.
Or am I misunderstanding the information being presented on the graphs?
I am also puzzled. The synthesized data referred to by Eschenbach seems to be ERA-40 gridpoint data from an online database. Why is it so close to the data from a specific island? The grid points shown on the map are over the ocean, not on land.
I had assumed that the ERA model was run specially to get a prediction for a point on the map, but reading carefully, he says the data came from an online database: http://climexp.knmi.nl/data/iera40_t2m_-9E_70.5N_n_su.dat
But the location of Jan Mayen Island is given by him as:
Figure 1. Location of Jan Mayen Island, 70.9°N, 8.7°W. White area in the upper left is Greenland. Gridpoints for the ERA-40 analysis shown as red diamonds. Center gridpoint data used for comparisons.
It doesn’t make sense to demand the sort of accuracy Eschenbach is asking for under the circumstances.
Good idea. “Runaway” is much more pointed than “catastrophic,” because it keeps the focus on Gore’s discredited movie, the discredited hockey stick, and the alarmists’ attempt to panic the public.
PS: “Runaway” also keeps the focus on the alarmists’ unjustified reliance on presumed positive feedbacks.
So far, no one has contradicted my understanding of the graphs, so I will state again what I find surprising: if the ERA-40 is an upscaled large-grid-cell temperature value, then it is very surprising that the ERA-40 looks so like the station data. It shouldn’t: it should look like the SST curve, because the station data’s area contribution is trivial in the upscaling to this grid cell. The SST response should dominate.
The basics of upscaling like this are well known in mining and reservoir engineering, so why does the ERA-40 result behave like this and apparently reproduce the station curve (in terms of variance, albeit biased to a slightly higher temperature) when it should actually reproduce something closer to the SST curve? To me this points to a fundamental problem in the way the gridding is done in models – it suggests they are simply gridding point values instead of upscaling to block averages weighted by land/sea area within the target grid cell.
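The area-weighted block average ThinkingScientist has in mind would look roughly like this. A sketch assuming a 100 km x 100 km cell and an island of about 350 km² (figures that also come up later in the thread); the temperatures are invented:

```python
# Area-weighted gridcell average for a cell that is mostly ocean.
# All numbers are invented or rounded for illustration only.
land_fraction = 350.0 / 10_000.0  # island area over gridcell area, ~3.5%
t_land = -2.0                     # station-like land temperature, deg C
t_sea = 1.5                       # SST-like value for the surrounding water

t_cell = land_fraction * t_land + (1.0 - land_fraction) * t_sea
print(f"land weight: {land_fraction:.1%}")
print(f"area-weighted cell average: {t_cell:.2f} C  (dominated by the SST term)")
```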
The very first solid piece of work merging the point-data and the gridcell data should be the complete cross-calibration of the satellite data with the ground data at as many stations as possible. Hell, one would be nice. Not correlations with anomalies – but cross-calibrations that would allow the actual calculation of the error inherent in using our point-source instruments to estimate global gridcells.
Same point as ThinkingScientist.
The grid value should look like the SST because the grid is ocean.
Nobody knows what the real SST average is, but the variance should be near to the point SST measure at Jan Mayen.
It is not. The grid average looks like a land station, not a sea station – it is cool instead of warm and varies much instead of little.
If the red curve is really some “reanalysis” leading to a grid average where sea is dominating, then it is clearly garbage.
But Willis makes another valuable and much finer point.
Knowing that:
a) the real spatial grid average around the real-world Jan Mayen is unknown
b) the point measure of the real-world Jan Mayen is known but can’t be compared to the grid average
c) models only produce numerical spatial grid averages
what allows us to validate or compare the virtual numerical grid averages?
The data we have can’t be used, and the data that must be used, we don’t have.
So Willis is right, it’s completely circular – models produce grid averages which are compared to grid averages produced by other models. It is indeed turtles all the way down.
To make it fit, nothing easier – just change something in one or several models.
No pesky real world can be allowed to get in the way.
Somebody would like to compare the numerical games to real, good old-fashioned temperature measures that we have?
Too bad, they are irrelevant 🙂
First, I would like to stress the importance of validating models against observations. However, Willis here displays a very poor validation, which would never have passed any peer review (or the critical eye of any researcher working on the matter). What Willis has actually done is to compare a single point to a single model gridcell, and conclude that the model is a failure. If that isn’t cherry-picking, nothing is!
And honestly, Willis, I don’t buy your argument that you chose Jan Mayen because it was such an easy area to compare – because Jan Mayen is a lonely island in the middle of nowhere. In fact, you have performed a test that the model by no means is expected to pass. Either you didn’t know, or you did it on purpose. If the first, you should have known, else you have very little knowledge of modeling. If the latter, you are only trying to score cheap points, and should be left with little trustworthiness. Sorry. Therefore, it is puzzling to see how many thumbs up you get…
As I see it, there are 3 points explaining why the model failed your test, and at least one of them has already been mentioned:
1. The resolution of ERA40 is 100 x 100 km, which means that each gridcell represents an area of 10,000 km². For comparison, Jan Mayen is ~50 km long and ~6 km wide, a total of ~350 km² (see Wikipedia). How do you expect Jan Mayen to be represented in ERA40? Let me tell you: it’s not! The model does not “see” Jan Mayen, except that recordings of temperature and pressure are assimilated into it. Hence, picking a lonely, small island for your validation is a plain stupid thing to do! UNLESS you explicitly state that you in fact expect a bias in the model due to the lack of proper representation of the island. Nobody would expect any close fit between model and observations in such a validation – well, nobody with any knowledge of modelling, that is.
2. Jan Mayen is located in the vicinity of the Arctic Front, that is, the oceanic front between the cold polar/arctic water masses in the Greenland Sea and the relatively warm Atlantic water masses in the Norwegian Sea. Thus, being located in an area with large horizontal temperature gradients, you would expect locally large deviations between model and observations, because 1) the model is coarse and will smooth out the gradients, and 2) the model may displace the front somewhat. Again, not the smartest area in which to validate a coarse-resolution, global model….
However, I don’t expect you to be all that familiar with the oceanic conditions within the Nordic Seas. But if your analysis was expected to be a good one, you should definitely have some background knowledge of the area. Being rude, I could even suspect that you chose an area with which your AUDIENCE is unfamiliar…
3. A re-analysis model is dependent on observations to be realistic. That is, more observations: the model will be kept on track; fewer observations: the model will tend to “live its own life” and, perhaps, move away from reality. You chose an area with exceptionally low abundance of observations….
Again, you chose an area where nobody would expect the model to do a good job. It’s OK to perform a tough test, but running a close-to-impossible test will leave you with as few answers as performing a too-easy test.
4. I also have to throw in a fourth point, although it is partly related to #1. Having a resolution of 100 x 100 km, ERA40 hardly even resolves cold-air outbreaks from the sea ice in the Greenland Sea, which will heavily influence the temperature at Jan Mayen at each occurrence, and will certainly contribute to the model being biased high. Not to mention more local effects at Jan Mayen itself…
Last, a small comment on the discussion of SST vs. ground temperature, and the fact that the model agrees more with the ground temperature than the SST. The model gives SAT (Surface Air Temperature), and not SST (Sea Surface Temperature), and therefore it is no wonder that the ERA40 temperature plunges below the freezing temperature of seawater in winter: the temperature in ERA40 reflects the air above the sea surface, and NOT the temperature on Jan Mayen.
As I first said, validation of models is very important, and robust validation techniques are needed. But unfortunately, as I see it, Willis here demonstrates how NOT to validate a model. It could have been an interesting exercise, but he fails to recognize the limitations of his study, and he also reveals a severe lack of understanding in designing his analysis. And I am a bit worried when I see all the praise he gets for his (failure of an) attempt. This is NOT the kind of analysis I would like to see replace what is being done at institutes around the world. But obviously, the “crowd” of climate sceptics would more than welcome such “science”. Am I right?
Re: Jorge Luis Borges’ story “On Exactitude in Science”
From ‘Sylvie and Bruno Concluded’ by Lewis Carroll (1893):
Mein Herr looked so thoroughly bewildered that I thought it best to change the subject. “What a useful thing a pocket-map is!” I remarked.
“That’s another thing we’ve learned from your Nation,” said Mein Herr, “map-making. But we’ve carried it much further than you. What do you consider the largest map that would be really useful?”
“About six inches to the mile.”
“Only six inches!” exclaimed Mein Herr. “We very soon got to six yards to the mile. Then we tried a hundred yards to the mile. And then came the grandest idea of all! We actually made a map of the country, on the scale of a mile to the mile!”
“Have you used it much?” I enquired.
“It has never been spread out, yet,” said Mein Herr: “the farmers objected: they said it would cover the whole country, and shut out the sunlight! So we now use the country itself, as its own map, and I assure you it does nearly as well. Now let me ask you another question. What is the smallest world you would care to inhabit?”
Was it Picasso that said “All artists copy. Great artists steal!”?
Vidar says:
“The model gives SAT (Surface Air Temperature), and not SST (Sea Surface Temperature), and therefore it is no wonder that the ERA40 temperature plunges below the freezing temperature of seawater in winter: the temperature in ERA40 reflects the air above the sea surface, and NOT the temperature on Jan Mayen”
Thank you for addressing one of my points above: I was misreading the information in the temperature curves. I was expecting a largely ocean grid cell to look like the SST curve due to upscaling, but it looks more like the MET station curve because it is the air temperature shown by ERA-40. But this then begs the question: what can ERA-40 be validated against? It suggests it can only be compared to station data on land, and preferably where the station data is dense, as this allows the consequences of upscaling to be examined – comparing upscaled grid cells to point data tells us very little. Unless, perhaps, the upscaling argument is irrelevant on Jan Mayen, i.e. the air temperature from ERA-40 should follow the station data even for a large grid cell, because the local air mass is spatially very smooth and homogeneous?
Willis says:
“The ERA-40 synthetic data runs warmer than the observations in every single month of the year. On average, it is 1.3°C warmer”
So this represents an error of around half a percentage point, approximately equal to the global temperature change in the last 200 years. Not bad for a computer model.
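Presumably that half-percent figure is computed on the absolute (Kelvin) scale, since Jan Mayen’s annual mean is near freezing. A quick check of the arithmetic:

```python
bias = 1.3          # average warm bias quoted above, in kelvins
mean_temp = 273.15  # 0 deg C in kelvins; Jan Mayen's annual mean is near freezing
print(f"relative error on the absolute scale: {bias / mean_temp:.2%}")  # ~0.48%
```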
I’m liking NCEP reanalysis data more and more. Whether it is useful or not all depends on what purpose you put it to.
Me = Pedant
Air temperatures by themselves are meaningless unless one also knows the moisture content (via e.g. the wet-bulb temperature). Only then can the heat content of the air be determined.
“Warming” may simply be due to drier air. The wetter the air, the more heat (energy) it takes to raise its temperature.
Enthalpy is the light side; entropy the dark. 😉
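The heat-content point can be made concrete with the standard psychrometric approximation for the specific enthalpy of moist air. A sketch (the humidity ratios are invented for illustration):

```python
def moist_air_enthalpy(t_c, w):
    """Approximate specific enthalpy of moist air, kJ per kg of dry air.
    t_c: dry-bulb temperature, deg C; w: humidity ratio, kg water per kg dry air.
    Standard psychrometric approximation: h = 1.006*t + w*(2501 + 1.86*t)."""
    return 1.006 * t_c + w * (2501.0 + 1.86 * t_c)

# Invented humidity values: the same 25 C air carries very different heat content.
print(f"drier air,   25 C, w=0.005: {moist_air_enthalpy(25.0, 0.005):.1f} kJ/kg")
print(f"moister air, 25 C, w=0.015: {moist_air_enthalpy(25.0, 0.015):.1f} kJ/kg")
```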
Vidar wrote:
“1. The resolution of ERA40 is 100 x 100 km, which means that each gridcell represents an area of 10,000 km².”
“5. Last, a small comment on the discussion of SST vs. ground temperature, and the fact that the model agrees more with the ground temperature than the SST. The model gives SAT (Surface Air Temperature), and not SST (Sea Surface Temperature), and therefore it is no wonder that the ERA40 temperature plunges below the freezing temperature of seawater in winter: the temperature in ERA40 reflects the air above the sea surface, and NOT the temperature on Jan Mayen.”
While I agree with your comments, I think that they raise more questions than answers:
1. Both SteveE and you have raised valid points with regard to Jan Mayen island, which sparked my curiosity about the actual grid size of the ERA-40 grid. The link to this paper
http://goo.gl/0Jk4G
suggests that the ERA-40 grid size is indeed 100 km x 100 km = 10,000 km².
So I think the key question is: how well is the ocean temperature known at such a grid size, given that the oceans constitute about 71% of the earth’s surface?
The completed ARGO ocean temperature/salinity measurement system
Wiki: http://goo.gl/7v2Kx
has a nominal gridding of about 300 km x 300 km = 90,000 km²,
Wiki: http://goo.gl/sED5J
the actual distribution being quasi-random as the sensors are floating about and are carried by ocean currents.
Wiki: http://goo.gl/JeOx8
So one question that arises is: what are the systematic errors in interpolating down to 10,000 km² from 90,000 km², a not-insignificant factor of 9 in grid area? Thus ERA-40 claims to give SAT [Surface Atmospheric Temperature] values over 71% of the planet’s surface, the oceans, at nearly an order of magnitude higher resolution than the best current SST [Sea Surface Temperature] measurements. How can this model then be validated?
As the ARGO project was completed in Nov 2007, it’s unlikely that the ERA-40 model has much merit, if any, the further back in time one goes before this completion date, as it is interpolating over millions of km² of ocean without taking the effects of local variation in ocean temperature (ocean currents, interactions with the atmosphere, etc.) into account.
Of course, the situation is even worse as SSTs are only a proxy measurement for SATs.
2. As you pointed out, ERA-40 gives SATs, not SSTs. Now unless SATs have been measured at 100 km x 100 km resolution over the ocean surface over a period of time, how can such a model be validated? Some might argue that sampling a subset of grid cells is sufficient. However, this would only be true if SATs were slowly varying across space and relatively constant in time. Neither is the case, as you yourself have referred to the Arctic Front and its temperature gradients. If there are no measurements of SATs at 100 km x 100 km resolution against which to validate over the time of the model, then how are its outputs anything more than guesswork?
Based on my above two points, I’d be interested in an explanation as to why the ERA-40 model is not pure GIGO, or at best a crude model requiring much, much more observational empirical data as input. Certainly not a model with any predictive skill on which to base economic decisions.
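The factor-of-nine concern is easy to visualize: interpolating a coarsely sampled field onto a finer grid produces nine times as many numbers but no new information. A sketch with synthetic data:

```python
import numpy as np
from scipy.ndimage import zoom

rng = np.random.default_rng(2)
coarse = rng.normal(0.0, 1.0, (4, 4))  # stand-in for a field sampled at ~300 km
fine = zoom(coarse, 3, order=1)        # bilinear interpolation onto a 3x finer grid

print(coarse.shape, "->", fine.shape)  # (4, 4) -> (12, 12): 9x the cells
# Every new value is a blend of the original 16 samples; the added
# "resolution" is interpolation, not measurement.
```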
Smokey says:
March 8, 2011 at 5:38 pm
“Where were you when Algore and Michael Mann were scaring the ignoratii with tales of runaway global warming? We could have used your help. Their alarming charts with a vertical line showing temperatures increasing exponentially were the cause of the taxpayers getting fleeced. ”
AlGore is a politician, so it is inevitable he would spout nonsense, but I would like to see a link to a quote of Michael Mann using the ‘runaway global warming’ meme in the sense of unlimited and accelerating warming from CO2, or temperatures increasing exponentially. I think you just attributed this to him when it was written by some media hack to sensationalize a story; it is NOT a scientific claim and never has been.
If you google ‘runaway global warming’ the vast majority of the time it is used by ‘skeptics’ and the shallow end of the media pool.
“And BTW, AGW was a hypothesis, not a theory. It’s never been a theory. Theories must be able to make consistent, validated predictions. ”
It was a hypothesis when proposed by Arrhenius and Fourier.
By the time of Callendar it was a theory, but with very little experimental support.
By the late 50s, when Revelle, Keeling and Plass had established the rising CO2, the radiative transfer functions and the inability of the oceans to stabilise CO2 levels in less than geologic timescales, it was a well-supported theory.
Nowadays it is a theory with strong support from direct observation, physical measurement and confirmed theoretical predictions.
Vidar wrote:
3. A re-analysis model is dependent on observations to be realistic. That is, more observations: the model will be kept on track; fewer observations: the model will tend to “live its own life” and, perhaps, move away from reality. You chose an area with exceptionally low abundance of observations…. Again, you chose an area where nobody would expect the model to do a good job.
But the fact that the model will “live its own life” is exactly the point of all this. 2010 (or 2009?) was supposedly one of the warmest years on record, but this was due to the model showing the Arctic to be exceptionally warm, in spite of the lack of actual observations, which necessitated relying instead upon the model results.
Given the obvious bias in the research community toward global warming being a reality (due, I’m convinced, to the massive infusion of our tax money as long as one toes the party line), I’m very much afraid that this was another case of the model, as you put it, living its own life. The problem is that the “own life” of the model is heavily determined by the modeler and his/her own perspective (see Hockey Stick for illustration).
It seems to me that temps in the summer, over land, should rise above the SST temps, due to the ocean’s mixing and its huge heat-sink capabilities, and that in summer the SAT over the ocean should have fallen below the SAT over Jan Mayen. But I don’t know. Just curious. Besides, I’m skeptical of numbers displayed on the order of tenths of a degree anyway. All of the numbers are probably suspect, including the data points on Jan Mayen. After all, that’s what the Surface Stations project illustrated. We can’t trust most of the data.
But what the hey, let’s devote another few $trillion to stopping a problem we can’t measure closely enough to even know if we have it, much less whether we’re fixing it.