The Best Test of Downscaling

Guest Post by Willis Eschenbach

In a recent issue of Science magazine there was a “Perspective” article entitled “Projecting regional change” (paywalled here). This is the opening:

Techniques to downscale global climate model (GCM) output and produce high-resolution climate change projections have emerged over the past two decades. GCM projections of future climate change, with typical resolutions of about 100 km, are now routinely downscaled to resolutions as high as hundreds of meters. Pressure to use these techniques to produce policy-relevant information is enormous. To prevent bad decisions, the climate science community must identify downscaling’s strengths and limitations and develop best practices. A starting point for this discussion is to acknowledge that downscaled climate signals arising from warming are more credible than those arising from circulation changes.

The concept behind downscaling is to take a coarsely resolved climate field and determine what the finer-scale structures in that field ought to be. In dynamical downscaling, GCM data are fed directly to regional models. Apart from their finer grids and regional domain, these models are similar to GCMs in that they solve Earth system equations directly with numerical techniques. Downscaling techniques also include statistical downscaling, in which empirical relationships are established between the GCM grid scale and finer scales of interest using some training data set. The relationships are then used to derive finer-scale fields from the GCM data.
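To pin down the statistical flavour described in that last paragraph, here is a minimal sketch (in Python; the synthetic numbers, the simple linear form, and every variable name are my own illustrative assumptions, not anything from the paper): fit an empirical relation between a coarse grid-cell value and a few finer-scale points over a training period, then apply that relation to new coarse output.

```python
# Minimal sketch of statistical downscaling: learn an empirical map from a
# coarse grid-cell value to several finer-scale values using training data,
# then apply it to new coarse-model output.  All data here are synthetic.
import numpy as np

rng = np.random.default_rng(0)

# "Training" period: coarse-cell means plus the 4 fine-scale points inside
# that cell (think stations), with some local structure and noise.
n_train = 200
coarse_train = 15 + 5 * rng.standard_normal(n_train)            # coarse-cell temperature
local_offsets = np.array([-1.5, -0.5, 0.8, 2.0])                 # fine-scale structure
fine_train = coarse_train[:, None] + local_offsets + 0.7 * rng.standard_normal((n_train, 4))

# Fit one linear relation per fine-scale point: fine = a * coarse + b.
coeffs = [np.polyfit(coarse_train, fine_train[:, k], 1) for k in range(4)]

# "Projection" period: apply the learned relations to new coarse GCM output.
coarse_future = np.array([16.0, 18.5, 21.0])
fine_future = np.array([[np.polyval(c, x) for c in coeffs] for x in coarse_future])

print(fine_future)   # 3 coarse values expanded into 3 x 4 "downscaled" values
```

Note that everything the fine-scale output contains comes either from the coarse input or from the training relationship; that point matters for what follows.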

So generally, “downscaling” is the process of using the output of a global-scale computer climate model as the input to another regional-scale computer model … can’t say that’s a good start, but that’s how they do it. Here’s the graph that accompanies the article:

[Figure: downscaling graphic from the Science magazine article]

In that article, the author talks about various issues that affect downscaling, and then starts out a new paragraph as follows (emphasis mine):

DOES IT MATTER? The appropriate test of downscaling’s relevance is not whether it alters paradigms of global climate science, but whether …

Whether what? My question for you, dear readers, is just what is the appropriate test of the relevance of any given downscaling of a climate model?

Bear in mind that as far as I know, there are no studies showing that downscaling actually works. And the author of the article acknowledges this, saying:

GARBAGE IN, GARBAGE OUT. Climate scientists doubt the quality of downscaled data because they are all too familiar with GCM biases, especially at regional scales. These biases may be substantial enough to nullify the credibility of downscaled data. For example, biases in certain features of atmospheric circulation are common in GCMs (4) and can be especially glaring at the regional scale.

So … what’s your guess as to what the author thinks is “the appropriate test” of downscaling?

Being a practical man and an aficionado of observational data, me, I’d say that on my planet the appropriate test of downscaling is to compare it to the actual observations, d’oh. I mean, how else would one test a model other than by comparing it to reality?
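For the concrete-minded, that test is a one-screen exercise; here’s a minimal sketch with invented placeholder numbers rather than anyone’s actual downscaled output or station records, just to show how little is involved:

```python
# Sketch of the obvious test: put the downscaled values and the observations
# at the same places/times side by side.  The numbers below are invented.
import numpy as np

downscaled = np.array([3.1, 2.4, 5.8, 0.9, 4.2])   # model: e.g. seasonal snowfall (m)
observed   = np.array([2.2, 2.5, 4.1, 1.5, 3.0])   # what actually fell at those sites

error = downscaled - observed
bias  = error.mean()                                # systematic over/under-prediction
rmse  = np.sqrt((error ** 2).mean())                # typical size of the miss
corr  = np.corrcoef(downscaled, observed)[0, 1]     # does it even track the pattern?

print(f"bias = {bias:+.2f}, RMSE = {rmse:.2f}, correlation = {corr:.2f}")
```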

But noooo … by the time we get to regional downscaling, we’re not on this Earth anymore. Instead, we’re deep into the bowels of ModelEarth. The study is of the ModelLakeEffectSnow around ModelLakeErie.

And as a result, here’s the actual quote from the article, the method that the author thinks is the proper test of the regional downscaling (emphasis mine):

The appropriate test of downscaling’s relevance is not whether it alters paradigms of global climate science, but whether it improves understanding of climate change in the region where it is applied.

You don’t check it against actual observations, you don’t look to see whether it is realistic … instead, you squint at it from across the room and you make a declaration as to whether it “improves understanding of climate change”???

Somewhere, Charles Lamb is weeping …

w.

PS—As is my custom, I ask that if you disagree with someone, QUOTE THE EXACT WORDS YOU DISAGREE WITH. I’m serious about this. Having threaded replies is not enough. Often people (including myself) post on the wrong thread. In other cases the thread has half a dozen comments and we don’t know which one is the subject. So please quote just what it is that you object to, so everyone can understand your objection.

188 Comments
catweazle666
Reply to  Leif Svalgaard
January 6, 2015 1:08 pm

Thanks, but no thanks.
Life’s to short etc.

catweazle666
Reply to  catweazle666
January 6, 2015 1:09 pm

That’s “too”.

Paul Seward
January 5, 2015 3:13 pm

How much money did this “scientist” receive for this drivel?

ducdorleans
Reply to  Paul Seward
January 6, 2015 3:01 am

http://www.nsf.gov/awards/award_visualization_noscript.jsp?org=EF&region=US-CA&instId=0013151000
thus: $358,775
well worth it !
(the number of the NSF grant is at the bottom of the article in Dr. Svalgaard’s link)

Peter Miller
January 5, 2015 3:19 pm

I haven’t a clue as to the reasoning behind the concept of downscaling; presumably the original author also thought the same.
Is this just more smoke and mirrors in ‘climate science’?

Don K
Reply to  Peter Miller
January 6, 2015 1:28 am

Peter, I should think that the value of downscaling (if it works) would be that it is one of the many factors that go into planning. For example, you and a few of your wealthy drinking buddies think that Forlorn Hope, Nevada would be a great place for a ritzy ski area. So you rustle up some financial commitments and do a business plan. Your ski area consultant identifies four possible locations in Gawdawful Gulch. Before you start the lengthy process of getting the BLM to permit your project, there are hundreds of things you need to know or guess at. e.g. location A is closer to the highway and has more water for snowmaking, but will it have reliable snow most Winters? Or should you go with location B or C higher up in elevation but with different potential problems.
I wouldn’t be at all surprised if downscaling — if it turns out to be workable — is useful and routine many decades from now. But we will need climate modeling that actually models climate — which the current models demonstrably do not.

Keith Willshaw
Reply to  Don K
January 6, 2015 2:22 am

Don K Said
“Before you start the lengthy process of getting the BLM to permit your project, there are hundreds of things you need to know or guess at. e.g. location A is closer to the highway and has more water for snowmaking, but will it have reliable snow most Winters? Or should you go with location B or C higher up in elevation but with different potential problems.”
This is true, however any sane person would base the decision on observed conditions at these 4 locations, not on some computer model of what conditions MIGHT be in an imaginary world.

Jimbo
Reply to  Don K
January 6, 2015 5:16 am

In the UK local councils were told to expect warmer winters. The Met Office, using its soooper duper computers, told them so on a downscaled level. Observed reality meant they ran out of grit for roads, leading to massive inconvenience and deaths.

Guardian – 7 January 2010
Snow clearance hampered as UK grit supplies run low
• Low salt reserves mean councils are forced to grit more thinly
One person killed after being struck by lorry

The Met Office in its infinite wisdom decided to abandon its downscaling for public consumption.

catweazle666
Reply to  Don K
January 6, 2015 1:10 pm

“But we will need climate modeling that actually models climate “
So that’s “never” then.

SMC
January 5, 2015 3:23 pm

“A starting point for this discussion is to acknowledge that downscaled climate signals arising from warming are more credible than those arising from circulation changes.”
Maybe I misunderstand what the author is saying, but it seems to me the study is biased toward warming from the get-go. The rest just seems like a method to generate ‘evidence’ to support the supposedly more credible climate signals arising from warming.

Alx
Reply to  SMC
January 5, 2015 6:11 pm

It looks like another example of circular reasoning. You assume climate change and then not surprisingly all your test results are the effects of climate change. But that is just one of the problems with this fiasco.

Tim
Reply to  Alx
January 6, 2015 5:20 am

Einstein advised “We cannot solve our problems with the same level of thinking that created them”
– but I guess they know better.

MJB
Reply to  SMC
January 6, 2015 6:09 am

Although worded awkwardly, I think what they mean to say is there is greater uncertainty in downscaling local precipitation patterns (for example) than local temperature patterns. A reasonable and welcome admission. In my view there is too much uncertainty in both. Downscaling is a worthy problem to tackle, but seems premature to report results.

timothy sorenson
January 5, 2015 3:29 pm

I want to participate in an Art Contest where I get to include in my submission: “The appropriate judging criteria for my project are the origin of the materials used to make the project and my race.”
Bet that would go over well.

Ian H
Reply to  timothy sorenson
January 5, 2015 6:14 pm

Actually it probably would. The way they judge art contests is crazy.

Editor
January 5, 2015 3:29 pm

Whether it works. That was obviously what the article was about to say, or equivalent. Except that, if it had, there would have been no Willis article.

jmichna
Reply to  Mike Jonas
January 5, 2015 3:34 pm

From the paper:
“DOES IT MATTER? The appropriate test of downscaling’s relevance is not whether it alters paradigms of global climate science, but whether it improves understanding of climate change in the region where it is applied. The snowfall example above meets that test. In many places, such fine spatial structures have important implications for climate change adaptation. In the urban areas of the United States and Canada most affected by lake effect snow, infrastructure and water resource planning must proceed very differently if lake effect snow is not projected to decrease significantly….”

Editor
Reply to  jmichna
January 5, 2015 5:05 pm

So they intend to use the downscaling BEFORE they determine whether it works.

Reply to  jmichna
January 5, 2015 8:00 pm

The author is merely trolling for grant money to study “Downscaling.”

Steve Keohane
January 5, 2015 3:30 pm

Maybe I have it wrong, but if B is A on a local scale, I have trouble seeing B as a derivative of A. It doesn’t seem that the average of B over 5 zones, as A is, would look like A.

Editor
January 5, 2015 3:30 pm

Willis writes: “Somewhere, Charles Lamb is weeping …”
And someone, me, is laughing at the absurdity of climate modelers.

Robert B
Reply to  Bob Tisdale
January 5, 2015 6:24 pm

“Climate scientists, I suppose, were children once.” ?
Excuse my ignorance. Hubert, or have I missed something?

xyzzy11
Reply to  Bob Tisdale
January 5, 2015 9:59 pm

Bob Tisdale January 5, 2015 at 3:30 pm
Willis writes: “Somewhere, Charles Lamb is weeping …”
And someone, me, is laughing at the absurdity of climate modelers.

Me too. I’ve worked with computer models in a number of environments (modelling chemical processes and chemical plant design). It’s absurdly hard to even come close to reality in a single pass. Engineers use models to outline or scope a problem, not design a bridge or building. Climate models are not even inadequate.
Even if the modelers knew ALL of the variables involved in describing climate (which they don’t) and even if they knew the variables to within (say) 99%, the models would be wildly inaccurate after several passes, let alone after sufficient passes to project the next 100 years. The errors simply accumulate too fast and overwhelm the result.
Pointman wrote a great piece a few years ago (https://thepointman.wordpress.com/2011/01/21/the-seductiveness-of-models/) which just about covers it.

george e. smith
January 5, 2015 3:31 pm

My suspicion of the concept of “downscaling” says that if you have a model of a system sampled at some large scale intervals, then you are going to interpolate to find HIGHER FREQUENCY values for intermediate points. So you take the climate for the SF Bay region, and you interpolate the climate for Los Gatos, or Emeryville from that.
Something tells me that this is a huge violation of the Nyquist sampling theorem.
Given that the telephone system works, and that it is entirely dependent on the validity IN PRACTICE of the Nyquist theorem, in that both time and frequency multiplexed bandwidth and capacity considerations are pushed to the max; I’m not a subscriber to interpolation of higher frequency out of band “data”.
Sorry G

Reply to  george e. smith
January 5, 2015 3:54 pm

exactly – you’ll get pixelated bs
“The appropriate test of downscaling’s relevance is not whether it alters paradigms of global climate science . . .” what?
How about rain, snow, temperature . . . something real and measurable?

Gary Pearse
Reply to  Bubba Cow
January 5, 2015 5:20 pm

No No, Ya see, once you have downscaled all this fine detail locally, you can integrate it back and improve the global GCM! sarc off/

Robert B
Reply to  Bubba Cow
January 5, 2015 6:03 pm

Pixelated spaghetti.

Reply to  george e. smith
January 5, 2015 3:59 pm

This is almost the inverse of Nyquist sampling (unless they are out collecting new data for grids every couple of hundred meters… ha) since they’re creating dozens of modeled cells from the output of a single modeled cell. I learned that you oversampled a minimum of 2x for temporal (frequency) targets and planar spatial targets by at least 4 times your desired final resolution. I guess they’ve figured that since they can make up temperature data for a third of the “stations” in the US it’s acceptable to create a completely ethereal 2nd generation out of a 1st generation which has also lost all roots to the realm of reality.

Reply to  nielszoo
January 5, 2015 4:07 pm

precisely – “make up temperature” It is not as if any physical phenomenon is going to be sampled.

Bernie Hutchins
Reply to  george e. smith
January 5, 2015 4:26 pm

George – Exactly
Interpolation gives you NO NEW information. If I have samples every second I can interpolate 9 additional samples in between according to some model of the signal such as bandlimited (Nyquist) or polynomial fitting (like a straight line – a first-order polynomial). Or my model could be that all nine samples in between are defined to be 13.33. The model IS (at best) separate information, and it could be trivial, or complicated and likely wrong, or just useless.
Bandlimited interpolation works just dandy for things like digital audio (so-called “oversampling”). But that’s because we have good reason to believe the model (low-pass) is correct for music (our ears). Getting to the comment Willis made about checking the interpolated result against the real world – well – it SOUNDS great. That’s the test all right! We have to know any proposed model is reasonably based. Until proven, assume it’s quite likely bogus.
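A minimal sketch of that point, with a synthetic signal of my own choosing: the same coarse samples run through three different interpolation “models” give three different sets of in-between values, and the differences come entirely from the choice of model, not from any new measurement.

```python
# Sketch: the same coarse samples interpolated under three different "models"
# give three different sets of intermediate values.  None of them adds any
# new information about what the real signal did between the samples.
import numpy as np

t_coarse = np.arange(0, 10)                     # one sample every second
x_coarse = np.sin(0.7 * t_coarse)               # whatever was measured

t_fine = np.linspace(0, 9, 91)                  # 9 extra points between each pair

linear = np.interp(t_fine, t_coarse, x_coarse)                   # straight-line model
poly   = np.polyval(np.polyfit(t_coarse, x_coarse, 5), t_fine)   # 5th-order polynomial model
filled = np.full_like(t_fine, 13.33)                             # the "all 13.33" model

# The reconstructions disagree between the original samples; the disagreement
# is the choice of model, not data.
print(f"max difference, linear vs polynomial: {np.max(np.abs(linear - poly)):.3f}")
```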

Reply to  Bernie Hutchins
January 5, 2015 5:49 pm

Agreed and I call foul on this. The big GCM downscaled (is that even a real world word?) to a regional model (is there one?) by interpolation (where, you are right, there is NO NEW information) produced NEW DATA = paint by numbers OR this is a fiction, by another name, Fraud. So, we are to believe best guess interpolation of shades of blue = white = snow. How many feet?

Bernie Hutchins
Reply to  Bernie Hutchins
January 5, 2015 6:35 pm

Bubba Cow –
No I was not familiar with the term “downscaling”. It is (as I look) used in a previous reference (Kerr 2011) in Science, so I guess it is OK. Many would say “upsampling” (a higher sampling rate). That’s interpolation. Sampling rate (frequency) and sampling intervals (spacing between samples or “scale” I guess) are reciprocals.
Give someone the data (data fixed), and a model for what the data supposedly fits (presumably a choice) and everyone gets the same interpolated data, over and over. Choose a different model and you get a DIFFERENT interpolation, but nothing new in the sense that everyone won’t get the same intermediates. Compare the intermediates to the real world, and you will have SOME possibility of evaluating a model as being valid, or not. Matching the real world is the test of course.

Reply to  Bernie Hutchins
January 5, 2015 8:03 pm

Bernie –
Thanks for your reply. Because I used a naughty word, I was in moderation for a bit. Rightfully so.
Upsampling?
I am unfamiliar with this. I am familiar with selecting sampling rate based upon unfolding events and frequency of need to know changes in those events with increases in rates to assure you don’t miss something of value, properly treated and within resolutions of sensors, DAQ etc. . .
This does not seem to “fit” anywhere in empirical study – which I was protesting. If “downscaling” is supposed to reveal information that was never captured, it is a hoax. It might actually exist, but it has not been observed. I try to be fair.
I agree with information needing to fit reality. I am questioning that this is anywhere near reality.
Best

Bernie Hutchins
Reply to  Bernie Hutchins
January 5, 2015 10:00 pm

Bubba Cow –
Since you are familiar with ordinary sampling, this analogy might help. Sorry it’s a bit long. First of all, you know from ordinary sampling that you can fully recover a properly sampled analog signal, and you could have sampled it at a higher than minimum rate. Interpolation or upsampling is accordingly a DIRECT digital-to-digital way of going to the higher rate WITHOUT returning to analog as an intermediate step. Every CD player (for example) does this in real time. But more simply, it is pretty much ordinary interpolation of intermediate values.
The upsampling or interpolation (downscaling in the climate paper) is analogous to increasing the number of pixels in an image. It IS intuitive, as you suggest, that you don’t get MORE resolution. If the model for the image is low-pass (bandlimited) you get a “blurry” image. You get rid of the sharp pixel edges, making the image more comfortable to view (and bigger) but don’t see anything new. If your picture had a distant person with a face with an unresolved nose, you don’t interpolate the nose. Well, not generally. But if your image “model” was not strictly smooth, but had very sophisticated routines to identify human faces, it might recognize that the person should have a nose, and on that assumption, figure out how a nose WOULD have been blurred perhaps from a black dot into 9 pixels, and do a credible (deconvolution) reconstruct. Pretty much artificial intelligence.
In the case of climate, we might start with a very simple model that heat in the tropics moved to the poles. If we “downscaled” this biggest view, we might then consider air masses over individual oceans and continents. Then downscaling again and again, air masses over unfrozen lakes, that sort of thing. We might even look for tornados, as we might have looked for noses! The output at the lower resolution becomes input to the higher resolution. Makes sense. But you have to know how to do it right, and they almost certainly don’t (can’t).
In signal processing there is a major field of “multi-rate” or “multi-resolution” signal processing (filter banks – an FFT being a perhaps overly-simple filter bank). Good stuff. Systematically and efficiently take the signal apart. But if you cross over a channel, you are wiped out! You don’t make this error in engineering practice – only as an educational exercise!
By “crossing over a channel” I am thinking about something like what the Science paper here discusses as “bias” – like erroneously shifting the jet stream. They say – don’t have bias. Easy to say. Many of us would suggest not trying to take short cuts, and pretending to know more than we possibly can.
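A minimal sketch of the pixel analogy above, with an arbitrary made-up 4×4 “image”: blow it up by replication and by smooth interpolation, and both 16×16 results are manufactured entirely from the original sixteen values; no nose appears.

```python
# Sketch of the pixel analogy: blow a coarse 4x4 "image" up to 16x16 two ways.
# Replication keeps the blocky pixels; cubic interpolation blurs the edges.
# Neither recovers detail (the "nose") that the coarse image never had.
import numpy as np
from scipy.ndimage import zoom

coarse = np.array([[0, 0, 1, 1],
                   [0, 0, 1, 1],
                   [1, 1, 0, 0],
                   [1, 1, 0, 0]], dtype=float)

replicated = np.kron(coarse, np.ones((4, 4)))   # nearest-neighbour: same blocks, more pixels
smoothed   = zoom(coarse, 4, order=3)           # cubic interpolation: softer edges, nothing new

print(replicated.shape, smoothed.shape)         # both (16, 16), both built from 16 numbers
```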

Reply to  Bernie Hutchins
January 5, 2015 10:19 pm

Actually, the brain does a pretty good job at ‘creating more information’. We all have a ‘blind spot’ in our eyes where the optical nerves leave the eye. We don’t see the blind spot because the brain downscales what we actually do see and fills in with what it believes we would see if there were no blind spot.

Reply to  Bernie Hutchins
January 6, 2015 3:59 am

Good Morning, Bernie
I appreciate your explanation and can understand how such systems and techniques might/do work in EE and music. Slept on it, in terms of any usefulness to predicting weather (much less climate) and had to feed the wood stove here last night – well below 0F and good wind chill. Good for our carbon footprint.
Also appreciate Lsvalgaard’s input and I understand how human sensory systems can interpret and even fill in the blanks transitioning between notes and “compute” pattern recognitions. But here we have a receiver – human sensory apparatus – that has immeasurable experience and, as you say, perhaps AI, and generally unfathomable computing capacity. Complex interacting systems. Ever wonder just how you can recognize someone walking in the visual and somewhat obscure distance by cognitively differentiating his/her gait?
We actually tried to model something like this decades ago as grad students with lots of instruments, some time on our hands, and as a purely educational exercise. We had a reasonable gob of empirical data regarding human movement kinematics and kinetics, decent computing power (didn’t care if the model needed to run all night), and the maths were pretty well known. We just wanted to find out if we could identify an envelope of reasonable performance outcomes and what might happen if we pushed an input – say increase velocity a bit. As a colleague from New Zealand decided = just wanking.
There is just too much variability both between and, more importantly, within subjects in that environment and we were able to control some stuff. Too many ways to get from here to there and possibilities for correcting/adjusting errant trajectories . . .dynamical systems that wouldn’t fall for our programming tricks.
Of course we got some envelopes – relevant around known events – but we gave up on using any of that toward generating useful predictive results over realizing we’ll just have to go collect more data.
And that is what concerns me here. We were just wanking and not trying to set energy policy and taxation. Doesn’t matter if one now has a super computer to throw at it. Really believe we need to direct funding toward study and learning, rather than fabricating some fantastical future prospect that has so far eluded reality.
Cheers and hope this lands somewhere close to the proper place in the thread.

Gavin
Reply to  Bernie Hutchins
January 6, 2015 5:28 am

The ‘optical data infilling’ Leif Svalgaard refers to is often cited as a cause for “Sorry mate, I didn’t see you” accidents where car drivers pull into the path of oncoming motorcycles, often with fatal results. I believe Police drivers are taught techniques to minimise the effect.

Reply to  Willis Eschenbach
January 6, 2015 9:45 am

The brain has a ‘model’ [albeit a primitive one] of the surroundings and uses that to downscale [fill in]. The principle is the same as for the [useful] downscaling: use a model to fill in the coarser data.

Owen in GA
Reply to  Bernie Hutchins
January 6, 2015 10:08 am

Before we downscale the model outputs, shouldn’t we first validate that the individual model actually matches at the larger scale? No amount of AI can make a gourmet meal out of the landfill. The models ALL RUN HOT! Downscaling to local environments will not fix the basic problem that the GCM’s are models of a fantasy world that kinda sorta looks like Earth if you squint real hard and click your heels together three times and say: “It really is Earth, It Really Is Earth, IT REALLY IS EARTH!”
When the physical theory is well known, you can interpolate with some small bit of confidence. The problem is geology, geography and weather have this tendency toward fractal distributions at all scales that makes things difficult to interpolate.

Bernie Hutchins
Reply to  Bernie Hutchins
January 6, 2015 10:51 am

Willis, at Jan 5, 2015 10:19AM you use the term “downsampling”. I fear this is exactly the wrong term – it should be upsampling (interpolation). The paper uses the term “downscaling” which was not familiar to me, but must mean interpolation: the establishment of intermediate locations between existing points and GUESSING what goes there (sometimes successfully).
And to your main point, we test interpolators by starting with a known denser set, throwing away data, and seeing if our interpolation procedure reasonably retrieves what we tossed out. To the extent it fails, we call the residual an “error”. That’s the same as “downscaling” and going to the location to take measurements – just what you said.
But even a small interpolation error might mean almost nothing with regard to adding to understanding. An audio signal, for example, is essentially horizontal (restricted amplitude but broad in time – Fourier components) but CAN be interpolated using a polynomial model (simplest example – a first-order polynomial or straight line). Yet the polynomial is a lousy, misleading model for audio because it is vertical – running to infinity outside a very small local time interval. It works by accident. Well – of course!
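That test is a few lines to sketch (synthetic data, a made-up signal of my own): keep a dense record as ground truth, throw away 19 of every 20 samples, interpolate from the survivors, and measure the residual against what was tossed out.

```python
# Sketch of the test described above: start with a dense record, throw most
# of it away, interpolate from what's left, and measure how badly the
# interpolation misses the points that were deliberately discarded.
import numpy as np

t_dense = np.linspace(0, 20, 401)
x_dense = np.sin(t_dense) + 0.3 * np.sin(3.7 * t_dense)      # the "known denser set"

t_kept, x_kept = t_dense[::20], x_dense[::20]                # keep every 20th sample

x_rebuilt = np.interp(t_dense, t_kept, x_kept)               # interpolate from the survivors

residual = x_rebuilt - x_dense                               # error at the tossed-out points
print(f"RMS interpolation error: {np.sqrt(np.mean(residual ** 2)):.3f}")
```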

george e. smith
Reply to  Bernie Hutchins
January 6, 2015 12:34 pm

“””””…..
lsvalgaard
January 5, 2015 at 10:19 pm
Actually, the brain does a pretty good job at ‘creating more information’. We all have a ‘blind spot’ in our eyes where the optical nerves leave the eye. We don’t see the blind spot because the brain downscales what we actually do see and fills in with what it believes we would see if there were no blind spot……”””””
Well the brain / eye does fill in as you say; but it really doesn’t “make stuff up”.
The answer is scanning.
You think you are looking in the same place, but your eye is actually scanning so it moves the optic nerve blind spot around, so it really does see everything, and then it concocts a composite of multiple slightly different scanned imaged to eliminate the hole. The eye and the brain conspire together to create this illusion, that you are looking at a full picture with no holes in it.
And they also fake the colors as well, because some sensors in the eye see good detail, but not colors, and vice versa. So once again, a little bit of dithering and collectively they fake the whole of what you think you are seeing.

george e. smith
Reply to  george e. smith
January 6, 2015 2:01 pm

If the original (band limited) climate map, is correctly sampled, a la Nyquist, then sampling theory insists that the original CONTINUOUS function is completely recoverable from just those samples, with no loss of information.
In other words, the samples contain ALL of the necessary information to compute the exact accurate value of that continuous function anywhere and everywhere on the map.
Ergo, interpolation is a perfectly valid process to recover the function value at intermediate points.
The problem is that this creates no NEW information whatsoever. But at least it does tell you what the exact value of the function is at “that point.”.
But these authors seem to imply that they create new intermediate point values, that are not just these interpolations, and by definition, these “created values” are necessarily outside the passband of the original band limited signal.
So they are phony and fictitious “information”.
Nyquist violations are serious. If you simply under-sample by a factor of two, so the sampling interval is equal to the wavelength of the band limit signal frequency, instead of being half of that period, then the aliased recovered signal has a spectrum that folds over about (B), and then extends all the way to zero frequency.
Now the zero frequency signal is simply the average of all the values.
So if you under-sample by just a factor of two the reconstructed “signal” contains aliasing noise at zero frequency, so the average value of the function is corrupted.
Climatists seem to merrily assume that at least the average of their grossly under-sampled data points is valid. It isn’t. And apparently global weather stations report either a daily max / min Temperature pair, or simply twice a day. That barely conforms to Nyquist, if, and only if, the actual daily cycle is a pure sinusoid, which it isn’t. A fast-rise / slower-fall, more sawtooth-like daily temperature heating profile contains at least a second harmonic component, which is at the sampling rate, not the Nyquist rate, so you already have average value aliasing noise.
And when it comes to the spatial sampling of these Temperature maps, they aren’t even close to Nyquist sanitary.
And the key parameter is the MAX time between samples; NOT the average frequency of the samples.
So putting all your stations in the USA, and a handful everywhere else just does not cut it. It is the maximum sample spacing, which must be less than a half period of the band limit frequency.
Random sampling works great on your digital oscilloscope when looking at a repetitive signal; it saves you the vertical delay line for one thing, but it doesn’t work for a single transient event, which is what this global temperature mapping exercise is all about.
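A minimal sketch of just the twice-a-day part of that argument, with a made-up daily temperature curve: give the daily cycle a second-harmonic component and the two-sample “daily mean” picks up an error that depends entirely on when the two samples are taken, exactly because that harmonic aliases down to zero frequency.

```python
# Sketch: a daily temperature cycle with a second harmonic (fast warm-up,
# slower cool-down), sampled only twice a day.  The two-sample "mean" is
# biased by an amount that depends on the observation time, because the
# second harmonic aliases straight down to DC.
import numpy as np

hours = np.arange(0, 24, 0.01)                  # one full day, no duplicated endpoint
true_mean = 10.0
daily = true_mean + 8 * np.cos(2 * np.pi * (hours - 15) / 24) \
                  + 2 * np.cos(4 * np.pi * (hours - 9) / 24)   # second-harmonic term

print(f"true daily mean: {daily.mean():.2f}")

for t0 in (0, 3, 6, 9):                         # try different observation times
    samples = np.interp([t0, t0 + 12.0], hours, daily)
    print(f"samples at {t0:02d}h and {t0 + 12:02d}h -> 'mean' = {samples.mean():.2f}")
```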

Bernie Hutchins
Reply to  george e. smith
January 6, 2015 4:53 pm

George – good review, but I would mention a few points to clarify.
Specifically, what is the independent variable of the sampling? Climate data could be, true enough, a time sequence, and the climate “signal” is likely sufficiently bandlimited in that sense. (A temperature variation with time is probably, climate-wise, gradual enough.) We can use the Nyquist (Shannon) sampling ideas and sinc (low-pass) interpolation to reconstruct. Old reliable stuff. Things like musical signals may be naturally bandlimited because of mechanical inertia (limiting rates of change).
But I think for the Science paper, the independent variable is space, at least in two dimensions, and perhaps three for best results. The exact SAME math applies of course, but are the signals bandlimited in space? In places, sure. But the situation may be more like an image than an analog time signal. In an image, we CAN have an instantaneous change (pure white touching pure black), and a proper analog pre-sampling filter evades our resources in such a case.
In consequence, we could only rely, if at all, on climate parameters that vary gradually, in the interpolated analysis, on a scale the same as the ORIGINAL spatial sampling interval – no faster. In terms of spatial frequencies, the climate parameter was fair-sampled at the original rate, and you have just interpolated to many “extra” (over)-samples. You of course have NOT (should not have!) increased the spatial bandwidth by raising the cutoff of the interpolation filter. Here the concern is NOT an anti-aliasing effort, but the proper placement of the reconstruction (interpolation) filter – the OTHER half of the problem.
Speaking of lake-effect snow, I have friends nearby in the Buffalo area who rarely get lake effect, while lucky folks just 5-10 miles to the south get 6 feet. Climate may have some sharp edges that can’t be interpolated. You can’t interpolate a 10 mile transition from 100 mile samples.
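A minimal sketch of that last point, with invented numbers: plant a narrow snow band in a 1 km resolution profile, sample it every 100 km, and no interpolation of those samples brings the band back.

```python
# Sketch: a narrow (~15 km) heavy-snow band simply is not present in samples
# taken every 100 km, and interpolating those samples cannot put it back.
import numpy as np

x = np.arange(0, 500, 1.0)                              # distance in km, 1 km resolution
snow = 0.3 + 1.7 * np.exp(-((x - 237) / 8.0) ** 2)       # narrow heavy-snow band near km 237

x_coarse = np.arange(0, 500, 100.0)                      # "GCM-scale" samples every 100 km
snow_coarse = np.interp(x_coarse, x, snow)               # what the coarse grid sees

snow_rebuilt = np.interp(x, x_coarse, snow_coarse)       # "downscale" by interpolation

print(f"real peak: {snow.max():.2f} m, rebuilt peak: {snow_rebuilt.max():.2f} m")
```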

george e. smith
Reply to  george e. smith
January 8, 2015 12:21 pm

Bernie,
Most of the sampled data systems that are well established, are time domain samples of a continuous (in time) analog signal. But sampling is not restricted to sampling over single independent variables.
And in the case of “global Temperature mapping” , we have both spatial sampling and Temporal sampling.
The latter is often just max / min thermometer readings, so it is twice a day temporal sampling.
Spatial sampling seems to be any unoccupied rock where they can put a spare Stevenson Screen or equivalent that they have sitting around. The spatial sampling is Rube Goldberg in the extreme, when it comes to surface data. The satellite data sets presumably have a more organized sampling stratagem.
But the hairs on my neck stand up on goose bumps, when I realize that nobody pays attention in the surface case, to exactly when the samples are taken. Perish the thought that anyone would bother to sample all stations at exactly the same time.
You mean you really want to see the global Temperature map at a specific instant in time ???
Well you get the picture.
I presume that we have both spatial and Temporal sampling of almost any variable that is relevant to global weather / climate.
G

Bernie Hutchins
Reply to  george e. smith
January 8, 2015 3:56 pm

George –
I think that your comments involve what has been discussed here as TOB (Time of Observation Bias). Even when you don’t care about the actual variation during any one day, daily measurements in search of year-long seasonal trends would have serious errors if you measured a daily temperature at 7AM one day and at 3PM the next. At minimum, you should probably take a daily temp at the exact same time each day. Also, as I recall there are problems recording daily max or min for a fixed 24-hour frame. If the temp happens to have a max or min close to the transition from one day’s series to the next day’s, there is a good chance of getting this same (actual) extreme for both days.
But these problems are fairly easy to understand, and probably to fix – and to defend the adjustments. Unlike so many “adjustments” where you decide which way to adjust (what you want the outcome to be), and then look for some excuse! Do people do that?

January 5, 2015 3:32 pm

Snow fall meets the test clearly.
So do heat waves.

Gary Pearse
Reply to  Steven Mosher
January 5, 2015 5:25 pm

Are you saying we will discover there is such a thing as lake effect snowfall? It happens every year for a very local reason that GCM derivatives are not going to see. Wisconsin’s or Ohio’s weather men are a better bet than the models. How can models that don’t work give details of local climate, which is what it is?

Reply to  Steven Mosher
January 5, 2015 5:54 pm

QED.

Editor
Reply to  Steven Mosher
January 5, 2015 8:07 pm

OK, Steven, if there has been a test, where can it be viewed?

Stephen Ricahrds
Reply to  Steven Mosher
January 6, 2015 1:52 am

My car had its bi-annual test recently. All the lights worked (looked pretty) and the dashboard units worked, but no brakes, no engine and no fuel.
So do some things.

Reply to  Steven Mosher
January 6, 2015 10:34 am

Snow fall meets the test clearly.

Not necessarily. The example they used is the path of the jet stream not being well determined; the jet stream controls what kind of weather I get: north, I get warm moist air; south, I get cool dry air. And if the area bounded by the jet stream changes, that alone could change average surface temps.

January 5, 2015 3:34 pm

Policy-based and funded science (progressive enlightened liberalism) is more “predictive” than classic/traditional science?

Rick K
January 5, 2015 3:36 pm

With downscaling I just discovered the temperature on my birthday in late August for my geographical location will be 86.477 degrees F; winds a mild 5.328 mph out of the WNW; humidity a pleasant 48% with a generally clear sky with only 10 percent cirrus coverage at 19,764 feet.
Nice!

Babsy
Reply to  Rick K
January 5, 2015 3:56 pm

You lucky dog!

Rick K
Reply to  Babsy
January 5, 2015 5:30 pm

Babsy, you’re invited! Dress accordingly! No chance of rain!
And bring Bob Tisdale with you — we’re gonna roast hot dogs and climate models!

CodeTech
Reply to  Rick K
January 5, 2015 5:36 pm

Is that 2015? My birthday in November 2033 will be -5.327C, and windy with gusts to 38.884 kph… 🙁

Rick K
Reply to  CodeTech
January 5, 2015 6:23 pm

Bummer, Code! I’d cancel it now if I were you…
Maybe something indoors would be advisable. What?!? No heat?!?
Tell me again what Century this is. It sure feels like the 14th…

Reply to  Rick K
January 5, 2015 8:35 pm

But, have you checked Al Gore’s travel calendar?
That should be an input variable into any Regional model.

Luke Warmist
January 5, 2015 3:36 pm

” A starting point for this discussion is to acknowledge that downscaled climate signals arising from warming are more credible than those arising from circulation changes.”
…tells me everything I need to know right at the beginning.
Good article Willis. Thanks.

Curious George
Reply to  Luke Warmist
January 5, 2015 5:10 pm

I missed that piece of science. Thanks.

Danny Thomas
Reply to  Curious George
January 5, 2015 6:02 pm

Curious and Luke and/or Willis,
I’d copied that in preparation to paste, all the while wondering if they answered “why”. “Why not cooling?” as a choice of words also crossed my mind, if one doesn’t presume some sort of predetermined orientation.
Willis, was there discussion as to the “why”? Just curious, as it seems a bit moot anyhow.

Konrad.
January 5, 2015 3:39 pm

“Being a practical man and an aficionado of observational data, me, I’d say that on my planet the appropriate test of downscaling is to compare it to the actual observations, d’oh. I mean, how else would one test a model other than by comparing it to reality?”

Comparing to reality would indeed be a logical approach, but we are dealing with climastrology. I would recommend the engineering approach – do a simple sanity check before detailed analysis.
Ask – “Does “downscaling” now allow the models to run CFD (computational fluid dynamics) in the vertical dimension? Or are vertical non-radiative energy transports still parametrised, not computed?”
The answer is clear, “downscaling” GCMs fails the most basic sanity check. Increasing horizontal resolution is worthless without the addition of CFD in the vertical, something totally lacking from current GCMs.
The bottom line is that our radiatively cooled atmosphere is cooling our solar heated oceans. An atmosphere without radiative cooling can’t do that. Any climate model that shows the atmosphere slowing the cooling of the oceans will always fail miserably.

January 5, 2015 3:41 pm

I can’t take any more of these computer models passing as science.

Ernest Bush
Reply to  Bobby Davis
January 5, 2015 5:53 pm

They need to be taken to the historical trash heap of really bad ideas.

Reply to  Bobby Davis
January 5, 2015 8:48 pm

In the author’s circle, this is taken seriously. Quite sad.
They cannot see, from the outside looking in, how silly they look using GCMs as inputs and then saying, “climate signals arising from warming are more credible than those arising from circulation changes.”

Here’s one for the climastrologists:
“Let’s just stipulate that in modeling the number of angels that can fit onto the head of a pin, those outputs arising from smaller angel feet are more credible than those arising from larger pinheads.”

And even sadder is that Science magazine and the AAAS has kowtowed to Climate Change political correctness and lost its way.

dp
January 5, 2015 3:49 pm

The concept behind downscaling is to take a coarsely resolved climate field and determine what the finer-scale structures in that field ought to be. In dynamical downscaling, GCM data are fed directly to regional models.

This says you get the results you want (“ought to be” where “ought” is informed by divine guidance ahead of the test) by using inaccurate information (results from submitting GCM data to defective regional models). What’s not to like?

January 5, 2015 3:49 pm

Can they downscale their model fine enough to project the weather over our house for the next 50-100 years? I am willing to pay my fair share in Teraflops.
Sorry, I pay that already, electricity this year is 10% up thanks to the certificates for “green” renewables…

Tim
Reply to  Ferdinand Engelbeen
January 6, 2015 6:00 am

Why the emphasis on micro-modelling weather patterns when the propaganda meme is: “weather is not climate”?
It’s the equivalent of counting Polar Bears per iceberg.

Joe Civis
January 5, 2015 3:51 pm

“SIGNAL IN, SIGNAL OUT. Fortunately, there are regional scale anthropogenic signals in the GCMs that are not contaminated by regional biases.”
so they are just contaminated by the GCM, IPCC and modeler biases??? just wow….

January 5, 2015 3:52 pm

“Somewhere Charles Lamb is weeping …” And somewhere Edward Lorenz can’t stop chuckling.

Alan Robertson
Reply to  Thomas
January 5, 2015 4:09 pm

+1

Ian W
Reply to  Thomas
January 6, 2015 4:45 am

Perhaps someone should send the author the reference to Lorenz’s chaos paper (my bold)

Abstract
Finite systems of deterministic ordinary nonlinear differential equations may be designed to represent forced dissipative hydrodynamic flow. Solutions of these equations can be identified with trajectories in phase space. For those systems with bounded solutions, it is found that nonperiodic solutions are ordinarily unstable with respect to small modifications, so that slightly differing initial states can evolve into considerably different states. Systems with bounded solutions are shown to possess bounded numerical solutions.

A simple system representing cellular convection is solved numerically. All of the solutions are found to be unstable, and almost all of them are nonperiodic.
The feasibility of very-long-range weather prediction is examined in the light of these results.

http://journals.ametsoc.org/doi/abs/10.1175/1520-0469%281963%29020%3C0130%3ADNF%3E2.0.CO%3B2
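For anyone who wants to see the abstract’s point with their own eyes, here is a minimal sketch of Lorenz’s 1963 system (the standard parameters σ = 10, ρ = 28, β = 8/3), integrated with a crude fixed-step Euler method from two initial states that differ by one part in a billion; the run is purely illustrative, not any kind of climate calculation.

```python
# Sketch: integrate Lorenz's 1963 system from two initial states differing by
# one part in a billion and watch the trajectories part company.
import numpy as np

def lorenz_step(state, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz (1963) equations."""
    x, y, z = state
    return state + dt * np.array([sigma * (y - x),
                                  x * (rho - z) - y,
                                  x * y - beta * z])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-9, 0.0, 0.0])              # "slightly differing initial states"

for step in range(40001):
    if step % 10000 == 0:
        print(f"t = {step * 0.001:5.1f}   separation = {np.linalg.norm(a - b):.3e}")
    a, b = lorenz_step(a), lorenz_step(b)
```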

January 5, 2015 3:53 pm

It sounds like “downscaling” is really weather forecasting with a new name, and we all know how accurate that is. (Sorry, Anthony)

knr
January 5, 2015 3:55 pm

Makes sense. When your livelihood depends on ‘models’, then of course you’re going to claim that any issues with models can be fixed with more models, and that is all you need to do to check the models.
No pay cheques for the authors in saying the models don’t work, and no future career either in the modelling area once you’ve admitted just how rubbish they are.

January 5, 2015 3:59 pm

Model output (or interpolations) are NOT data.

Reply to  Slywolfe
January 6, 2015 6:39 am

Arguably, in climate science, model inputs are not data either. Once data are “adjusted”, they are no longer data, but merely estimates of what the data might have been if they had been collected timely from properly selected, calibrated, sited and installed instruments.
I have been chuckling at the commenters above “playing” with insignificant digits. The entire concept of estimates to two “significant” digits greater than the actual data escapes me.

Curious George
January 5, 2015 4:01 pm

I consider the downscaling a 100% success. Anything – I mean anything – can only improve our understanding of climate change.

Reply to  Curious George
January 5, 2015 4:12 pm

Me too. Down scale this immediately. Bridge out.

Kasuha
January 5, 2015 4:02 pm

“The appropriate test of downscaling’s relevance is not whether it alters paradigms of global climate science, but whether it improves understanding of climate change in the region where it is applied.”
I believe it is meant as a stronger condition than just comparing the result with observations.
For example, let’s assume we had very heavy snow last year. If our downscaled model predicted no snow for that period, it is not helping any understanding. So not only must it match observations, but it must also provide answers as to why that happened.
Of course, garbage in, garbage out. It all relies on GCMs providing realistic forecast on global scale as well.

Alan Robertson
January 5, 2015 4:02 pm

That ‘Downscaling Results’ graph reminds me of one of those old “Paint By Numbers” kits, which a friend’s Mom was always completing and hanging all over the walls. None of her efforts could have been mistaken for art, nor her home, for a gallery.
Unlike with my friend’s Mom, maybe someone won’t be reluctant to hurt (the author) Alex Hall’s feelings and will tell him that he’s no modeling Modigliani.


Rob Dawg
January 5, 2015 4:34 pm

Challenge: I have ten people on a scale totaling 2000 lbs, margin of error ±5%.
Question: How much will each of them individually gain/lose over the holidays next year?
This is what downsampling claims to be able to do.

Reply to  Rob Dawg
January 5, 2015 4:39 pm

The dead one will lose all the weight ;>) Well, you did say a year. Hope this dies out soon :>(

Rob Dawg
Reply to  lorne50
January 5, 2015 5:05 pm

Are you suggesting I might unethically “hide the decline” or “spike the hike” by reducing the number of reporting elements?

Alx
Reply to  Rob Dawg
January 5, 2015 6:18 pm

Not a claim; this is what downsampling will do.
It will burn a bunch of computer cycles and spit out the gain/loss of each individual to a hundredth of a pound. Since there were never any individuals to begin with, they will have a 97% confidence level in the accuracy of their holiday eating projections.

Bernie Hutchins
Reply to  Rob Dawg
January 5, 2015 10:12 pm

Well, they said “downscaling” which if I have it figured correctly, is “Upsampling”.
But your “scale” example is correct. It’s “conservation of information” – numbers in this case. Engineers are very very clever, except when it comes to violating fundamental laws of math and physics!

Steve in SC
January 5, 2015 4:47 pm

Integrate around all space avoiding all singularities.
The GCMs are a singularity in and of themselves.
Not real stable in my humble opinion.

S. Geiger
January 5, 2015 4:47 pm

Here is my question. As I understand, the AOGCM ensemble is not expected to match reality because of the much discussed initialization issues (i.e., they would not be expected to replicate ocean cycles, etc.), and this is one reason why perhaps they are not tracking the ‘pause’ very well. If this is the case, how would down-scaling be policy relevant? Or is it (as I think Mosher suggests) just to find out: if region A ‘generally warms’, then how would it affect sub-region A-1(?), etc. Seems this would be the only viable way to use the down-scaled models (hypothesis testing based on a set of possible boundary conditions for the sub-regional domain). I would think a thorough review of ‘real world’ data would be better for this.

Bill Illis
January 5, 2015 4:51 pm

I always thought that the system is just too complex to model and that we should just use observations of what is really happening instead.
What’s wrong with just observing what is really happening? You can actually build models of what actually happens, which is more-or-less what the weather models do, and do so rather successfully for close to 10 days out now.
Put extra GHGs into a climate model and it is going to produce warming. Why? Is it some magical property that just emerges from the climate model as if ordained by the God of Physics and Weather? No. The climate modelers coded their model to produce more warming as more GHGs are introduced, simple as that. It is not an emergent property as Mosher and Gavin like to say/pretend. It is written into the code based on their “theory”.
Does CO2 produce weather? Nobody has observed that yet. Cold fronts and warm fronts and water vapour and pressure systems and winds and ground surfaces produce weather. CO2 has never been shown to have any effect on any weather that I am aware of.
Why not see what the real Earth(tm) actually does and one can make future predictions based on observed behaviour. Lake effect snows are actually easy. We have hundreds of years of actual results on which to base future expectations as GHGs rise. I’m sure the data says more of the same because that is what has happened as CO2 has risen.

Ernest Bush
Reply to  Bill Illis
January 5, 2015 5:59 pm

The answer to your questions is there is no money in it for the perpetrators of this fraud.

Reply to  Bill Illis
January 6, 2015 9:25 am

Bill, right now they can just barely model modern fighter plane design pretty well w/the most powerful supercomputers. To think climate modellers can accurately model global climate is hubris in the most extreme. Nothing wrong w/working toward that goal, but basing government policies on them now is absurd.

Barry
January 5, 2015 5:24 pm

“You don’t check it against actual observations, you don’t look to see whether it is realistic…”
Of course you do, but that is to be implied. Actually the article does address the need to downscale GCMs that recreate observed circulation patterns, while ignoring those that don’t because it will be GIGO. Also stated is that downscaling does provide additional information — the variability that might be expected at smaller spatial scales (and in many cases this variability is derived from variability in observations).
So, how would you propose to compare model projections to future observations? And how do you propose to make future projections based only on observed data?

Ernest Bush
Reply to  Barry
January 5, 2015 6:02 pm

And how do you propose to make future projections based only on observed data?
The Farmer’s Almanac does a pretty good job based on historic observations.

Reply to  Willis Eschenbach
January 5, 2015 8:50 pm

The basis for the models is not the real future world, but a future imaginary one. The future imaginary world is the politically established UNFCCC with its idea of CAGW.

Doug Proctor
January 5, 2015 5:26 pm

The assumption here is that the coarse scale GCM is accurate. This is the basis of virtually all climate “science”. Of course 97% of studies support CAGW: CAGW and the GCMs are their cornerstone.
But you can’t generate real detail in the analysis that isn’t in the data. You can generate the appearance of detail, however. This is part of the “computational” truth I rage about. The math is correct but is not representationally valid. But it is good enough to get something published, clearly.

Gary Hladik
January 5, 2015 5:59 pm

I’m not sure I understand what “downscaling” is supposed to do here. Is it like “enhancing” a low-res digital photo to bring out more detail? Except that the pixels of the low-res photo have been “randomized” to some extent before enhancement (analogous to the “regional bias” of GCMs) and generally “brightened” a bit (analogous to GCM warm bias).

Bernie Hutchins
Reply to  Gary Hladik
January 5, 2015 10:19 pm

Gary –
Perhaps see my response to Bubba Cow above.
The image processing seems to be a useful analogy.
I don’t think there is any randomization (like dithering?) or brightening – just interpolation.

Gary Hladik
Reply to  Bernie Hutchins
January 6, 2015 1:31 am

Thanks, Bernie. That helps.

artwest
Reply to  Gary Hladik
January 6, 2015 4:14 am

“Is it like “enhancing” a low-res digital photo to bring out more detail?”
I think they’ve been taking seriously those cop shows where they discover Seurat-like CCTV footage of the villain’s car from 300 yards away.
The license plate occupies 3 random blown-out pixels but through punching a keyboard a couple of times the computer geek makes the license number appear in pristine detail.

Bernie Hutchins
Reply to  artwest
January 6, 2015 11:12 am

Well yes and no. If you have MULTIPLE successive images you can get noise cancellation.
In the early days of TV, you had an antenna and a lot of empty channels (like 9 of 12 – believe it or not). At night, as you turned the knob, you might see a very weak image in the “snow”. A hobby called “TVDX” was to wait for the station logo or test pattern and take a photograph (film). When you got the photos processed, it was usually astounding – the exposure time averaging the frames and cancelling the noise. Magic.
Some time look up “dithering” and “stochastic resonance”.
Here I am bringing in additional information in the form of multiple images, but that’s what CCTV is. So – possible?

whiten
Reply to  Gary Hladik
January 6, 2015 9:10 am

Hello Gary.
I think there is one thing about “downscaling” GCMs in principle that is not highlighted here, something that maybe most are missing.
First let me explain my understanding of a GCM in principle, before coming to the downscaling.
A GCM basically is a model, and as with any model it is bound and possible to adjust it toward better and better functioning and performance.
In the case of a GCM that adjustment requires a long time, and it will be a bit arbitrary until the time needed for that adjustment has passed.
To have the adjustment performed, feedback to the GCM is also needed.
Up to now the feedback could only be the real climate data, via comparison with the accuracy of the GCM projections.
This is not very effective, and it takes a lot of time and arbitrary decisions before the GCM can be considered as properly adjusted.
Now the “downscaled” GCMs offer a better option for feedback to the GCMs.
A much faster and better feedback in this case, therefore a better and quicker way to adjust the GCMs.
A feedback means a correction for the garbage in, garbage out.
It is easier to adjust for the “garbage out” in this case, and through a feedback to adjust and correct the “garbage in”, hopefully getting to the point where the input to the “downscaled” GCMs is not considered “garbage in” anymore.
I know, it is easier said than done, but that is what I think is the best thing about the “downscaled” GCMs: a better and more efficient possible feedback to the GCMs for their correction and adjustment.
Not sure if this helps with your uncertainty!
cheers

george e. smith
Reply to  Gary Hladik
January 8, 2015 12:27 pm

It is taking one of Hansen’s 1200 km square pixels, for which a single Temperature is reported, and breaking it up into a 10 x 10 array or 100 x 100 array of smaller pixels, and then guessing a value for each of your new pixels.
Well I thought Hansen says they are still all the same value. You mean they might not be the same Temperature, everywhere in his 1200 km square pixel ??

John F. Hultquist
January 5, 2015 6:03 pm

Odd that lake-effect-snow is used for the test. Why not just run BUFKIT a few hundred thousand times and sort the results into bins based on some starting set of data?
http://www.erh.noaa.gov/btv/research/Tardy-ta2000-05.pdf

January 5, 2015 6:06 pm

I have an idea: use the downscaled model, but instead of downscaling a region, do the whole globe. Might be interesting to see if the model can outrun reality.

fred4d
Reply to  Paul Jackson
January 5, 2015 6:40 pm

The point of downscaling is that you cannot run a fine model on the whole earth, so you do a small section in fine detail. This type of modeling is often done on large structures to analyze small complex details at fine scale. Of course in mechanical or thermal models it is possible to make sure the global model works reasonably well before relying on it to set the boundary conditions for the small scale models.

george e. smith
Reply to  fred4d
January 9, 2015 1:05 am

Well that is done quite a lot in finite element analysis. Of course, in some of these structures, there are regions which have only slow changes i.e. they are band limited to some low frequency, while other regions have faster changes, and hence have a higher frequency band limit.
So long as the sampling process is proper for each of those regions, given its signal bandwidth, then the results are valid, and it is not necessary to sample the entire planet surface at the highest anywhere signal frequency.
A good example would be clear cloudless sky, changing to solid cloud cover, and passing through regions of scattered cloud and then regions of broken cloud, before finally “socking in”.
g
I once concocted a discrete resistor array equivalent to a uniform resistivity film. This allowed me to model a stack of layers of a semiconductor process, as a stack of resistor matrices. I could then connect matching nodes from layer to layer with discrete capacitors to model the interlayer capacitance.
My resistor matrix could be replicated at any scale, by simply subdividing each square into four (or more) smaller squares, each one being the same resistor network; or even different resistors if my semiconductor layer resistivity varied from one region to another.
Quite often, such stacks had square symmetry, which actually gave them an eight fold symmetry of 45 degree triangles, so the original square could be triple folded to reduce the number of Rs and Cs substantially.
I could always compare the final simulations, as I reduced the matrix cell size, until no significant change resulted from further subdivision.
That saved me a whole lot of complex three dimensional EM field integrations, which would have been tera-messy.
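A minimal sketch of that “reduce the cell size until no significant change results” check, for readers who want to see it in code. This is not the resistor/capacitor stack described above; it uses a generic finite-difference Laplace solve on a square sheet as a stand-in, and the grid sizes, probe point, and tolerance are illustrative only.
```python
import numpy as np

def solve_laplace(n, tol=1e-6, max_iter=30_000):
    """Solve Laplace's equation on the unit square by Jacobi iteration:
    the top edge is held at 1.0, the other three edges at 0.0.
    Returns the potential at the point (0.25, 0.25) of the domain."""
    u = np.zeros((n, n))
    u[0, :] = 1.0                      # fixed boundary value on one edge
    for _ in range(max_iter):
        u_new = u.copy()
        u_new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                    u[1:-1, :-2] + u[1:-1, 2:])
        if np.max(np.abs(u_new - u)) < tol:
            u = u_new
            break
        u = u_new
    return u[n // 4, n // 4]           # same physical point on every grid

# "Reduce the matrix cell size until no significant change results":
# keep doubling the resolution until successive answers agree.
previous = None
for n in (9, 17, 33, 65):
    value = solve_laplace(n)
    print(f"{n:>3} x {n:<3} grid -> value {value:.5f}")
    if previous is not None and abs(value - previous) < 1e-3:
        print("converged: further subdivision changes nothing significant")
        break
    previous = value
```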

January 5, 2015 6:18 pm

Bob Tisdale has shown that the potential exists for reasonably accurate regional weather forecasts through the process of gathering ENSO/La Niña/La Nada data. No need for downscaling from GCMs.

January 5, 2015 6:36 pm

“Pressure to use (downscale) techniques to produce policy-relevant information is enormous…”
Interesting, but not surprising. ‘Pressure’ from whom – management, specific governments, UN…others??

January 5, 2015 6:36 pm

I dunno’ but for some reason “Does it matter?” sounds suspiciously like “What difference does it make?”
That’s really all it is: All politics, all the time.

Terry
Reply to  Tom J
January 5, 2015 6:57 pm

I seem to remember that Pielke Snr had this sort of thing as his biggest problem with GCMs. He argued that regional and sub-regional effects were far more important than the coarse projections from GCMs. Attempting to interpolate by downscaling without the fine-grid effects Pielke insisted on seems to me to be an exercise in futility.

RomanM
January 5, 2015 6:40 pm

Lipstick on a pig… and a not-so-good looking one at that….

Alx
January 5, 2015 6:46 pm

Downscaling reminds me of TV crime dramas where they have low-resolution, grainy surveillance video, zoom in, and suddenly the grainy video becomes high-resolution and crystal clear, able to pick out the name tag of a suspect running in the dark. This is farcical in the crime dramas and just as farcical in computer models.
You cannot downscale; it is impossible, since the detail is just not there and so it needs to be made up. Which is OK with the modelers, since the purpose is to see the effects of climate change, kind of like playing those world-building games or war-strategy games to try out various theories. Funny that those games, like climate models, use dice or other random generators. To suggest, though, that a game is predictive for a particular region is certifiable schizophrenia; disturbing evidence of an inability to tell the difference between what is real and what is not. Hey, let’s throw the dice and see how much snow Michigan is going to get in 2017.
At this rate I imagine they will soon be upscaling the models to the entire universe in order to finally complete Einstein’s unfinished theory of everything.

Reply to  Alx
January 5, 2015 7:10 pm

You beat me to it. I was going to make exactly the same comparison, except I was thinking of them taking a license plate of 20 huge pixels and turning it into something readable. Anyone with any image processing experience knows this is impossible.
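A minimal sketch of why the “zoom and enhance” trick fails: once fine detail has been averaged into a few coarse pixels, upscaling can only repeat or smooth what is left. The 64 x 64 “plate”, the 8 x 8 blocks, and the nearest-neighbour upscaling are illustrative choices, not anything from the post.
```python
import numpy as np

# A "license plate": fine-scale detail as a 64 x 64 random pattern.
rng = np.random.default_rng(0)
fine = rng.random((64, 64))

# The surveillance camera: average each 8 x 8 block into one coarse pixel.
coarse = fine.reshape(8, 8, 8, 8).mean(axis=(1, 3))

# "Enhance": blow the coarse image back up to 64 x 64.
# Nearest-neighbour upscaling just repeats each coarse pixel.
upscaled = np.kron(coarse, np.ones((8, 8)))

# The detail is gone: the upscaled image differs from the original by
# roughly the full standard deviation of the fine-scale signal.
rms_error = np.sqrt(np.mean((upscaled - fine) ** 2))
print(f"RMS error of 'enhanced' image: {rms_error:.3f}")
print(f"Std dev of original detail:    {fine.std():.3f}")
```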

Curt
January 5, 2015 7:06 pm

Roger Pielke Sr. has always been scathing (in his polite and formal way) about the uselessness of downscaling in climate models as they stand now. One typical post:
http://pielkeclimatesci.wordpress.com/2011/06/15/the-failure-of-dynamic-downscaling-as-adding-value-to-multi-decadal-regional-climate-prediction/
There are many others that are easy to find.

DC Cowboy
Editor
January 5, 2015 7:18 pm

Why do they insist on calling model output ‘data’? It isn’t ‘data’.

David Jay
Reply to  DC Cowboy
January 5, 2015 7:33 pm

Because it sounds “sciencey”

mebbe
Reply to  DC Cowboy
January 5, 2015 10:13 pm

Well, the English word ‘data’ is the plural of the Latin word ‘datum’.
‘Datum’ is a noun derived (unchanged) from the supine (past participle) of the verb ‘dare’, which means to give.
Thus, a datum is something given and ‘data’ are some things given. There is nothing inherently Truth-filled in the word, it simply refers to what you feed your beast and that is often what some other beast fed you.
Could be a line of bull, could be god strewth.

mebbe
Reply to  mebbe
January 6, 2015 7:54 am

A definition from the Oxford Dictionary: “The quantities, characters, or symbols on which operations are performed by a computer, which may be stored and transmitted in the form of electrical signals and recorded on magnetic, optical, or mechanical recording media.”
You say “In science, “data” generally means observations.”
The adverb “generally” doesn’t look like a really solid, confident, sciency thing to me.

January 5, 2015 7:26 pm

Downscaling CAN be useful if there is a good model that describes the phenomenon in question. One example is the sunspot cycle, where knowledge of the maximum [smoothed] sunspot number in a given cycle [either measured or predicted] pretty much allows reconstruction of the details [e.g. each yearly value] of the cycle. Another example is the diurnal variation of the geomagnetic field which is usually so regular that knowing the sunspot number allows a fair reconstruction of the details of the variation in both time and space [location]. One can think of many other examples where a phenomenon [e.g. temperature] can be reconstructed fairly well from the location only [it is cold in the winter and warm in the summer], etc.

Bernie Hutchins
Reply to  lsvalgaard
January 5, 2015 10:34 pm

I agree. It can look very much like “multi-resolution” analysis, such as the “perfect reconstruction filters” in digital signal processing. But you do have to know a great deal about your system. Misalign the channels and you are in trouble.

Reply to  lsvalgaard
January 6, 2015 8:24 pm

Here is a [real-life] example of a case where downscaling works and is useful. First the Figure:
http://www.leif.org/research/Downscaling.png
Then the story:
The Figure shows the average diurnal variation of the geomagnetic Declination at Pawlovsk [near St. Petersburg], Russia, for the year 1860 [pink lower curve], constructed from real observations every hour.
Now in some years [say 1861], the Declination was only observed at 8 am, 2 pm, and 10 pm [8, 14, and 22 hours], but we need to know what it was at the much finer resolution of 1 hour. The observations are marked with blue circles and have corresponding observations at those same hours in 1860 marked with pink circles. Plotting the blue-circled values against the pink-circled values yields the regression equation Blue = 36.523 + 0.7323 Pink. The offset, 36.523, comes about because there is a secular change from year to year [caused by flows in the Earth’s core far below us]. The coefficient, 0.7323, is smaller than unity because the (E)UV flux from the Sun that controls the electrical currents at 105 km altitude depends on solar activity, and the Sunspot Number [SSN] in 1861 was smaller [77.2] than that in 1860 [95.9].
All this is well-understood physics [see e.g. http://www.leif.org/research/Reconstruction-Solar-EUV-Flux.pdf ], so we can be reasonably confident that the empirical regression equation [BTW we can also calculate it directly from the physics] also holds at all other times, and we can use it to downscale [i.e. go to much finer time resolution] the three observations at 8, 14, and 22 hours. It just happens that we actually do have hourly data for 1861, so we can compare the real data [green curve with diamond symbols] with the downscaled version. As you can see, there is good agreement. The reason for this is that we know the physics and how the system reacts. BTW, you can, perhaps, also see that we can use the amplitude of the diurnal variation to calibrate the sunspot number, or at least to check if we have got the right number.
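A minimal Python sketch of the procedure described here, with synthetic numbers standing in for the Pawlovsk declination curves (the real data live in the linked figure, not here); the template shape, the three observation hours, and the fitted coefficients are constructed for illustration and are not Leif’s 36.523 and 0.7323.
```python
import numpy as np

hours = np.arange(24)

# Stand-in for the 1860 hourly template curve: a smooth diurnal
# variation peaking in the early afternoon.
template_1860 = np.maximum(10.0 * np.sin(np.pi * (hours - 6) / 12), 0.0)

# Stand-in for 1861: same shape, scaled down (lower sunspot number)
# plus an offset (secular change), observed only at 8, 14 and 22 hours.
true_1861 = 36.5 + 0.73 * template_1860
obs_hours = np.array([8, 14, 22])
obs_1861 = true_1861[obs_hours]

# Regress the sparse 1861 observations on the 1860 template at the
# same hours: Blue = a + b * Pink in the notation above.
b, a = np.polyfit(template_1860[obs_hours], obs_1861, deg=1)
print(f"fit: 1861 = {a:.3f} + {b:.4f} * 1860")

# Downscale: apply the fit to the full hourly 1860 curve to estimate
# the hourly 1861 curve, then compare with the "true" hourly values.
downscaled_1861 = a + b * template_1860
print("max error:", np.max(np.abs(downscaled_1861 - true_1861)))
```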

Reply to  Willis Eschenbach
January 7, 2015 7:12 am

That is trivially true. However, there is a connection. My example relies on knowledge of the physics of the phenomenon. Now, the climate modelers may be presumed to believe that they also know the physics of their system, so the situation is comparable. It is only when we disagree with their assessment that the situation becomes different.

Reply to  lsvalgaard
January 7, 2015 12:02 pm

lsvalgaard commented on

It is only when we disagree with their assessment that the situation becomes different.

So true!
But I also want to make the point that they are doing the same downscaling to generate GAT series, as one of the steps is to normalize temps for altitude, lat, and lon, and then use this to infill and create a uniform normalized field for the entire planet, just like the example Leif posted.
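As a rough illustration of that normalize-then-infill step (and not the actual GAT procedure), here is a sketch using a standard-atmosphere lapse rate of 6.5 °C/km, made-up stations, and crude inverse-distance weighting; real analyses use far more careful spatial weighting.
```python
import numpy as np

LAPSE_RATE = 6.5e-3   # deg C per metre; standard-atmosphere assumption

# Toy stations: lat, lon, elevation (m), observed temperature (deg C).
stations = np.array([
    [45.0, -120.0,  200.0, 14.0],
    [45.5, -119.0, 1500.0,  6.5],
    [46.0, -121.0,  800.0, 10.8],
    [44.5, -120.5,   50.0, 15.2],
])
lat, lon, elev, temp = stations.T

# 1. Normalize: reduce each observation to sea level with the lapse rate.
temp_sl = temp + LAPSE_RATE * elev

# 2. Infill: inverse-distance weighting of the sea-level temps onto a
#    finer lat/lon grid (a crude stand-in for proper spatial analysis).
grid_lat, grid_lon = np.meshgrid(np.linspace(44.5, 46.0, 7),
                                 np.linspace(-121.0, -119.0, 9),
                                 indexing="ij")
d2 = (grid_lat[..., None] - lat) ** 2 + (grid_lon[..., None] - lon) ** 2
w = 1.0 / (d2 + 1e-6)                 # small constant avoids divide-by-zero
grid_temp_sl = (w * temp_sl).sum(axis=-1) / w.sum(axis=-1)

# 3. De-normalize: restore the altitude dependence with a fine-scale
#    elevation surface (made up here) to get the "downscaled" field.
grid_elev = 1000.0 * np.exp(-((grid_lat - 45.5) ** 2 + (grid_lon + 120.0) ** 2))
grid_temp = grid_temp_sl - LAPSE_RATE * grid_elev

print(np.round(grid_temp, 1))
```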

Reply to  Mi Cro
January 7, 2015 12:10 pm

And that normalization is both necessary and probably mostly correct when based on actual data.

Reply to  lsvalgaard
January 7, 2015 12:26 pm

lsvalgaard commented

And that normalization is both necessary and probably mostly correct when based on actual data.

I agree; it’s just that in the case of surface temps, I think I calculated that a 100 mile circle around each of the twenty-some-thousand stations in the GSoD data set covered only a couple percent of the planet’s surface. And weather isn’t linear spatially.

Reply to  Mi Cro
January 7, 2015 12:29 pm

I’m reasonably sure that the people doing the normalization are doing their very best to make it as good as possible. Scientists are generally not morons.

Reply to  lsvalgaard
January 7, 2015 12:41 pm

lsvalgaard commented on

I’m reasonably sure that the people doing the normalization are doing their very best to make it as good as possible. Scientists are generally not morons.

Again, in general I agree with this, but I will point out that to do so they must really understand what they are modeling (because at this point it really is a model), and whether their results are good enough, in this case, to base policy on. I don’t think we can make proclamations on 100 years of surface temps at the required detail, let alone 100 years of SSTs.
And from my looking at the data, a lot of what’s happened in the last 40 years that ends up as part of the temp record is not a global effect, but a regional one in minimum temp. Now, I accept that that doesn’t mean there isn’t something else in the background, and in fact you can see it in the derivative of temps changing over the years, but it also looks like that’s reversing direction too, so I can’t tell if it’s a sign of CO2, or ocean cycles/clouds/some longer-period cycle in whatever, or something else entirely.

Reply to  Mi Cro
January 7, 2015 12:46 pm

As with all science, observations [normalized or not] must always be examined critically and not just be believed.

Reply to  lsvalgaard
January 7, 2015 1:10 pm

lsvalgaard commented on

As with all science, observations [normalized or not] must always be examined critically and not just be believed.

Which is why, even if I don’t like it, I accept what you say as true to the best of your knowledge (such as the (lack of) effect of a moving barycenter on the Sun’s output).

george e. smith
Reply to  lsvalgaard
January 7, 2015 8:04 am

Well Leif, while I tend to agree with your assertion that your process got you essentially correct values, I don’t think this is comparable to what those guys did to their “weather / climate” maps.
You clearly have a band-limited signal, and your interpolation process did not introduce any higher-frequency components. There are a good number of locations in their “downscaled” map where they clearly have introduced much higher-frequency (spatial) values.
And they are not dealing with a system that is likely to replicate itself at times a year apart, as yours apparently is. I believe you when you say you have a physical model of your system.
As I said elsewhere, even if you have no more than one sample each half cycle of the highest frequency in a band-limited signal, rigorous mathematics says that you can EXACTLY recover the COMPLETE continuous function from just those samples.
Now the actual implementation of reconstruction from sampled data may be quite difficult to achieve in practice. You have to replace the instantaneous point samples with a specific impulse of a prescribed shape. The fact that this is done routinely to time- and/or frequency-multiplex dozens or even thousands of signals, and transmit them together with perfect unscrambling at the other end, is evidence that the recovery can be done well enough to enable all our message chattering to get to the right places.
Your system looks like it has similar characteristics to the eyeball’s dither-scanning phenomenon, in that you know something about how one observation morphs over time into a similar but slightly different picture.
The reason our optical rodents can resolve fine detail is that we do get continuous analog values for each pixel on our coarse grid while it is moving smoothly over the more highly detailed terrain.
G
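A minimal sketch of the sampling-theorem point in the comment above: a band-limited signal sampled at the Nyquist rate or better can be recovered from its samples by Whittaker–Shannon (sinc) interpolation. The frequencies, sample rate, and window length are illustrative, and with a finite sample window the recovery is only approximately exact.
```python
import numpy as np

fs = 10.0                      # sampling rate, samples per second
f_max = 3.0                    # highest frequency in the signal (< fs / 2)
n = np.arange(-50, 51)         # sample indices (finite window)
t_samples = n / fs

# A band-limited test signal: sum of two tones below the Nyquist frequency.
def signal(t):
    return np.sin(2 * np.pi * 1.3 * t) + 0.5 * np.cos(2 * np.pi * f_max * t)

samples = signal(t_samples)

# Whittaker-Shannon reconstruction: x(t) = sum_n x[n] * sinc(fs * t - n).
# With infinitely many samples this is exact; truncating the sum leaves
# a small residual, so evaluate near the centre of the window.
def reconstruct(t):
    return np.sum(samples * np.sinc(fs * t - n))

t_test = np.linspace(-2.0, 2.0, 41)
errors = [abs(reconstruct(t) - signal(t)) for t in t_test]
print(f"max reconstruction error: {max(errors):.4f}")
```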

Reply to  george e. smith
January 7, 2015 9:04 am

My point is that there probably are reproducible spatial structures. The temperature [or any other weather/climate variable] is often determined by local conditions, such as UHI effects, land-use effects, coast effects [e.g. Buffalo NY], and others, that do not change much from year to year [or have trends that can be modeled too]. Those effects can be injected into the downscaling procedure and do help to get to finer resolution.

Reply to  Willis Eschenbach
January 7, 2015 10:10 am

So it really simply comes down to whether one believes one knows what one is doing. And there it must rest. Questions about beliefs cannot be resolved.

Reply to  lsvalgaard
January 7, 2015 1:14 pm

lsvalgaard
January 7, 2015 at 10:10 am

Questions about beliefs cannot be resolved.

Don’t they do that all the time in Las Vegas? 🙂

Reply to  Mi Cro
January 7, 2015 1:42 pm

Most leave that place rather fleeced…

Bob Weber
January 5, 2015 7:51 pm

Good points. Speaking of sunspot reconstructions… how is your revised monthly GSN coming along?

Reply to  Bob Weber
January 5, 2015 7:59 pm

Basically unchanged. We shall have a meeting in [of all places] Sunspot NM [ http://en.wikipedia.org/wiki/Sunspot,_New_Mexico ] during the last week of January to iron out minor details. The new numbers will be presented at a press conference in Brussels, Belgium, later in the spring/early summer and submitted to the IAU [ http://www.iau.org/ ] in early August for possible adoption as an international standard.

Bob Weber
Reply to  lsvalgaard
January 5, 2015 8:37 pm

Thanks for the update. I’ve used your yearly data, and also used SIDC monthly data, and am looking forward to using the monthly rGSNs. If the rGSN becomes an international standard, will it replace the SIDC or be separate? May you enjoy a spot in the sun in Sunspot counting sunspots!

Reply to  Bob Weber
January 5, 2015 9:11 pm

The GSN will become obsolete and will not be published as a separate series, but will be incorporated with the regular SSN. There will thus be only ONE SSN series [and it will be called the Wolf number]. We will maintain a separate Group Number [GN] as a means to keep track of the number of groups, which is a proxy for somewhat different physics, as the ratio SSN/GN is not constant as was earlier surmised. We will discourage using the GN as a proxy for solar activity [as it is not].

January 5, 2015 7:54 pm

“whether it improves understanding of climate change in the region where it is applied.”?
This must have come out of the “Humor” section of the paper; it’s just a joke.
Oh, wait, there’s no “Humor” section in this paper.
Thanks, Willis. I had to laugh, then cry.

alpha2actual
January 5, 2015 8:35 pm

Perfect example of why atmospheric supercomputer models fail, spectacularly. If you go to NASA ( fedscoop.com/nasa-supercomputer-carbon-dioxide ), you will find a supercomputer model of atmospheric CO2 global dynamics circa 2006.
When I first watched the 2006 gif I was struck by the absence of CO2 density in the Southern Hemisphere. Move forward to the actual measurements from NASA’s Orbiting Carbon Observatory-2 mission, launched in July of this year [image], and see what is actually happening. Supercomputer model selection bias in action.

January 5, 2015 8:44 pm

As insinuated in an earlier comment, current models are like really pathetic cameras, the kind you might have made in science class as a kid with a box and a pin hole with your finger as the shutter. You get this really lousy picture where gross shapes can be discerned but little else.
You can take that image and put it in a modern photo editor and pixelate the dickens out of it, but all you are doing is subdividing lousy larger pixels…

george e. smith
Reply to  gymnosperm
January 6, 2015 4:46 pm

Well, interpolation between large pixels can produce useful results.
For example, if you are currently holding an optical mouse in your hand (LED or LASER), the chances are that the digital camera in your mouse only has between 15 x 15 and 22 x 22 pixels. Well, if it is one of those top-secret fast gaming mice, it could have as many as 32 x 32 pixels. We are talking maybe 50 micron or 60 micron square pixels; veritable cow pastures.
Now it is of course taking at least 1500 frames per second, and maybe as high as 10,000 frames per second for that killer gaming mouse.
The camera lens that goes with that camera started out able to resolve maybe 100 line pairs per mm, or about a 5 micron spot size. Well, the lens includes a built-in optical low-pass filter that kills that resolution down to maybe a 100 micron spot size, but very uniform over the entire one by one mm field of view of the mouse (it’s a 1:1 relay close-up lens).
Because the lens point spread function is accurately devised (it’s like a laser beam waist), the pixel signals are able to track the large spot with a Gaussian profile as it moves over the big pixels (which are smaller than the spot).
As a result of that, and the fact that absolute analog intensity values are stored for each pixel, the mouse is able to resolve motions much smaller than the pixel, so that 300-400 dot-per-inch mouse motion resolution is maintained. Remember it is scanning.
All of that optical wizardry and signal-processing magic was done to save you from the monthly clean-out of the hair and lint inside your $2 ball mouse. Yes, it is all patented.
But nothing that is happening is creating out-of-band (Nyquist) information.
Now without the (patented) optical low-pass filter, the whole thing would descend to garbage and back-tracking cursors, because of the original prototype’s high resolving power of the camera lens (it’s only about a 1.5 mm focal length lens, aspheric on both surfaces, and wildly aspheric in the LP-filter region). The first one was actually a form of Tchebychev filter; the latest ones are segmented cubic-profile filters.
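A minimal sketch of the sub-pixel idea described here: blur the spot so it spans several coarse pixels, keep analog intensities, and even a simple intensity-weighted centroid recovers motion much smaller than a pixel. The centroid estimator stands in for whatever the actual (patented) mouse algorithm does, and the spot size and shift are made up.
```python
import numpy as np

def spot_image(cx, cy, size=16, sigma=2.5):
    """Render a Gaussian spot (centre cx, cy in pixel units) onto a
    coarse size x size pixel grid, keeping analog intensity values."""
    y, x = np.mgrid[0:size, 0:size]
    return np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))

def centroid(img):
    """Intensity-weighted centroid: resolves position to a small
    fraction of a pixel, provided the spot spans several pixels."""
    y, x = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    total = img.sum()
    return (x * img).sum() / total, (y * img).sum() / total

# Move the spot by 0.23 pixel between two "frames" and see how well
# the coarse grid recovers that sub-pixel displacement.
frame1 = spot_image(7.40, 8.10)
frame2 = spot_image(7.63, 8.10)   # shifted +0.23 pixel in x

x1, _ = centroid(frame1)
x2, _ = centroid(frame2)
print(f"true shift: 0.230 px, estimated shift: {x2 - x1:.3f} px")
```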

Bob Weber
January 5, 2015 8:45 pm

“GCM projections of future climate change, with typical resolutions of about 100 km, are now routinely downscaled to resolutions as high as hundreds of meters.”
Who gives these folks the idea that their GCMs are working out now? After making faulty GCMs that have run hot for years, do they really think reducing the run area by 1000X is going to be an improvement?
“Pressure to use these techniques to produce policy-relevant information is enormous. To prevent bad decisions, the climate science community must identify downscaling’s strengths and limitations and develop best practices.”
First they must identify the limitations and bad decisions that went into current GCMs.
“A starting point for this discussion is to acknowledge that downscaled climate signals arising from warming are more credible than those arising from circulation changes.”
A starting point for this discussion is to acknowledge that the GCMs are not at all credible.

January 5, 2015 9:09 pm

It seems to me that to upscale would make more sense. First try to make an extremely accurate model of local weather over a very short period of time. Say something like this: it is now 65 degrees and 74% humidity on my porch; I predict, based on my model, that one minute from now it will be 65 degrees and 74% humidity on my porch. If over time your model shows skill, then expand it in space and time; if it still shows skill, expand it further. Eventually you might work it up to a global model of the climate in 100 years, but before it gets there it would have to show the ability to reasonably predict regional weather over at least a month. Working from future global climate to future local weather seems like working backwards to me.

jorgekafkazar
January 5, 2015 9:44 pm

It looks like the GCMs perform so poorly overall that Warmists want to look at a finer scale, where there’s a greater chance that at least a few areas will show better correspondence between the models and the data. Then they’ll be able to say something like: “500 regions on the Earth show substantial warming.” Or “97% of Earth’s climate matches the models.” Or “Earth, him get plenty-plenty warmy all ovah!” Or “Aguanga residents to die soon in robust 160°F heat.”

Trevor
January 5, 2015 10:19 pm

This task seems to me like endeavouring to solve a game of Sudoku. It’s as if a GCM gives an output that is analogous to a whole Sudoku game. This downscaling enterprise then attempts to work out the value of every square in that game. The only trouble is that at the individual-square level there are millions of possible arrangements of numbers, yet every row or column must always have the same average, as must the whole game itself.
It also reminds me of the idea of trying to increase the resolution of a graphic image. It’s kind of hard to resolve a single pixel into 100 pixels that are all different based on the colour of surrounding pixels. More like inventing data.

mebbe
January 5, 2015 10:52 pm

It is my (fuzzy) understanding that regional meteo models are spun up or initialized by a partial run of a GCM, and that sounds like downscaling. The GCM supposedly runs on the universal laws of physics that are then resolved to the Earth, and that’s a downscaling of sorts.
I’ve also been told that the mean of a whole suite of GCMs is better than any individual GCM, and that most cells are fed with virtual values that are extrapolated from meagre observations. That sounds like upscaling. Sounds like fun, all the same.

knr
Reply to  mebbe
January 6, 2015 3:43 am

Their inability to resolve the models to the actual reality of what is happening on the Earth is the very reason the models fail in the first place. Too often we hear this ‘laws of physics’ claim when in fact it only works if those laws are applied in a theoretical sense, or in a bell jar with no other variables. It’s the old spherical-chicken-in-a-vacuum argument. It’s not the laws that are the issue but the manner in which they are applied.

rtj1211
January 5, 2015 11:06 pm

‘We know that GCMs have lost credibility so we need a new subject for grant money. As we’ve only worked on GCMs the past 20 years, we need a narrative that allows us to use those computer models in our future grant projects. This is the best we can come up with….’

January 6, 2015 12:35 am

Are we talking about natural climate change or that mythical man-made climate change everyone is talking about?

Admad
January 6, 2015 1:03 am

I’m just wondering at what point the models (which in many cases are reanalysis of other models based on models all the way down) will disappear up their own fundamentals?…

Stephen Ricahrds
January 6, 2015 1:49 am

Somewhere, Charles Lamb is weeping
Hubert as well !

ren
January 6, 2015 2:18 am

I would recommend observing the pace of freezing of Lake Superior.
http://www.glerl.noaa.gov/res/glcfs/anim.php?lake=s&param=icecon&type=n

Harrowsceptic
January 6, 2015 2:34 am

Moderated??

Reply to  Harrowsceptic
January 6, 2015 7:02 am

Yes, Mods.
Is this acceptable?
Foul language, off topic and rude.
(I think he’s using a pseudonym as well).

cd
January 6, 2015 4:39 am

…there are no studies showing that downscaling actually works
In the oil industry down-scaling is an important part of both inversion and forward modelling of inverted models – there is extensive literature on this type of approach: see a search in Google Scholar for “downscaling petroleum models” or “downscaling petroleum reservoir models”. These normally involve statistical models, multifractal modelling, or prior assumptions. The ultimate use of such modelling is not to find a 1:1 relationship between reality and the model, but rather to model uncertainty within cells or between cells, or even to derive possible well-log responses in fictitious production wells. This then provides the focus for what-if scenarios and risk analysis.
Dynamic down-scaling is a slightly different approach and is tested against observations. Again, this is used extensively in the oil industry and has an extensive body of literature. It is used successfully as part of the history-matching process and economic modelling. I’d guess the process is the same as the one proposed in the paper being discussed. One takes a coarse-grained grid and refines the resolution for part of the cellular model (producing a new higher-resolution model), and then tests it against observations; if it improves the performance of the model (locally) then it is selected, and this is then further refined into a new cellular model. This works really well where there are global issues that can be modelled at large scales but local issues that are not independent from the global models. This avoids the need for prohibitive processing power while satisfying the need to marry global influence and local details.
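A toy, one-dimensional sketch of that accept-a-refinement-only-if-it-improves-the-fit loop, with a piecewise-constant “model” fitted to synthetic observations. It illustrates the selection logic only; it is not reservoir-simulation code, and the field, noise level, and thresholds are made up.
```python
import numpy as np

rng = np.random.default_rng(1)

# "Observations": 200 points on [0, 1] from a field that is smooth at
# large scale but has sharp local structure near x = 0.3.
x_obs = np.linspace(0.0, 1.0, 200)
truth = np.sin(2 * np.pi * x_obs) + 2.0 * np.exp(-((x_obs - 0.3) / 0.02) ** 2)
obs = truth + 0.05 * rng.standard_normal(x_obs.size)

def misfit(edges):
    """Piecewise-constant 'model' on the given cell edges: each cell
    predicts the mean of the observations inside it.  Returns the RMS
    misfit of the model against the observations."""
    cell = np.clip(np.searchsorted(edges, x_obs, side="right") - 1,
                   0, len(edges) - 2)
    pred = np.empty_like(obs)
    for i in range(len(edges) - 1):
        pred[cell == i] = obs[cell == i].mean()
    return np.sqrt(np.mean((pred - obs) ** 2))

# Start with a coarse grid; at each step try splitting every cell at its
# midpoint and keep the one split that most improves the fit -- i.e.
# refine locally only where the observations "select" it.
edges = np.linspace(0.0, 1.0, 5)            # 4 coarse cells
for step in range(5):
    base = misfit(edges)
    candidates = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mid = 0.5 * (lo + hi)
        candidates.append((misfit(np.sort(np.append(edges, mid))), mid))
    best, where = min(candidates)
    if best < base:                         # accept only if it improves
        edges = np.sort(np.append(edges, where))
        print(f"step {step}: refined at x = {where:.3f}, misfit {best:.3f}")
    else:
        print("no further refinement improves the fit")
        break
```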

cd
Reply to  Willis Eschenbach
January 6, 2015 4:05 pm

Willis
First of all I only included the part of your article I thought relevant to down-scaling as a valid modelling method – I was trying to be helpful; perhaps further reading might help support your article.
On the surface of it they may seem like disparate fields, but they employ many of the same types of discretisation routines (one may deal in polar or stereographic coordinates while the other is Cartesian), and in the end they use similar types of cellular gridding. Furthermore, they are often trying to emulate the same thing; hence the elaboration. Ironically, a lot of the people who worked in teams designing and writing code for these models have worked in both fields.
At no point did I say I was trying to prove that down-scaling was ever used successfully for climate modelling. But perhaps it might yield fruit, based on the success it has found in similar fields.

cd
Reply to  Willis Eschenbach
January 7, 2015 2:14 am

Willis
Thanks for your polite reply.
In the oil industry, they don’t use iterative models
I’m not sure what an iterative model is to be honest. As I’m sure you appreciate we commonly use iterative methods during modelling for all sorts of things. We also use iterative methods as the principal step in model construction, so for example seismic inversion will often employ simulated annealing. But whether this is akin to the ‘iterative modelling’ of GCMs I cannot say. Perhaps you could elucidate.
they don’t do well out of sample, it may be better than chance
Yeah, I doubt anyone could argue that they do anything better than a poor job. Although I think it may have been in one of your more recent posts, where you compared observed vs collated models, that the observed data just about tracked where I would’ve ‘drawn’ the lower confidence interval (at 95% confidence). But then that would be clutching at straws.
As a result of all of that, I fear that it’s totally immaterial whether downscaling is used in other fields …
That’s unfortunate. I think there is much to be gained from reading those who have been there and done it – albeit in a different domain or ‘paradigm’. Personally I can see why they’re doing it, given the limits of computing power. However, if they’re using it – and I’m not putting words into your mouth, just reading between the lines as I see it – as a way to deflect attention from the GCM failures and to spin that there’s life in the ‘old dogs’ yet, then that is shameful and may reflect a crisis in the modelling community.

January 6, 2015 5:11 am

I think downscaling will be useful if the geography has a large influence on the climate. Similar problems should exist for high-resolution weather forecasts.

cd
Reply to  Paul Berberich
January 6, 2015 7:36 am

Paul
I think Dyson said that the models, when they limit their scope, do a good job of explaining meteorological phenomena. I guess this is a move in the right direction, accepting that perhaps global modelling is just far too ambitious. Best to use the global models to capture gross trends and then augment these locally at a regional scale.
So my impression is that your point is right:
…geography has a large influence on the climate…
Perhaps the approach is now that global modelling is a sum of its regional parts, but with lots of synergy between the two scales.

cd
Reply to  Paul Berberich
January 6, 2015 7:39 am

Paul, by “limit their scope” I mean limited geographically.

January 6, 2015 6:30 am

In the oil industry down-scaling is an important

Here is the context of the original comment.

My question for you, dear readers, is just what is the appropriate test of the relevance of any given downscaling of a climate model?
Bear in mind that as far as I know, there are no studies showing that downscaling actually works.

The climate model part is kind of relevant if one is going to quote…

Reply to  Mark Cates
January 6, 2015 7:10 am

cd said “Dynamic down-scaling is approach is slightly different and is tested against observations.” and that made it relevant, at least for me, because they make no effort in the climate modelling to match output with ground truth.
So I agree with Steven Mosher and Willis that “appropriate test(s) of relevance would be real world data: maybe snow, maybe heat . . .

cd
Reply to  Bubba Cow
January 6, 2015 7:21 am

Bubba
I think they do. I don’t know in this case but they do tune their models. I suspect they do something similar here.

cd
Reply to  Mark Cates
January 6, 2015 7:31 am

Climate models and reservoir models share a lot in common. They both ‘discretise’ space into voluminous cells, usually defined by a corner point grid (CPG). The finite flow simulators have all the same issues that climate models have, particularly when dealing with the transfer of matter and energy through the system (turbulent vs laminar flow, etc.). The same dynamic down-scaling techniques are likely to be employed, because the down-scaling issues are exactly the same: higher-resolution CPGs.

Craig Loehle
January 6, 2015 7:26 am

The problem is they want to apply an academic criterion (improves our understanding) when agencies are using downscaled results for planning for flood control, for water supply, for endangered species risk, for crop insurance risk….

cd
Reply to  Craig Loehle
January 6, 2015 8:06 am

Craig
The point I was making was that while down-scaling may or may not work for climate modelling, it is a valid modelling method. It is employed routinely, and successfully, in other areas that deal with similar issues in a similar manner – cellular models. There is a large body of literature that might be of use as a starting point.

Kevin Kilty
January 6, 2015 9:17 am

Influences from the weather or climate in adjacent regions flow through the boundaries of a region of interest. In order to accurately down-scale one would need to couple these models or credibly specify these boundary influences. There is an element of circular reasoning here.
In the 1960s and 1970s there arose in the discipline of image processing the idea of super-resolution. The whole effort foundered because it was predicated on the idea of being able to use detailed information in an image that in fact the optical system had removed. This effort seems similar.

Kev-in-Uk
January 6, 2015 10:29 am

After reading this, I wondered if anyone has tried to downscale real (or even modeled?) temperature data for near-urban to urban regions to ‘model’ the UHE that we know exists? Thereafter, perhaps kind of reversing the process, we could have an idea of the possible ‘real’ effect of UHE on near-urban and urban stations? At least to my mind, that might be useful………

logos_wrench
January 6, 2015 1:38 pm

Climate Science has become a byword and a laughing stock. What a joke.

Steve Garcia
January 6, 2015 5:26 pm

Modeling is a good thing for such things as building and bridge design – where ALL of the formulae are proven against hard reality.
Modeling something on the leading edge of a science where all or almost all of the formulae are untested against the real world is just really, really stupid. Oh, it wouldn’t be so bad if they actually ACKNOWLEDGED that the models are only rough guidelines. Actually, NO, that isn’t true at all. If the underlying connection to reality isn’t there, sorry, but then the models don’t mean and CANNOT mean anything. They are talking about real world climate, and if they never test the models against that real world, it is all Looney Tunes cartoons – where a coyote can spin his back feet for several seconds in one spot 10 feet off a cliff before finally falling, or the coyote can have an anvil fall on his head without dying. Sure, people can illustrate it, but does it have any connection to reality?
Th-th-th-th-that’s All, Folks!

u.k.(us)
Reply to  Steve Garcia
January 6, 2015 11:58 pm

“Modeling is a good thing for such things as building and bridge design – where ALL of the formulae are proven against hard reality.
===========
The “big dig” came up against some hard reality; luckily it is only taxpayer money.
Otherwise someone might have to pay for it.
I’m sure everyone is doing their best……
Wonder what I could do with 14 billion dollars ?
Wiki:
“The Big Dig was the most expensive highway project in the U.S. and was plagued by escalating costs, scheduling overruns, leaks, design flaws, charges of poor execution and use of substandard materials, criminal arrests,[2][3] and one death.[4] The project was originally scheduled to be completed in 1998[5] at an estimated cost of $2.8 billion (in 1982 dollars, US$6.0 billion adjusted for inflation as of 2006).[6] However, the project was completed only in December 2007, at a cost of over $14.6 billion ($8.08 billion in 1982 dollars, meaning a cost overrun of about 190%)[6] as of 2006.[7] The Boston Globe estimated that the project will ultimately cost $22 billion, including interest, and that it will not be paid off until 2038.[8] As a result of the death, leaks, and other design flaws, the consortium that oversaw the project agreed to pay $407 million in restitution, and several smaller companies agreed to pay a combined sum of approximately $51 million.[9]”
—-
No one talks about this abuse of taxpayers’ money?

Arno Arrak
January 9, 2015 2:24 pm

The purpose of models was to predict what global temperature we should expect ahead. At least that is why Hansen introduced them in 1988. His attempts at predicting the future temperature were atrocious, but getting supercomputers was supposed to fix that. Twenty-seven years later they still cannot do it, despite using million-line codes. And now we find that they have their own computer hobby interests that do not tell us anything about temperature. The project is just a waste of taxpayers’ money and should be closed down.

cd
Reply to  Arno Arrak
January 14, 2015 3:35 am

Arno
They’ve not been a complete waste of money. There will be a trickle-down effect. So while, as you say, they’ve been very poor at predicting climate change, the advances made in computational and numerical methods have been more significant, such as innovative ways of using parallel processing even for sequential processes. This will lead to better systems in a whole host of other areas.

Reply to  cd
January 14, 2015 5:47 am

cd commented on

They’ve not been a complete waste of money. There will be a trickle-down effect. So while, as you say, they’ve been very poor at predicting climate change, the advances made in computational and numerical methods have been more significant, such as innovative ways of using parallel processing even for sequential processes. This will lead to better systems in a whole host of other areas.

This presumes that climate research has been the driving factor in supercomputer development; I would be surprised if this were true. Now I’m sure the extra business was appreciated, but a climate model isn’t that complicated from an execution-environment point of view. It is amenable to being run on a massively parallel server, but those are the “Plain Jane” supercomputers. I spent a while at Cray about a decade ago; Red Storm just had a lot of AMD64 processors, and the really fancy stuff wasn’t aimed at massively parallel systems (at least from what I saw).