The Best Test of Downscaling

Guest Post by Willis Eschenbach

In a recent issue of Science magazine there was a “Perspective” article entitled “Projecting regional change” (paywalled here). This is the opening:

Techniques to downscale global climate model (GCM) output and produce high-resolution climate change projections have emerged over the past two decades. GCM projections of future climate change, with typical resolutions of about 100 km, are now routinely downscaled to resolutions as high as hundreds of meters. Pressure to use these techniques to produce policy-relevant information is enormous. To prevent bad decisions, the climate science community must identify downscaling’s strengths and limitations and develop best practices. A starting point for this discussion is to acknowledge that downscaled climate signals arising from warming are more credible than those arising from circulation changes.

The concept behind downscaling is to take a coarsely resolved climate field and determine what the finer-scale structures in that field ought to be. In dynamical downscaling, GCM data are fed directly to regional models. Apart from their finer grids and regional domain, these models are similar to GCMs in that they solve Earth system equations directly with numerical techniques. Downscaling techniques also include statistical downscaling, in which empirical relationships are established between the GCM grid scale and finer scales of interest using some training data set. The relationships are then used to derive finer-scale fields from the GCM data.
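
To make the statistical flavour concrete, here is a minimal sketch of the idea (a toy illustration with made-up numbers, not the method from the article or from any actual GCM): an empirical linear relationship between one coarse grid-cell value and the finer-scale points inside it is fitted on a training period, then applied to new coarse output.

```python
# Minimal sketch of statistical downscaling (toy illustration only, not the
# article's method): fit one linear relationship per fine-scale point between
# the coarse grid-cell value and the fine-scale values over a training period,
# then apply it to new coarse output.
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: 200 time steps of one coarse cell (say, the ~100 km mean
# temperature) and the 16 fine-scale points that sit inside it.
coarse_train = 15 + 5 * rng.standard_normal(200)                      # (200,)
local_offsets = rng.uniform(-2, 2, size=16)                           # fixed local biases
fine_train = (coarse_train[:, None] + local_offsets
              + 0.5 * rng.standard_normal((200, 16)))                 # (200, 16)

# Fit slope and intercept for each fine-scale point by least squares.
A = np.column_stack([coarse_train, np.ones_like(coarse_train)])       # (200, 2)
coef, *_ = np.linalg.lstsq(A, fine_train, rcond=None)                 # (2, 16)

# "Downscale" new coarse output using the trained relationship.
coarse_future = np.array([16.0, 18.5, 14.2])
A_future = np.column_stack([coarse_future, np.ones_like(coarse_future)])
fine_future = A_future @ coef                                         # (3, 16)
print(fine_future.shape)
```

The dynamical variant described above instead feeds the coarse fields into a regional model’s own equations, but the “train on one period, apply to another” step is what the statistical approach leans on.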

So generally, “downscaling” is the process of using the output of a global-scale computer climate model as the input to another, regional-scale computer model … I can’t say that’s a good start, but that’s how they do it. Here’s the graph that accompanies the article:

[Figure: downscaling graphic from the Science article]

In that article, the author talks about various issues that affect downscaling, and then starts out a new paragraph as follows (emphasis mine):

DOES IT MATTER? The appropriate test of downscaling’s relevance is not whether it alters paradigms of global climate science, but whether …

Whether what? My question for you, dear readers, is just what is the appropriate test of the relevance of any given downscaling of a climate model?

Bear in mind that as far as I know, there are no studies showing that downscaling actually works. And the author of the article acknowledges this, saying:

GARBAGE IN, GARBAGE OUT. Climate scientists doubt the quality of downscaled data because they are all too familiar with GCM biases, especially at regional scales. These biases may be substantial enough to nullify the credibility of downscaled data. For example, biases in certain features of atmospheric circulation are common in GCMs (4) and can be especially glaring at the regional scale.

So … what’s your guess as to what the author thinks is “the appropriate test” of downscaling?

Being a practical man and an aficionado of observational data, me, I’d say that on my planet the appropriate test of downscaling is to compare it to the actual observations, d’oh. I mean, how else would one test a model other than by comparing it to reality?

But noooo … by the time we get to regional downscaling, we’re not on this Earth anymore. Instead, we’re deep into the bowels of ModelEarth. The study in the graphic above is of the ModelLakeEffectSnow around ModelLakeErie.

And as a result, here’s the actual quote from the article, the method that the author thinks is the proper test of the regional downscaling (emphasis mine):

The appropriate test of downscaling’s relevance is not whether it alters paradigms of global climate science, but whether it improves understanding of climate change in the region where it is applied.

You don’t check it against actual observations, you don’t look to see whether it is realistic … instead, you squint at it from across the room and you make a declaration as to whether it “improves understanding of climate change”???

Somewhere, Charles Lamb is weeping …

w.

PS—As is my custom, I ask that if you disagree with someone, QUOTE THE EXACT WORDS YOU DISAGREE WITH. I’m serious about this. Having threaded replies is not enough. Often people (including myself) post on the wrong thread. In other cases the thread has half a dozen comments and we don’t know which one is the subject. So please quote just what it is that you object to, so everyone can understand your objection.


188 Comments
jorgekafkazar
January 5, 2015 9:44 pm

It looks like the GCMs perform so poorly overall that Warmists want to look at a finer scale, where there’s a greater chance that at least a few areas will show better correspondence between the models and the data. Then they’ll be able to say something like: “500 regions on the Earth show substantial warming.” Or “97% of Earth’s climate matches the models.” Or “Earth, him get plenty-plenty warmy all ovah!” Or “Aguanga residents to die soon in robust 160°F heat.”

Trevor
January 5, 2015 10:19 pm

This task seems to me like trying to solve a game of Sudoku. It’s as if a GCM gives an output that is analogous to a whole Sudoku game. This downscaling enterprise then attempts to work out the value of every square in that game. The only trouble is that at the individual-square level there are millions of possible arrangements of numbers, yet every row and column must always have the same average, as must the whole game itself.
It also reminds me of the idea of trying to increase the resolution of a graphic image. It’s kind of hard to resolve a single pixel into 100 pixels that are all different based on the colour of surrounding pixels. More like inventing data.

mebbe
January 5, 2015 10:52 pm

It is my (fuzzy) understanding that regional meteo models are spun up or initialized by a partial run of a GCM, and that sounds like downscaling. The GCM supposedly runs on the universal laws of physics that are then resolved to the Earth, and that’s a downscaling of sorts.
I’ve also been told that the mean of a whole suite of GCMs is better than any individual GCM, and that most cells are fed with virtual values that are extrapolated from meagre observations. That sounds like upscaling. Sounds like fun, all the same.

knr
Reply to  mebbe
January 6, 2015 3:43 am

Their inability to resolve the models to the actual reality of what is happening on the Earth is the very reason for the models to fail in the first place. Too often we hear this ‘laws of physics’ claim, when in fact it only works if these laws are applied in a theoretical sense, or in a bell jar with no other variables. It’s the old spherical-chicken-in-a-vacuum argument. It’s not the laws that are the issue but the manner in which they are applied.

rtj1211
January 5, 2015 11:06 pm

‘We know that GCMs have lost credibility so we need a new subject for grant money. As we’ve only worked on GCMs the past 20 years, we need a narrative that allows us to use those computer models in our future grant projects. This is the best we can come up with….’

January 6, 2015 12:35 am

Are we talking about natural climate change or that mythical man-made climate change everyone is talking about?

Admad
January 6, 2015 1:03 am

I’m just wondering at what point the models (which in many cases are reanalyses of other models, based on models all the way down) will disappear up their own fundamentals?…

Stephen Ricahrds
January 6, 2015 1:49 am

Somewhere, Charles Lamb is weeping
Hubert as well!

ren
January 6, 2015 2:18 am

I would recommend observing the pace of the freezing of Lake Superior.
http://www.glerl.noaa.gov/res/glcfs/anim.php?lake=s&param=icecon&type=n

Harrowsceptic
January 6, 2015 2:34 am

Moderated??

Reply to  Harrowsceptic
January 6, 2015 7:02 am

Yes, Mods.
Is this acceptable?
Foul language, off topic and rude.
(I think he’s using a pseudonym as well).

cd
January 6, 2015 4:39 am

…there are no studies showing that downscaling actually works
In the oil industry, down-scaling is an important part of both inversion and forward modelling of inverted models – there is extensive literature on this type of approach: see a Google Scholar search for “downscaling petroleum models” or “downscaling petroleum reservoir models”. These normally involve statistical models, multifractal modelling or prior assumptions. The ultimate use of such modelling is not to find a 1:1 relationship between reality and the model, but rather to model uncertainty within cells or between cells, or even to derive possible well-log responses in fictitious production wells. This then provides the focus for what-if scenarios and risk analysis.
Dynamic down-scaling is a slightly different approach and is tested against observations. Again, this is used extensively in the oil industry and has an extensive body of literature. It is used successfully as part of the history-matching process and economic modelling. I’d guess the process is the same as the one proposed in the paper being discussed. One takes a coarse-grained grid, refines the resolution for part of the cellular model (producing a new higher-resolution model), and then tests it against observations; if it improves the performance of the model (locally) then it is selected, and this is then further refined into a new cellular model. This works really well where there are global issues that can be modelled at large scales but local issues that are not independent of the global model. It avoids the need for prohibitive processing power while satisfying the need to marry global influence and local detail.
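As a rough sketch of that “refine locally, keep it only if it beats the coarse model against observations” loop (a toy 1-D illustration with invented data, not actual reservoir or climate code):

```python
# Rough sketch of the refine-and-test loop described above (toy example,
# not actual reservoir- or climate-modelling code). A coarse 1-D field is
# locally refined; the refinement is kept only if it reduces error against
# observations in that region.
import numpy as np

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

# Coarse model field (one value per coarse cell) and some observations on a
# finer grid covering cells 2..4 only.
coarse = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0])
coarse_x = np.arange(len(coarse))
fine_x = np.linspace(2.0, 4.0, 9)          # fine grid over the region of interest
obs = 2.0 ** fine_x                        # the "truth" happens to vary smoothly

# Baseline: coarse values simply copied onto the fine grid (blocky).
baseline = coarse[np.round(fine_x).astype(int)]

# Candidate downscaling: linear interpolation between the coarse cells.
candidate = np.interp(fine_x, coarse_x, coarse)

# Keep the refinement only if it beats the baseline against observations.
if rmse(candidate, obs) < rmse(baseline, obs):
    print("refinement accepted for this region; refine further")
else:
    print("refinement rejected; keep the coarse cells here")
```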

cd
Reply to  Willis Eschenbach
January 6, 2015 4:05 pm

Willis
First of all I only included the part of your article I thought relevant to down-scaling as a valid modelling method – I was trying to be helpful; perhaps further reading might help support your article.
On the surface of it they may seem like disparate fields, but they employ many of the same types of discretisation routines (one may deal in polar or stereographic coordinates while the other is Cartesian), and in the end they use similar kinds of cellular gridding. Furthermore, they are often trying to emulate the same thing; hence the elaboration. Ironically, a lot of the people who designed and wrote code for both types of models have worked in both fields.
At no point did I say I was trying to prove that down-scaling was ever used successfully for climate modelling. But perhaps it might yield fruit, based on the success it has found in similar fields.

cd
Reply to  Willis Eschenbach
January 7, 2015 2:14 am

Willis
Thanks for your polite reply.
In the oil industry, they don’t use iterative models
I’m not sure what an iterative model is to be honest. As I’m sure you appreciate we commonly use iterative methods during modelling for all sorts of things. We also use iterative methods as the principal step in model construction, so for example seismic inversion will often employ simulated annealing. But whether this is akin to the ‘iterative modelling’ of GCMs I cannot say. Perhaps you could elucidate.
they don’t do well out of sample, it may be better than chance
Yeah, I doubt anyone could argue that they do anything better than a poor job. Although I think it may have been in one of your more recent posts, where you compared observations against the collated models, that the observed data just about tracked where I would’ve ‘drawn’ the lower confidence interval (at 95% confidence). But then that would be clutching at straws.
As a result of all of that, I fear that it’s totally immaterial whether downscaling is used in other fields …
That’s unfortunate. I think there is much to be gained from reading those who have been there and done it – albeit in a different domain or ‘paradigm’. Personally I can see why they’re doing it, given the limits of computing power. However, if they’re using it – and I’m not putting words into your mouth, merely reading between the lines as I see it – as a way to deflect attention from the GCM failures and to spin that there’s life in the ‘old dogs’ yet, then that is shameful and may reflect a crisis in the modelling community.

January 6, 2015 5:11 am

I think downscaling will be useful if the geography has a large influence on the climate. Similar problems should exist for high-resolution weather forecasts.

cd
Reply to  Paul Berberich
January 6, 2015 7:36 am

Paul
I think Dyson said that the models, when they limit their scope, do a good job of explaining meteorological phenomena. I guess this is a move in the right direction, accepting that perhaps global modelling is just far too ambitious. Best to use the global models to capture gross trends and then augment these locally at a regional scale.
So my impression is that your point is right:
…geography has a large influence on the climate…
Perhaps the approach now is that global modelling is the sum of its regional parts, but with lots of synergy between the two scales.

cd
Reply to  Paul Berberich
January 6, 2015 7:39 am

Paul, by “limit their scope” I mean limited geographically.

January 6, 2015 6:30 am

In the oil industry down-scaling is an important

Here is the context of the original comment.

My question for you, dear readers, is just what is the appropriate test of the relevance of any given downscaling of a climate model?
Bear in mind that as far as I know, there are no studies showing that downscaling actually works.

The climate model part is kind of relevant if one is going to quote…

Bubba Cow
Reply to  Mark Cates
January 6, 2015 7:10 am

cd said “Dynamic down-scaling is a slightly different approach and is tested against observations”, and that made it relevant, at least for me, because they make no effort in the climate modelling to match output with ground truth.
So I agree with Steven Mosher and Willis that the “appropriate test(s) of relevance” would be real-world data: maybe snow, maybe heat …

cd
Reply to  Bubba Cow
January 6, 2015 7:21 am

Bubba
I think they do. I don’t know in this case but they do tune their models. I suspect they do something similar here.

cd
Reply to  Mark Cates
January 6, 2015 7:31 am

Climate models and reservoir models share a lot in common. They both ‘discretise’ space into voluminous cells, usually defined by a corner-point grid (CPG). The flow simulators have all the same issues that climate models have, particularly when dealing with the transfer of matter and energy through the system (turbulent vs laminar flow, etc.). The same dynamic down-scaling techniques are likely to be employed, because the down-scaling issues are exactly the same: higher-resolution CPGs.
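As a toy illustration of that shared data structure (not any particular simulator’s grid format), a cell can be thought of as a box with per-cell properties attached, and down-scaling a region amounts to replacing one coarse cell with a block of finer cells:

```python
# Toy sketch of a corner-point-style cell (not any particular simulator's
# format): a box with per-cell properties; down-scaling a region replaces
# one coarse cell with a block of finer cells that inherit its value.
from dataclasses import dataclass

@dataclass
class Cell:
    x0: float            # west edge
    x1: float            # east edge
    y0: float            # south edge
    y1: float            # north edge
    temperature: float   # per-cell property (could be pressure, saturation, ...)

def refine(cell: Cell, n: int = 2) -> list:
    """Split one coarse cell into an n-by-n block of finer cells."""
    dx = (cell.x1 - cell.x0) / n
    dy = (cell.y1 - cell.y0) / n
    return [
        Cell(cell.x0 + i * dx, cell.x0 + (i + 1) * dx,
             cell.y0 + j * dy, cell.y0 + (j + 1) * dy,
             cell.temperature)            # coarse value as the first guess
        for i in range(n) for j in range(n)
    ]

coarse_cell = Cell(0.0, 100.0, 0.0, 100.0, temperature=288.0)  # one ~100 km cell
fine_cells = refine(coarse_cell, n=4)                          # sixteen ~25 km cells
print(len(fine_cells), fine_cells[0])
```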

Craig Loehle
January 6, 2015 7:26 am

The problem is they want to apply an academic criterion (improves our understanding) when agencies are using downscaled results for planning for flood control, for water supply, for endangered species risk, for crop insurance risk….

cd
Reply to  Craig Loehle
January 6, 2015 8:06 am

Craig
The point I was making was that, while down-scaling may or may not work for climate modelling, it is a valid modelling method, and one employed routinely and successfully in other areas that deal with similar issues in a similar manner: cellular models. There is a large body of literature that might be of use as a starting point.

Kevin Kilty
January 6, 2015 9:17 am

Influences from the weather or climate in adjacent regions flow through the boundaries of a region of interest. In order to accurately down-scale one would need to couple these models or credibly specify these boundary influences. There is an element of circular reasoning here.
In the 1960s and 1970s there arose in the discipline of image processing the idea of super-resolution. The whole effort foundered because it was predicated on the idea of being able to use detailed information in an image that in fact the optical system had removed. This effort seems similar.

Kev-in-Uk
January 6, 2015 10:29 am

After reading this, I wondered if anyone has tried to downscale real (or even modeled?) temperature data for near-urban to urban regions to ‘model’ the UHE that we know exists? Thereafter, perhaps kind of reversing the process, we could have an idea of the possible ‘real’ effect of UHE on near-urban and urban stations? At least to my mind, that might be useful………

logos_wrench
January 6, 2015 1:38 pm

Climate Science has become a byword and a laughing stock. What a joke.

Steve Garcia
January 6, 2015 5:26 pm

Modeling is a good thing for such things as building and bridge design – where ALL of the formulae are proven against hard reality.
Modeling something on the leading edge of a science where all or almost all of the formulae are untested against the real world is just really, really stupid. Oh, it wouldn’t be so bad if they actually ACKNOWLEDGED that the models are only rough guidelines. Actually, NO, that isn’t true at all. If the underlying connection to reality isn’t there, sorry, but then the models don’t mean and CANNOT mean anything. They are talking about real world climate, and if they never test the models against that real world, it is all Looney Tunes cartoons – where a coyote can spin his back feet for several seconds in one spot 10 feet off a cliff before finally falling, or the coyote can have an anvil fall on his head without dying. Sure, people can illustrate it, but does it have any connection to reality?
Th-th-th-th-that’s All, Folks!

u.k.(us)
Reply to  Steve Garcia
January 6, 2015 11:58 pm

“Modeling is a good thing for such things as building and bridge design – where ALL of the formulae are proven against hard reality.
===========
The “Big Dig” came up against some hard reality; luckily it is only taxpayer money.
Otherwise someone might have to pay for it.
I’m sure everyone is doing their best……
Wonder what I could do with 14 billion dollars?
Wiki:
“The Big Dig was the most expensive highway project in the U.S. and was plagued by escalating costs, scheduling overruns, leaks, design flaws, charges of poor execution and use of substandard materials, criminal arrests,[2][3] and one death.[4] The project was originally scheduled to be completed in 1998[5] at an estimated cost of $2.8 billion (in 1982 dollars, US$6.0 billion adjusted for inflation as of 2006).[6] However, the project was completed only in December 2007, at a cost of over $14.6 billion ($8.08 billion in 1982 dollars, meaning a cost overrun of about 190%)[6] as of 2006.[7] The Boston Globe estimated that the project will ultimately cost $22 billion, including interest, and that it will not be paid off until 2038.[8] As a result of the death, leaks, and other design flaws, the consortium that oversaw the project agreed to pay $407 million in restitution, and several smaller companies agreed to pay a combined sum of approximately $51 million.[9]”
—-
No one talks about this abuse of taxpayers’ money??

Arno Arrak
January 9, 2015 2:24 pm

The purpose of models was to predict what global temperature we should expect ahead. At least, that is why Hansen introduced them in 1988. His attempts at predicting future temperature were atrocious, but getting supercomputers was supposed to fix that. Twenty-seven years later they still cannot do it, despite using million-line code. And now we find that they have their own computer-hobby interests that do not tell us anything about temperature. The project is just a waste of taxpayers’ money and should be closed down.

cd
Reply to  Arno Arrak
January 14, 2015 3:35 am

Arno
They’ve not been a complete waste of money. There will be a trickle-down effect. So while, as you say, they’ve been very poor at predicting climate change, the advances made in computational and numerical methods have been more significant, such as innovative ways of using parallel processing even for sequential processes. This will lead to better systems in a whole host of other areas.

Reply to  cd
January 14, 2015 5:47 am

cd commented on

They’ve not been a complete waste of money. There will be a trickle-down effect. So while, as you say, they’ve been very poor at predicting climate change, the advances made in computational and numerical methods have been more significant, such as innovative ways of using parallel processing even for sequential processes. This will lead to better systems in a whole host of other areas.

This presumes that climate research has been the driving factor in supercomputer development; I would be surprised if this were true. Now, I’m sure the extra business was appreciated, but a climate model isn’t that complicated from an execution-environment standpoint. It is amenable to being run on a massively parallel server, but those are the “Plain Jane” supercomputers. I spent a while at Cray about a decade ago; Red Storm just had a lot of AMD 64 processors, and the really fancy stuff wasn’t aimed at massively parallel systems (at least from what I saw).