Guest Post by Willis Eschenbach
In a recent issue of Science magazine there was a “Perspective” article entitled “Projecting regional change” (paywalled here). This is the opening:
Techniques to downscale global climate model (GCM) output and produce high-resolution climate change projections have emerged over the past two decades. GCM projections of future climate change, with typical resolutions of about 100 km, are now routinely downscaled to resolutions as high as hundreds of meters. Pressure to use these techniques to produce policy-relevant information is enormous. To prevent bad decisions, the climate science community must identify downscaling’s strengths and limitations and develop best practices. A starting point for this discussion is to acknowledge that downscaled climate signals arising from warming are more credible than those arising from circulation changes.
The concept behind downscaling is to take a coarsely resolved climate field and determine what the finer-scale structures in that field ought to be. In dynamical downscaling, GCM data are fed directly to regional models. Apart from their finer grids and regional domain, these models are similar to GCMs in that they solve Earth system equations directly with numerical techniques. Downscaling techniques also include statistical downscaling, in which empirical relationships are established between the GCM grid scale and finer scales of interest using some training data set. The relationships are then used to derive finer-scale fields from the GCM data.
So generally, “downscaling” is the process of using the output of a global-scale computer climate model as the input to another regional-scale computer model … can’t say that’s a good start, but that’s how they do it. Here’s the graph that accompanies the article:
In that article, the author talks about various issues that affect downscaling, and then starts out a new paragraph as follows (emphasis mine):
DOES IT MATTER? The appropriate test of downscaling’s relevance is not whether it alters paradigms of global climate science, but whether …
Whether what? My question for you, dear readers, is just what is the appropriate test of the relevance of any given downscaling of a climate model?
Bear in mind that as far as I know, there are no studies showing that downscaling actually works. And the author of the article acknowledges this, saying:
GARBAGE IN, GARBAGE OUT. Climate scientists doubt the quality of downscaled data because they are all too familiar with GCM biases, especially at regional scales. These biases may be substantial enough to nullify the credibility of downscaled data. For example, biases in certain features of atmospheric circulation are common in GCMs (4) and can be especially glaring at the regional scale.
So … what’s your guess as to what the author thinks is “the appropriate test” of downscaling?
Being a practical man and an aficionado of observational data, me, I’d say that on my planet the appropriate test of downscaling is to compare it to the actual observations, d’oh. I mean, how else would one test a model other than by comparing it to reality?
But noooo … by the time we get to regional downscaling, we’re not on this Earth anymore. Instead, we’re deep into the bowels of ModelEarth. The study is of the ModelLakeEffectSnow around ModelLakeErie.
And as a result, here’s the actual quote from the article, the method that the author thinks is the proper test of the regional downscaling (emphasis mine):
The appropriate test of downscaling’s relevance is not whether it alters paradigms of global climate science, but whether it improves understanding of climate change in the region where it is applied.
You don’t check it against actual observations, you don’t look to see whether it is realistic … instead, you squint at it from across the room and you make a declaration as to whether it “improves understanding of climate change”???
Somewhere, Charles Lamb is weeping …
w.
PS—As is my custom, I ask that if you disagree with someone, QUOTE THE EXACT WORDS YOU DISAGREE WITH. I’m serious about this. Having threaded replies is not enough. Often people (including myself) post on the wrong thread. In other cases the thread has half a dozen comments and we don’t know which one is the subject. So please quote just what it is that you object to, so everyone can understand your objection.
Get it here: http://www.leif.org/EOS/Downscaling.pdf
Thanks, but no thanks.
Life’s to short etc.
That’s “too”.
How much money did this “scientist” receive for this drivel?
http://www.nsf.gov/awards/award_visualization_noscript.jsp?org=EF&region=US-CA&instId=0013151000
thus: $358,775
well worth it !
(the number of the NSF grant is at the bottom of the article in Dr. Svalgaard’s link)
I haven’t a clue as to the reasoning behind the concept of downscaling, presumably the original author also thought the same.
Is this just more smoke and mirrors in ‘climate science’?
Peter, I should think that the value of downscaling (if it works) would be that it is one of the many factors that go into planning. For example, you and a few of your wealthy drinking buddies think that Forlorn Hope, Nevada would be a great place for a ritzy ski area. So you rustle up some financial commitments and do a business plan. Your ski area consultant identifies four possible locations in Gawdawful Gulch. Before you start the lengthy process of getting the BLM to permit your project, there are hundreds of things you need to know or guess at. e.g. location A is closer to the highway and has more water for snowmaking, but will it have reliable snow most Winters? Or should you go with location B or C higher up in elevation but with different potential problems.
I wouldn’t be at all surprised that downscaling — if it turns out to be workable — is useful and routine many decades from now. But we will need climate modeling that actually models climate — which the current models demonstrably do not.
Don K said:
“Before you start the lengthy process of getting the BLM to permit your project, there are hundreds of things you need to know or guess at. e.g. location A is closer to the highway and has more water for snowmaking, but will it have reliable snow most Winters? Or should you go with location B or C higher up in elevation but with different potential problems.”
This is true, however any sane person would base the decision on observed conditions at these 4 locations, not on some computer model of what conditions MIGHT be in an imaginary world.
In the UK local councils were told to expect warmer winters. The Met Office, using its soooper duper computers, told them so on a downscaled level. In reality they ran out of grit for the roads, leading to massive inconvenience and deaths.
The Met Office in its infinite wisdom decided to abandon their downscaling for public consumption.
“But we will need climate modeling that actually models climate “
So that’s “never” then.
“A starting point for this discussion is to acknowledge that downscaled climate signals arising from warming are more credible than those arising from circulation changes.”
Maybe I misunderstand what the author is saying, but it seems to me the study is biased toward warming from the get-go. The rest just seems like a method to generate ‘evidence’ to support the supposedly more credible climate signals arising from warming.
It looks like another example of circular reasoning. You assume climate change and then not surprisingly all your test results are the effects of climate change. But that is just one of the problems with this fiasco.
Einstein advised “We cannot solve our problems with the same level of thinking that created them”
– but I guess they know better.
Although worded awkwardly, I think what they mean to say is there is greater uncertainty in downscaling local precipitation patterns (for example) than local temperature patterns. A reasonable and welcome admission. In my view there is too much uncertainty in both. Downscaling is a worthy problem to tackle, but seems premature to report results.
I want to participate in an Art Contest where I get to include in my submission: “The appropriate judging criteria for my project is the origin of the materials used to make the project and my race.”
Bet that would go over well.
Actually it probably would. The way they judge art contests is crazy.
Whether it works. That was obviously what the article was about to say, or equivalent. Except that, if it had, there would have been no Willis article.
From the paper:
“DOES IT MATTER? The appropriate test of downscaling’s relevance is not whether it alters paradigms of global climate science, but whether it improves understanding of climate change in the region where it is applied. The snowfall example above meets that test. In many places, such fine spatial structures have important implications for climate change adaptation. In the urban areas of the United States and Canada most affected by lake effect snow, infrastructure and water resource planning must proceed very differently if lake effect snow is not projected to decrease significantly….”
So they intend to use the downscaling BEFORE they determine whether it works.
The author is just trolling for grant money to study “Downscaling.”
Maybe I have it wrong, but if B is A on a local scale, I have trouble seeing B as a derivative of A. It doesn’t seem that the average of B over 5 zones, as A is, would look like A.
Willis writes: “Somewhere, Charles Lamb is weeping …”
And someone, me, is laughing at the absurdity of climate modelers.
“Climate scientists, I suppose, were children once.” ?
Excuse my ignorance, Hubert, or have I missed something?
Me too. I’ve worked with computer models in a number of environments (modelling chemical processes and chemical plant design). It’s absurdly hard to even come close to reality in a single pass. Engineers use models to outline or scope a problem, not design a bridge or building. Climate models are not even inadequate.
Even if the modelers knew ALL of the variables involved in describing climate (which they don’t) and even if they knew the variables to within (say) 99%, the models would be wildly inaccurate after several passes, let alone after sufficient passes to project the next 100 years. The errors simply accumulate too fast and overwhelm the result.
Pointman wrote a great piece a few years ago (https://thepointman.wordpress.com/2011/01/21/the-seductiveness-of-models/) which just about covers it.
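As a rough illustration of the error-accumulation point above, here is a toy Python sketch. It is not anything like a real GCM: it just iterates a simple, perfectly known nonlinear equation (the logistic map, an assumption made purely for illustration) from two starting states that differ by about one percent.

# Toy illustration only: a simple chaotic map, iterated from two initial
# states that differ by roughly 1%. The governing equation is known exactly,
# yet the trajectories diverge completely within a few dozen steps.
import numpy as np

def iterate(x0, r=3.9, steps=50):
    """Iterate the logistic map x -> r*x*(1-x) and return the trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return np.array(xs)

a = iterate(0.500)   # the "true" initial state
b = iterate(0.505)   # the same state, known only to about 1%

for step in (0, 10, 20, 30, 40, 50):
    print(f"step {step:2d}: difference = {abs(a[step] - b[step]):.4f}")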
My suspicion about the concept of “downscaling” is that if you have a model of a system sampled at some large-scale intervals, then you are going to interpolate to find HIGHER FREQUENCY values for intermediate points. So you take the climate for the SF Bay region, and you interpolate the climate for Los Gatos, or Emeryville, from that.
Something tells me that this is a huge violation of the Nyquist sampling theorem.
Given that the telephone system works, and that it is entirely dependent on the validity IN PRACTICE of the Nyquist theorem, in that both time and frequency multiplexed bandwidth and capacity considerations are pushed to the max, I’m not a subscriber to interpolation of higher frequency out of band “data”.
Sorry G
exactly – you’ll get pixelated bs
“The appropriate test of downscaling’s relevance is not whether it alters paradigms of global climate science . . .” what?
How about rain, snow, temperature . . . something real and measurable?
No No, Ya see, once you have downscaled all this fine detail locally, you can integrate it back and improve the global GCM! sarc off/
Pixelated spaghetti.
This is almost the inverse of Nyquist sampling (unless they are out collecting new data for grids every couple of hundred meters… ha) since they’re creating dozens of modeled cells from the output of a single modeled cell. I learned that you oversample a minimum of 2x for temporal (frequency) targets, and planar spatial targets by at least 4 times your desired final resolution. I guess they’ve figured that since they can make up temperature data for a third of the “stations” in the US it’s acceptable to create a completely ethereal 2nd generation out of a 1st generation which has also lost all roots to the realm of reality.
precisely – “make up temperature” It is not as if any physical phenomenon is going to be sampled.
George – Exactly
Interpolation gives you NO NEW information. If I have samples every second I can interpolate 9 additional samples in between according to some model of the signal such as bandlimited (Nyquist) or polynomial fitting (like a straight line – a first-order polynomial). Or my model could be that all nine samples in between are defined to be 13.33. The model IS (at best) separate information, and it could be trivial, or complicated and likely wrong, or just useless.
Bandlimited interpolation works just dandy for things like digital audio (so-called “oversampling”). But that’s because we have good reason to believe the model (low-pass) is correct for music (our ears). Getting to the comment Willis made about checking the interpolated result against the real world – well – it SOUNDS great. That’s the test all right! We have to know any proposed model is reasonably based. Until proven, assume it’s quite likely bogus.
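To put a number on “different model, different interpolation,” here is a minimal Python sketch using an invented signal (nothing from the paper): the same coarse samples fed through two interpolation models give two different sets of intermediate values, because those values come from the model, not from any new information.

# Minimal sketch: same coarse samples, two interpolation models, two
# different "fine-scale" series. The extra points reflect the model
# assumptions, not new information about the signal.
import numpy as np
from scipy.interpolate import interp1d

t_coarse = np.arange(0, 10)                             # one sample per "second"
y_coarse = np.sin(0.8 * t_coarse) + 0.3 * np.cos(2.5 * t_coarse)

t_fine = np.linspace(0, 9, 91)                          # 9 extra points per interval

linear = interp1d(t_coarse, y_coarse, kind="linear")(t_fine)
cubic = interp1d(t_coarse, y_coarse, kind="cubic")(t_fine)

print("max disagreement between the two interpolations:",
      round(float(np.max(np.abs(linear - cubic))), 3))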
Agreed and I call foul on this. The big GCM downscaled (is that even a real world word?) to a regional model (is there one?) by interpolation (where, you are right, there is NO NEW information) produced NEW DATA = paint by numbers OR this is a fiction, by another name, Fraud. So, we are to believe best guess interpolation of shades of blue = white = snow. How many feet?
Bubba Cow –
No I was not familiar with the term “downscaling”. It is (as I look) used in a previous reference (Kerr 2011) in Science, so I guess it is OK. Many would say “upsampling” (a higher sampling rate). That’s interpolation. Sampling rate (frequency) and sampling intervals (spacing between samples or “scale” I guess) are reciprocals.
Give someone the data (data fixed), and a model for what the data supposedly fits (presumably a choice) and everyone gets the same interpolated data, over and over. Choose a different model and you get a DIFFERENT interpolation, but nothing new in the sense that everyone won’t get the same intermediates. Compare the intermediates to the real world, and you will have SOME possibility of evaluating a model as being valid, or not. Matching the real world is the test of course.
Bernie –
Thanks for your reply. Because I used a naughty word, I was in moderation for a bit. Rightfully so.
Upsampling?
I am unfamiliar with this. I am familiar with selecting a sampling rate based upon the unfolding events and how frequently one needs to know of changes in those events, with increases in rate to assure you don’t miss something of value, properly treated and within the resolutions of the sensors, DAQ, etc.
This does not seem to “fit” anywhere in empirical study – which I was protesting. If “downscaling” is supposed to reveal information that was never captured, it is a hoax. It might actually exist, but it has not been observed. I try to be fair.
I agree with information needing to fit reality. I am questioning that this is anywhere near reality.
Best
Bubba Cow –
Since you are familiar with ordinary sampling, this analogy might help. Sorry it’s a bit long. First of all, you know from ordinary sampling that you can fully recover a properly sampled analog signal, and you could have sampled it at a higher than minimum rate. Interpolation or upsampling is accordingly a DIRECT digital-to-digital way of going to the higher rate WITHOUT returning to analog as an intermediate step. Every CD player (for example) does this in real time. But more simply, it is pretty much ordinary interpolation of intermediate values.
The upsampling or interpolation (downscaling in the climate paper) is analogous to increasing the number of pixels in an image. It IS intuitive, as you suggest, that you don’t get MORE resolution. If the model for the image is low-pass (bandlimited) you get a “blurry” image. You get rid of the sharp pixel edges, making the image more comfortable to view (and bigger) but don’t see anything new. If your picture had a distant person with a face with an unresolved nose, you don’t interpolate the nose. Well, not generally. But if your image “model” was not strictly smooth, but had very sophisticated routines to identify human faces, it might recognize that the person should have a nose, and on that assumption, figure out how a nose WOULD have been blurred perhaps from a black dot into 9 pixels, and do a credible (deconvolution) reconstruction. Pretty much artificial intelligence.
In the case of climate, we might start with a very simple model that heat in the tropics moved to the poles. If we “downscaled” this biggest view, we might then consider air masses over individual oceans and continents. Then downscaling again and again, air masses over unfrozen lakes, that sort of thing. We might even look for tornados, as we might have looked for noses! The output at the lower resolution becomes input to the higher resolution. Makes sense. But you have to know how to do it right, and they almost certainly don’t (can’t).
In signal processing there is a major field of “multi-rate” or “multi-resolution” signal processing (filter banks – an FFT being a perhaps overly-simple filter bank). Good stuff. Systematically and efficiently take the signal apart. But if you cross over a channel, you are wiped out! You don’t make this error in engineering practice – only as an educational exercise!
By “crossing over a channel” I am thinking about something like what the Science paper here discusses as “bias” – like erroneously shifting the jet stream. They say – don’t have bias. Easy to say. Many of us would suggest not trying to take short cuts, and pretending to know more than we possibly can.
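A rough Python illustration of the pixel analogy above, using an invented 4x4 “image” and nothing more: quadrupling the resolution by linear interpolation smears the one bright pixel over many pixels but recovers nothing about its true shape.

# Rough illustration of the pixel analogy: upscaling a tiny made-up image
# by interpolation spreads the bright spot around but invents no detail.
import numpy as np
from scipy.ndimage import zoom

coarse = np.zeros((4, 4))
coarse[1, 1] = 9.0                     # the "unresolved nose": one bright pixel

fine = zoom(coarse, 4, order=1)        # 4x more pixels, linear interpolation

print("nonzero pixels, coarse  :", int((coarse > 0).sum()))
print("nonzero pixels, upscaled:", int((fine > 0.01).sum()))
# One bright pixel becomes a blurry blob of dozens of pixels; no "nose"
# (no genuinely finer structure) has been recovered.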
Actually, the brain does a pretty good job at ‘creating more information’. We all have a ‘blind spot’ in our eyes where the optical nerves leave the eye. We don’t see the blind spot because the brain downscales what we actually do see and fills in with what it believes we would see if there were no blind spot.
Good Morning, Bernie
I appreciate your explanation and can understand how such systems and techniques might/do work in EE and music. Slept on it, in terms of any usefulness to predicting weather (much less climate) and had to feed the wood stove here last night – well below 0F and good wind chill. Good for our carbon footprint.
Also appreciate Lsvalgaard’s input and I understand how human sensory systems can interpret and even fill in the blanks transitioning between notes and “compute” pattern recognitions. But here we have a receiver – human sensory apparatus – that has immeasurable experience and, as you say, perhaps AI, and generally unfathomable computing capacity. Complex interacting systems. Ever wonder just how you can recognize someone walking in the visual and somewhat obscure distance by cognitively differentiating his/her gait?
We actually tried to model something like this decades ago as grad students with lots of instruments, some time on our hands, and as a purely educational exercise. We had a reasonable gob of empirical data regarding human movement kinematics and kinetics, decent computing power (didn’t care if the model needed to run all night), and the maths were pretty well known. We just wanted to find out if we could identify an envelope of reasonable performance outcomes and what might happen if we pushed an input – say increase velocity a bit. As a colleague from New Zealand decided = just wanking.
There is just too much variability both between and, more importantly, within subjects in that environment and we were able to control some stuff. Too many ways to get from here to there and possibilities for correcting/adjusting errant trajectories . . .dynamical systems that wouldn’t fall for our programming tricks.
Of course we got some envelopes – relevant around known events – but we gave up on using any of that toward generating useful predictive results, realizing we’d just have to go collect more data.
And that is what concerns me here. We were just wanking and not trying to set energy policy and taxation. Doesn’t matter if one now has a super computer to throw at it. Really believe we need to direct funding toward study and learning, rather than fabricating some fantastical future prospect that has so far eluded reality.
Cheers and hope this lands somewhere close to the proper place in the thread.
The ‘optical data infilling’ Leif Svalgaard refers to is often cited as a cause for “Sorry mate, I didn’t see you” accidents where car drivers pull into the path of oncoming motorcycles, often with fatal results. I believe Police drivers are taught techniques to minimise the effect.
lsvalgaard January 5, 2015 at 10:19 pm
Thanks, Leif, but that’s not very relevant to downscaling. While the brain does “infill” the blind spot in the eyes, please note that this does NOT increase the amount of information available. If there is an actual object in the blind spot, no amount of mental infilling will make that object suddenly visible, or partially visible, or even reveal the silhouette of the object.
Instead, the brain just spreads technicolor peanut butter of some kind over the hole. It “patches” the hole in the same way we patch a hole in the plaster—by making it the same color and visual texture as the surroundings.
Obviously, this is not the desired goal of downsampling …
w.
The brain has a ‘model’ [albeit a primitive one] of the surroundings and uses that to downscale [fill in]. The principle is the same as for the [useful] downscaling: use a model to fill in the coarser data.
Before we downscale the model outputs, shouldn’t we first validate that the individual model actually matches at the larger scale? No amount of AI can make a gourmet meal out of the landfill. The models ALL RUN HOT! Downscaling to local environments will not fix the basic problem that the GCM’s are models of a fantasy world that kinda sorta looks like Earth if you squint real hard and click your heels together three times and say: “It really is Earth, It Really Is Earth, IT REALLY IS EARTH!” !
When the physical theory is well known, you can interpolate with some small bit of confidence. The problem is geology, geography and weather have this tendency toward fractal distributions at all scales that makes things difficult to interpolate.
Willis, at Jan 5, 2015 10:19AM you use the term “downsampling”. I fear this is exactly the wrong term – it should be upsampling (interpolation). The paper uses the term “downscaling” which was not familiar to me, but must mean interpolation: the establishment of intermediate locations between existing points and GUESSING what goes there (sometimes successfully).
And to your main point, we test interpolators by starting with a known denser set, throwing away data, and seeing if our interpolation procedure reasonably retrieves what we tossed out. To the extent it fails, we call the residual an “error”. That’s the same as “downscaling” and going to the location to take measurements – just what you said.
But even a small interpolation error might mean almost nothing with regard to adding to understanding. An audio signal, for example, is essentially horizontal (restricted amplitude but broad in time – Fourier components) but CAN be interpolated using a polynomial model (simplest example – a first-order polynomial or straight line). Yet the polynomial is a lousy, misleading model for audio because it is vertical – running to infinity outside a very small local time interval. It works by accident. Well – of course!
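That hold-out test is easy to sketch in a few lines of Python, with a made-up dense series standing in for real observations: keep every fifth point, interpolate back to the full grid, and measure how badly the thrown-away values are missed.

# Sketch of the test described above: subsample a known dense series,
# interpolate back, and compare against the values we deliberately discarded.
import numpy as np
from scipy.interpolate import interp1d

x_dense = np.linspace(0, 20, 401)
y_dense = np.sin(x_dense) + 0.2 * np.sin(7 * x_dense)    # stand-in "observations"

x_kept = x_dense[::5]                                     # the coarse subset we keep
y_kept = y_dense[::5]

recon = interp1d(x_kept, y_kept, kind="cubic")(x_dense)
rmse = np.sqrt(np.mean((recon - y_dense) ** 2))

print(f"RMSE against the discarded points: {rmse:.3f}")
# A large residual means the interpolation model misses the fine-scale
# structure -- exactly the check one would want before trusting "downscaled" output.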
“””””…..
lsvalgaard
January 5, 2015 at 10:19 pm
Actually, the brain does a pretty good job at ‘creating more information’. We all have a ‘blind spot’ in our eyes where the optical nerves leave the eye. We don’t see the blind spot because the brain downscales what we actually do see and fills in with what it believes we would see if there were no blind spot……”””””
Well the brain / eye does fill in as you say; but it really doesn’t “make stuff up”.
The answer is scanning.
You think you are looking in the same place, but your eye is actually scanning so it moves the optic nerve blind spot around, so it really does see everything, and then it concocts a composite of multiple slightly different scanned images to eliminate the hole. The eye and the brain conspire together to create this illusion, that you are looking at a full picture with no holes in it.
And they also fake the colors as well, because some sensors in the eye see good detail, but not colors, and verse vicea. So once again, a little bit of dithering and collectively they fake the whole of what you think you are seeing.
If the original (band limited) climate map, is correctly sampled, a la Nyquist, then sampling theory insists that the original CONTINUOUS function is completely recoverable from just those samples, with no loss of information.
In other words, the samples contain ALL of the necessary information to compute the exact accurate value of that continuous function anywhere and everywhere on the map.
Ergo, interpolation is a perfectly valid process to recover the function value at intermediate points.
The problem is that this creates no NEW information whatsoever. But at least it does tell you what the exact value of the function is at “that point.”.
But these authors are seeming to imply that they create new intermediate point values, that are not just these interpolations, and by definition, these “created values” are necessarily outside the passband of the original band limited signal.
So they are phony and fictitious “information”.
Nyquist violations are serious. If you simply under-sample by a factor of two, so the sampling interval is equal to the full period of the band limit frequency, instead of being half of that period, then the aliased recovered signal has a spectrum that folds over about (B), and then extends all the way to zero frequency.
Now the zero frequency signal is simply the average of all the values.
So if you under-sample by just a factor of two the reconstructed “signal” contains aliasing noise at zero frequency, so the average value of the function is corrupted.
Climatists seem to merrily assume that at least the average of their grossly under-sampled data points is valid. It isn’t. And apparently global weather stations report either a daily max / min Temperature pair, or simply twice a day. That barely conforms to Nyquist, if, and only if, the actual daily cycle is a pure sinusoid; which it isn’t. A fast rise / slower fall more saw tooth like daily temperature heating profile, contains at least a second harmonic component, which is at the sampling rate, not the Nyquist rate, so you already have average value aliasing noise.
And when it comes to the spatial sampling of these Temperature maps, they aren’t even close to Nyquist sanitary.
And the key parameter is the MAX time between samples; NOT the average frequency of the samples.
So putting all your stations in the USA, and a handful everywhere else just does not cut it. It is the maximum sample spacing, which must be less than a half period of the band limit frequency.
Random sampling works great on your digital oscilloscope when looking at a repetitive signal; it saves you the vertical delay line for one thing, but it doesn’t work for a single transient event, which is what this global temperature mapping exercise is all about.
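A back-of-envelope Python check of that max/min point, using an invented but plausibly skewed daily temperature cycle (a fundamental plus a second harmonic, assumed purely for illustration): the (Tmax + Tmin)/2 value that twice-a-day sampling effectively delivers is not the true daily mean.

# Back-of-envelope check: for an asymmetric (non-sinusoidal) daily cycle,
# the (Tmax + Tmin)/2 estimate is biased relative to the true daily mean.
# The cycle below is invented purely for illustration.
import numpy as np

hours = np.linspace(0, 24, 24 * 60, endpoint=False)
theta = 2 * np.pi * (hours - 9) / 24

temp = 15 + 8 * np.sin(theta) + 2 * np.cos(2 * theta)   # fundamental + 2nd harmonic

true_mean = temp.mean()
minmax_mean = 0.5 * (temp.max() + temp.min())

print(f"true daily mean      : {true_mean:6.2f} C")
print(f"(Tmax + Tmin) / 2    : {minmax_mean:6.2f} C")
print(f"bias from max/min    : {minmax_mean - true_mean:+6.2f} C")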
George – good review, but I would mention a few points to clarify.
Specifically, what is the independent variable of the sampling? Climate data could be, true enough, a time sequence, and the climate “signal” is likely sufficiently bandlimited in that sense. (A temperature variation with time is probably, climate-wise, gradual enough.) We can use the Nyquist (Shannon) sampling ideas and sinc (low-pass) interpolation to reconstruct. Old reliable stuff. Things like musical signals may be naturally bandlimited because of mechanical inertia (limiting rates of change).
But I think for the Science paper, the independent variable is space, at least in two dimensions, and perhaps three for best results. The exact SAME math applies of course, but are the signals bandlimited in space? In places, sure. But the situation may be more like an image than an analog time signal. In an image, we CAN have an instantaneous change (pure white touching pure black), and a proper analog pre-sampling filter evades our resources in such a case.
In consequence, we could only rely, if at all, on climate parameters that vary gradually, in the interpolated analysis, on a scale the same as the ORIGINAL spatial sampling interval – no faster. In terms of spatial frequencies, the climate parameter was fair-sampled at the original rate, and you have just interpolated to many “extra” (over)-samples. You of course have NOT (should not have!) increased the spatial bandwidth by raising the cutoff of the interpolation filter. Here the concern is NOT an anti-aliasing effort, but the proper placement of the reconstruction (interpolation) filter – the OTHER half of the problem.
Speaking of lake-effect snow, I have friends nearby in the Buffalo area who rarely get lake effect, while lucky folks just 5-10 miles to the south get 6 feet. Climate may have some sharp edges that can’t be interpolated. You can’t interpolate a 10 mile transition from 100 mile samples.
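A toy Python version of that last point, using a completely made-up snowfall profile with a narrow “lake-effect” band: once the field has been sampled every ~100 km, no interpolation scheme can put the band back.

# Toy illustration: a narrow snow band (made-up profile) sampled at ~100 km
# spacing is simply gone; interpolating the coarse samples cannot restore it.
import numpy as np
from scipy.interpolate import interp1d

km = np.arange(0, 501, 10)                               # 10 km "truth" grid
snow = np.where((km > 230) & (km < 260), 180.0, 20.0)    # cm; narrow lake-effect band

coarse_km = np.arange(0, 501, 100)                       # ~GCM-scale sampling
coarse_snow = np.interp(coarse_km, km, snow)             # what the coarse grid "sees"

downscaled = interp1d(coarse_km, coarse_snow, kind="cubic")(km)

print("true peak snowfall        :", snow.max())
print("peak in 'downscaled' field:", round(float(downscaled.max()), 1))
# The 100 km samples straddle the narrow band, so the information was lost
# at sampling time -- no amount of interpolation brings it back.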
Bernie,
Most of the sampled data systems that are well established, are time domain samples of a continuous (in time) analog signal. But sampling is not restricted to sampling over single independent variables.
And in the case of “global Temperature mapping” , we have both spatial sampling and Temporal sampling.
The latter is often just max / min thermometer readings, so it is twice a day temporal sampling.
Spatial sampling seems to be any unoccupied rock on which they can put a spare Stevenson Screen or equivalent that they have sitting around. The spatial sampling is Rube Goldberg in the extreme, when it comes to surface data. The Satellite data sets presumably have a more organized sampling stratagem.
But the hairs on my neck stand up on goose bumps, when I realize that nobody pays attention in the surface case, to exactly when the samples are taken. Perish the thought that anyone would bother to sample all stations at exactly the same time.
You mean you really want to see the global Temperature map at a specific instant in time ???
Well you get the picture.
I presume that we have both spatial and Temporal sampling of almost any variable that is relevant to global weather / climate.
G
George –
I think that your comments involve what has been discussed here as TOB (Time of Observation Bias). Even when you don’t care about the actual variation during any one day, daily measurements in search of year-long seasonal trends would have serious errors if you measured a daily temperature at 7 AM one day and at 3 PM the next. At minimum, you should probably take a daily temp at the exact same time each day. Also, as I recall there are problems recording the daily max or min for a fixed 24-hour frame. If the temp happens to have a max or min close to the transition from one day’s series to the next day’s, there is a good chance of getting this same (actual) extreme for both days.
But these problems are fairly easy to understand, and probably to fix – and to defend the adjustments. Unlike so many “adjustments” where you decide which way to adjust (what you want the outcome to be), and then look for some excuse! Do people do that?
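Here is a small Python sketch of that time-of-observation point, with invented numbers (a smooth seasonal drift plus a fixed daily cycle, both assumptions): reading the thermometer at 7 AM one day and 3 PM the next manufactures a large day-to-day swing that a consistent reading time does not show.

# Small sketch (invented numbers): the same smooth climate, read at an
# inconsistent time of day, looks far noisier than it really is.
import numpy as np

def temperature(day, hour):
    """Assumed smooth climate: slow seasonal drift plus a daily cycle."""
    seasonal = 10.0 + 0.05 * day
    daily = 6.0 * np.sin(2 * np.pi * (hour - 9) / 24)
    return seasonal + daily

days = np.arange(30)
consistent = np.array([temperature(d, 9) for d in days])             # 9 AM every day
alternating = np.array([temperature(d, 7 if d % 2 == 0 else 15)      # 7 AM / 3 PM
                        for d in days])

print("std of day-to-day changes, consistent time :", round(float(np.std(np.diff(consistent))), 2))
print("std of day-to-day changes, alternating time:", round(float(np.std(np.diff(alternating))), 2))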
Snow fall meets the test clearly.
So do heat waves
Are you saying we will discover there is such a thing as lake effect snowfall? It happens every year for a very local reason that GCM derivatives are not going to see. Wisconsin’s or Ohio’s weather men are a better bet than the models. How can models that don’t work give details of local climate, which is what it is?
QED.
OK, Steven, if there has been a test, where can it be viewed?
My car had its bi-annual test recently. All the lights worked (looked pretty) and the dashboard units worked, but no brakes, no engine and no fuel.
So does somethings.
Not necessarily. The example they used is the path of the jet stream not being determinant; the jet stream controls what kind of weather I get: when it is north of me I get warm moist air, when it is south I get cool dry air. And if the area bounded by the jet stream changes, that alone could change average surface temps.
Policy-based and funded science (progressive enlightened liberalism) is more “predictive” than classic/traditional science?
With downscaling I just discovered the temperature on my birthday in late August for my geographical location will be 86.477 degrees F; winds a mild 5.328 mph out of the WNW; humidity a pleasant 48% with a generally clear sky with only 10 percent cirrus coverage at 19,764 feet.
Nice!
You lucky dog!
Babsy, you’re invited! Dress accordingly! No chance of rain!
And bring Bob Tisdale with you — we’re gonna roast hot dogs and climate models!
Is that 2015? My birthday in November 2033 will be -5.327C, and windy with gusts to 38.884 kph… 🙁
Bummer, Code! I’d cancel it now if I were you…
Maybe something indoors would be advisable. What?!? No heat?!?
Tell me again what Century this is. It sure feels like the 14th…
But, have you checked Al Gore’s travel calendar?
That should be an input variable into any Regional model.
” A starting point for this discussion is to acknowledge that downscaled climate signals arising from warming are more credible than those arising from circulation changes.”
…tells me everything I need to know right at the beginning.
Good article Willis. Thanks.
I missed that piece of science. Thanks.
Curious and Luke and/or Willis,
I’d copied that in preparation to paste, all the while wondering if they answered “why”? “Why not cooling” as a choice of words also crossed my mind, if one doesn’t presume some sort of predetermined orientation.
Willis, was there discussion as to the “why”? Just curious, as it seems a bit moot anyhow.
Comparing to reality would indeed be a logical approach, but we are dealing with climastrology. I would recommend the engineering approach – do a simple sanity check before detailed analysis.
Ask – “Does “downscaling” now allow the models to run CFD (computational fluid dynamics) in the vertical dimension? Or are vertical non-radiative energy transports still parametrised, not computed?”
The answer is clear, “downscaling” GCMs fails the most basic sanity check. Increasing horizontal resolution is worthless without the addition of CFD in the vertical, something totally lacking from current GCMs.
The bottom line is that our radiatively cooled atmosphere is cooling our solar heated oceans. An atmosphere without radiative cooling can’t do that. Any climate model that shows the atmosphere slowing the cooling of the oceans will always fail miserably.
I can’t take any more of these computer models passing as science.
They need to be taken to the historical trash heap of really bad ideas.
In the author’s circle, this is taken seriously. Quite sad.
They can not see from the outside, looking in. How silly they look using GCMs as inputs and then saying, “climate signals arising from warming are more credible than those arising from circulation changes.”
And even sadder is that Science magazine and the AAAS has kowtowed to Climate Change political correctness and lost its way.
This says you get the results you want (“ought to be” where “ought” is informed by divine guidance ahead of the test) by using inaccurate information (results from submitting GCM data to defective regional models). What’s not to like?
Can they downscale their model fine enough to project the weather over our house for the next 50-100 years? I am willing to pay my fair share in Teraflops.
Sorry, I pay that already, electricity this year is 10% up thanks to the certificates for “green” renewables…
Why the emphasis on micro-modelling weather patterns when the propaganda meme is: “weather is not climate”?
It’s the equivalent of counting Polar Bears per iceberg.
“SIGNAL IN, SIGNAL OUT. Fortunately, there are regional scale anthropogenic signals in the GCMs that are not contaminated by regional biases.”
so they are just contaminated by the GCM, IPCC and modeler biases??? just wow….
“Somewhere Charles Lamb is weeping …” And somewhere Edward Lorenz can’t stop chuckling.
+1
Perhaps someone should send the author the reference to Lorenz’s chaos paper (my bold)
http://journals.ametsoc.org/doi/abs/10.1175/1520-0469%281963%29020%3C0130%3ADNF%3E2.0.CO%3B2
It sounds like “downscaling” is really weather forecasting with a new name, and we all know how accurate that is. (Sorry, Anthony)
Makes sense: when your livelihood depends on ‘models’, then of course you’re going to claim that any issues with models can be fixed with more models, and that is all you need to do to check the models.
No pay cheques for the authors in saying the models don’t work, and no future career either in the modelling area once you admit to just how rubbish they are.
Model output (or interpolations) are NOT data.
Arguably, in climate science, model inputs are not data either. Once data are “adjusted”, they are no longer data, but merely estimates of what the data might have been if they had been collected timely from properly selected, calibrated, sited and installed instruments.
I have been chuckling at the commenters above “playing” with insignificant digits. The entire concept of estimates to two “significant” digits greater than the actual data escapes me.
I consider the downscaling a 100% success. Anything – I mean anything – can only improve our understanding of climate change.
Me too. Down scale this immediately. Bridge out.
“The appropriate test of downscaling’s relevance is not whether it alters paradigms of global climate science, but whether it improves understanding of climate change in the region where it is applied.”
I believe it is meant as a stronger condition than just comparing the result with observations.
For example, let’s assume we had very heavy snow last year. If our downscaled model predicted no snow for that period, it is not helping any understanding. So not only must it match observations, it must also provide answers as to why that happened.
Of course, garbage in, garbage out. It all relies on GCMs providing realistic forecast on global scale as well.
That ‘Downscaling Results’ graph reminds me of one of those old “Paint By Numbers” kits, which a friend’s Mom was always completing and hanging all over the walls. None of her efforts could have been mistaken for art, nor her home, for a gallery.
Unlike with my friend’s Mom, maybe someone won’t be reluctant to hurt (the author) Alex Hall’s feelings and will tell him that he’s no modeling Modigliani.