Guest Post by Willis Eschenbach
In a recent issue of Science magazine there was a “Perspective” article entitled “Projecting regional change” (paywalled here). This is the opening:
Techniques to downscale global climate model (GCM) output and produce high-resolution climate change projections have emerged over the past two decades. GCM projections of future climate change, with typical resolutions of about 100 km, are now routinely downscaled to resolutions as high as hundreds of meters. Pressure to use these techniques to produce policy-relevant information is enormous. To prevent bad decisions, the climate science community must identify downscaling’s strengths and limitations and develop best practices. A starting point for this discussion is to acknowledge that downscaled climate signals arising from warming are more credible than those arising from circulation changes.
The concept behind downscaling is to take a coarsely resolved climate field and determine what the finer-scale structures in that field ought to be. In dynamical downscaling, GCM data are fed directly to regional models. Apart from their finer grids and regional domain, these models are similar to GCMs in that they solve Earth system equations directly with numerical techniques. Downscaling techniques also include statistical downscaling, in which empirical relationships are established between the GCM grid scale and finer scales of interest using some training data set. The relationships are then used to derive finer-scale fields from the GCM data.
So generally, “downscaling” is the process of using the output of a global-scale computer climate model as the input to another regional-scale computer model … can’t say that’s a good start, but that’s how they do it. Here’s the graph that accompanies the article:
In that article, the author talks about various issues that affect downscaling, and then starts out a new paragraph as follows (emphasis mine):
DOES IT MATTER? The appropriate test of downscaling’s relevance is not whether it alters paradigms of global climate science, but whether …
Whether what? My question for you, dear readers, is just what is the appropriate test of the relevance of any given downscaling of a climate model?
Bear in mind that as far as I know, there are no studies showing that downscaling actually works. And the author of the article acknowledges this, saying:
GARBAGE IN, GARBAGE OUT. Climate scientists doubt the quality of downscaled data because they are all too familiar with GCM biases, especially at regional scales. These biases may be substantial enough to nullify the credibility of downscaled data. For example, biases in certain features of atmospheric circulation are common in GCMs (4) and can be especially glaring at the regional scale.
So … what’s your guess as to what the author thinks is “the appropriate test” of downscaling?
Being a practical man and an aficionado of observational data, me, I’d say that on my planet the appropriate test of downscaling is to compare it to the actual observations, d’oh. I mean, how else would one test a model other than by comparing it to reality?
But noooo … by the time we get to regional downscaling, we’re not on this Earth anymore. Instead, we’re deep into the bowels of ModelEarth. The study is of the ModelLakeEffectSnow around ModelLakeErie.
And as a result, here’s the actual quote from the article, the method that the author thinks is the proper test of the regional downscaling (emphasis mine):
The appropriate test of downscaling’s relevance is not whether it alters paradigms of global climate science, but whether it improves understanding of climate change in the region where it is applied.
You don’t check it against actual observations, you don’t look to see whether it is realistic … instead, you squint at it from across the room and you make a declaration as to whether it “improves understanding of climate change”???
Somewhere, Charles Lamb is weeping …
w.
PS—As is my custom, I ask that if you disagree with someone, QUOTE THE EXACT WORDS YOU DISAGREE WITH. I’m serious about this. Having threaded replies is not enough. Often people (including myself) post on the wrong thread. In other cases the thread has half a dozen comments and we don’t know which one is the subject. So please quote just what it is that you object to, so everyone can understand your objection.
Challenge: I have ten people on a scale totaling 2000 lbs, with a margin of error of ±5%.
Question: How much will each of them individually gain/lose over the holidays next year?
This is what downsampling claims to be able to do.
The dead one will lose all the weight ;>) Well, you did say a year. Hope this dies out soon :>(
Are you suggesting I might unethically “hide the decline” or “spike the hike” by reducing the number of reporting elements?
Not a claim, this is what downsampling will do.
It will burn a bunch of computer cycles and spit out the gain/loss of each individual to a hundredth of a pound. Since there were never any individual weights to begin with, they will have a 97% confidence level in the accuracy of their holiday eating projections.
Well, they said “downscaling” which if I have it figured correctly, is “Upsampling”.
But your “scale” example is correct. It’s “conservation of information” – numbers in this case. Engineers are very very clever, except when it comes to violating fundamental laws of math and physics!
Integrate around all space avoiding all singularities.
The GCMs are a singularity in and of themselves.
Not real stable in my humble opinion.
Here is my question. As I understand it, the AOGCM ensemble is not expected to match reality because of the much-discussed initialization issues (i.e., they would not be expected to replicate ocean cycles, etc.), and this is one reason why perhaps they are not tracking the ‘pause’ very well. If this is the case, how would down-scaling be policy relevant? Or if it is (as I think Mosher suggests) just to find out whether region A ‘generally warms’ and then how that would affect sub-region A-1, etc., that seems the only viable way to use the down-scaled models (hypothesis testing based on a set of possible boundary conditions for the sub-regional domain). I would think a thorough review of ‘real world’ data would be better for this.
I always thought that the system is just too complex to model and that we should just use observations of what is really happening instead.
What’s wrong with just observing what is really happening? You can actually build models of what actually happens, which is more or less what the weather models do, and they do so rather successfully out to close to 10 days now.
Put extra GHGs into a climate model and it is going to produce warming. Why? Is it some magical property that just emerges from the climate model as if ordained by the God of Physics and Weather? No. The climate modelers coded their model to produce more warming as more GHGs are introduced, simple as that. It is not an emergent property as Mosher and Gavin like to say/pretend. It is written into the code based on their “theory”.
Does CO2 produce weather? Nobody has observed that yet. Cold fronts and warm fronts and water vapour and pressure systems and winds and ground surfaces produce weather. CO2 has never been shown to have any effect on any weather that I am aware of.
Why not see what the real Earth(tm) actually does and one can make future predictions based on observed behaviour. Lake effect snows are actually easy. We have hundreds of years of actual results on which to base future expectations as GHGs rise. I’m sure the data says more of the same because that is what has happened as CO2 has risen.
The answer to your questions is there is no money in it for the perpetrators of this fraud.
Bill, right now they can just barely model modern fighter plane design pretty well w/the most powerful supercomputers. To think climate modellers can accurately model global climate is hubris in the most extreme. Nothing wrong w/working toward that goal, but basing government policies on them now is absurd.
“You don’t check it against actual observations, you don’t look to see whether it is realistic…”
Of course you do, but that is to be implied. Actually the article does address the need to downscale GCMs that recreate observed circulation patterns, while ignoring those that don’t because it will be GIGO. Also stated is that downscaling does provide additional information — the variability that might be expected at smaller spatial scales (and in many cases this variability is derived from variability in observations).
So, how would you propose to compare model projections to future observations? And how do you propose to make future projections based only on observed data?
And how do you propose to make future projections based only on observed data?
The Farmer’s Almanac does a pretty good job based on historic observations.
Barry January 5, 2015 at 5:24 pm
Perhaps you’d do that, Barry, as would I … but I see no evidence that they’ve done anything of the sort.
I’m gonna assume that this is a serious question, although it seems obvious. Two choices. Either:
1) Initialize the models on the first half of the historical dataset, and then compare the output of the models to the second half of the dataset, or,
2) Make the actual predictions, and then wait and see if they come to pass.
Not all that tough …
Best to you,
w.
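For readers who want to see what choice 1) might look like in practice, here is a minimal sketch. It assumes nothing about any particular study: the “downscaling” relationship is just an ordinary least-squares fit from a coarse-grid series to a local station series, with synthetic data standing in for both, and the only point being made is that the skill score comes from the held-out half of the record.

```python
# Minimal sketch of split-sample testing: train a statistical downscaling
# relation on the first half of a historical record, test it on the second.
# All series here are synthetic placeholders; in a real test "coarse" would be
# GCM output over the gridcell and "station" would be local observations.
import numpy as np

rng = np.random.default_rng(0)
n = 1200                                                    # e.g. 100 years of monthly values
coarse = 15 + 0.002 * np.arange(n) + rng.normal(0, 1.0, n)  # coarse-grid temperature
station = 0.8 * coarse + 3.0 + rng.normal(0, 0.5, n)        # local series related to it

half = n // 2
# Fit the downscaling relation (here plain least squares) on the first half only
slope, intercept = np.polyfit(coarse[:half], station[:half], 1)

# Predict the second half and compare against the held-out observations
predicted = slope * coarse[half:] + intercept
rmse = np.sqrt(np.mean((predicted - station[half:]) ** 2))
corr = np.corrcoef(predicted, station[half:])[0, 1]
print(f"out-of-sample RMSE = {rmse:.2f}, correlation = {corr:.2f}")
```

Whatever form the real downscaling takes, the test has the same shape: the numbers it is judged on come from data the relationship never saw.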
The basis for the models is not the real future world, but an imaginary future one. The imaginary future world is the politically established UNFCCC one, with its idea of CAGW.
The assumption here is that the coarse scale GCM is accurate. This is the basis of virtually all climate “science”. Of course 97% of studies support CAGW: CAGW and the GCMs are their cornerstone.
But you can’t generate real detail in the analysis that isn’t in the data. You can generate the appearance of detail, however. This is part of the “computational” truth I rage about. The math is correct but is not representationally valid. But it is good enough to get something published, clearly.
I’m not sure I understand what “downscaling” is supposed to do here. Is it like “enhancing” a low-res digital photo to bring out more detail? Except that the pixels of the low-res photo have been “randomized” to some extent before enhancement (analogous to the “regional bias” of GCMs) and generally “brightened” a bit (analogous to GCM warm bias).
Gary –
Perhaps see my response to Bubba Cow above.
The image processing seems to be a useful analogy.
I don’t think there is any randomization (like dithering?) or brightening – just interpolation.
Thanks, Bernie. That helps.
“Is it like “enhancing” a low-res digital photo to bring out more detail?”
I think they’ve been taking seriously those cop shows where they discover Seurat-like CCTV footage of the villain’s car from 300 yards away.
The license plate occupies 3 random blown-out pixels but through punching a keyboard a couple of times the computer geek makes the license number appear in pristine detail.
Well yes and no. If you have MULTIPLE successive images you can get noise cancellation.
In the early days of TV, you had an antenna and a lot of empty channels (like 9 of 12 – believe it or not). At night, as you turned the knob, you might see a very weak image in the “snow”. A hobby called “TVDX” was to wait for the station logo or test pattern and take a photograph (film). When you got the photos processed, it was usually astounding – the exposure time averaging frame and noise cancelling. Magic.
Some time look up “dithering” and “stochastic resonance”.
Here I am bringing in additional information in the form of multiple images, but that’s what CCTV is. So – possible?
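To put a number on the noise-cancellation point, here is a minimal sketch with a synthetic “test pattern” and made-up noise levels, not tied to any particular CCTV or TVDX setup: averaging N frames of the same static scene knocks the random noise down by roughly the square root of N, which is why the long film exposures worked. It adds no detail that was never captured; it only beats down the noise.

```python
# Sketch: averaging repeated noisy frames of the same static scene.
# The noise standard deviation drops roughly as 1/sqrt(N); no new detail appears.
import numpy as np

rng = np.random.default_rng(1)
truth = rng.uniform(0, 1, size=(64, 64))   # the static "test pattern" being imaged
noise_sigma = 0.5

for n_frames in (1, 16, 256):
    frames = truth + rng.normal(0, noise_sigma, size=(n_frames, 64, 64))
    stacked = frames.mean(axis=0)
    residual = np.std(stacked - truth)
    print(f"{n_frames:4d} frames: residual noise ~ {residual:.3f} "
          f"(theory ~ {noise_sigma / np.sqrt(n_frames):.3f})")
```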
Hello Gary.
I think there is one thing that is not highlighted here about “downscaling” GCMs in principle, something that maybe most are missing.
First let me explain my understanding of a GCM in principle, before coming to the downscaling.
A GCM basically is a model, and like any model it can be adjusted toward better and better functioning and performance.
In the case of a GCM that adjustment requires a long time, and it will be a bit arbitrary until the time needed for that adjustment has passed.
To perform the adjustment, feedback to the GCM is also needed.
Up to now the only feedback available has been the real climate data, via comparison with the accuracy of the GCM projections.
This is not very effective, and it takes a lot of time and arbitrary decisions before the GCM can be considered properly adjusted.
Now the “downscaled” GCMs offer a better option for feedback to the GCMs.
Much faster and better feedback in this case, and therefore a better and quicker way to adjust the GCMs.
A feedback means a correction for the garbage in, garbage out.
It is easier to adjust for the “garbage out” in this case, and through that feedback to adjust and correct the “garbage in”, hopefully getting to the point where the input to the “downscaled” GCMs is not considered “garbage in” anymore.
I know, it is easier said than done, but that is what I think is the best thing about the “downscaled” GCMs: a better and more efficient feedback to the GCMs for their correction and adjustment.
Not sure if this helps with your uncertainty!
cheers
It is taking one of Hansen’s 1200 km square pixels, for which a single Temperature is reported, and breaking it up into a 10 x 10 array or 100 x 100 array of smaller pixels, and then guessing a value for each of your new pixels.
Well I thought Hansen says they are still all the same value. You mean they might not be the same Temperature, everywhere in his 1200 km square pixel ??
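To make the “guessing a value for each of your new pixels” point concrete, here is a minimal sketch of the crudest possible version: plain bilinear interpolation of a 3 x 3 coarse grid onto a grid ten times finer. The numbers are invented and this is not the method of any actual downscaling paper; it only shows that the “new” fine values are smooth blends of the coarse ones, so any real sub-grid structure has to come from somewhere else.

```python
# Sketch: subdividing a coarse temperature grid onto a 10x finer grid by
# bilinear interpolation. No information is added; the new pixels are just
# smooth blends of the coarse gridcell values.
import numpy as np
from scipy.ndimage import zoom

coarse = np.array([[14.0, 15.2, 13.8],
                   [15.5, 16.1, 14.9],
                   [13.9, 14.7, 13.2]])   # made-up 3x3 coarse gridcell averages

fine = zoom(coarse, 10, order=1)          # order=1 -> bilinear, 10x finer grid
print(coarse.shape, "->", fine.shape)     # (3, 3) -> (30, 30)
print(f"coarse mean {coarse.mean():.2f}, fine mean {fine.mean():.2f}")
```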
Odd that lake-effect-snow is used for the test. Why not just run BUFKIT
a few hundred thousand times and sort the results into bins based on some starting set of data?
http://www.erh.noaa.gov/btv/research/Tardy-ta2000-05.pdf
I have an idea: use the downscaled model, but instead of downscaling a region, do the whole globe. It might be interesting to see if the model can outrun reality.
The point of downscaling is because you cannot run a fine model on the whole earth so you do a small section in fine detail. This type of modeling is often done on large structures to analyze small complex details at fine scale. Of course in mechanical or thermal models it is possible to make sure the global model works reasonably well before relying on it to set the boundary conditions for the small scale models.
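Here is a minimal sketch of that nested-model structure, using a toy one-dimensional diffusion equation rather than anything resembling a real GCM or regional model: a coarse-grid run over the whole domain supplies the boundary values for a finer grid covering a sub-domain. The grid sizes, time step, and diffusivity are arbitrary; only the coarse-feeds-fine structure is the point.

```python
# Toy "dynamical downscaling": a coarse 1-D diffusion model provides the
# boundary conditions for a finer model run over a sub-domain.
# All numbers (grids, diffusivity, initial profile) are arbitrary.
import numpy as np

def step(u, dx, dt, kappa):
    """One explicit diffusion step; interior points only, boundaries untouched."""
    un = u.copy()
    un[1:-1] = u[1:-1] + kappa * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    return un

kappa, dt, nsteps = 1.0e-3, 0.01, 500

# Coarse model over the whole domain [0, 1]
xc = np.linspace(0.0, 1.0, 21)            # 21 coarse points
uc = np.exp(-((xc - 0.3) / 0.1) ** 2)     # arbitrary initial temperature bump

# Fine model over the sub-domain [0.35, 0.65], ten times finer spacing
xf = np.linspace(0.35, 0.65, 61)
uf = np.interp(xf, xc, uc)                # initialize the fine grid from the coarse field

for _ in range(nsteps):
    uc = step(uc, xc[1] - xc[0], dt, kappa)
    # The fine model's boundary values come from the coarse solution each step:
    uf[0] = np.interp(xf[0], xc, uc)
    uf[-1] = np.interp(xf[-1], xc, uc)
    uf = step(uf, xf[1] - xf[0], dt, kappa)

print("coarse value near x = 0.5:", round(float(np.interp(0.5, xc, uc)), 4))
print("fine   value near x = 0.5:", round(float(np.interp(0.5, xf, uf)), 4))
```

In mechanical or thermal FEA the same pattern goes by the name submodeling: verify the global model first, then let it drive the cut-boundary conditions of the refined local mesh, which is exactly the caveat in the comment above.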
Well that is done quite a lot in finite element analysis. Of course, in some of these structures, there are regions which have only slow changes i.e. they are band limited to some low frequency, while other regions have faster changes, and hence have a higher frequency band limit.
So long as the sampling process is proper for each of those regions, given its signal bandwidth, then the results are valid, and it is not necessary to sample the entire planet surface at the highest anywhere signal frequency.
A good example would be clear cloudless sky, changing to solid cloud cover, and passing through regions of scattered cloud and then regions of broken cloud, before finally “socking in”.
g
I once concocted a discrete resistor array equivalent to a uniform resistivity film. This allowed me to model a stack of layers of a semiconductor process, as a stack of resistor matrices. I could then connect matching nodes from layer to layer with discrete capacitors to model the interlayer capacitance.
My resistor matrix could be replicated at any scale, by simply subdividing each square into four (or more) smaller squares, each one being the same resistor network; or even different resistors if my semiconductor layer resistivity varied from one region to another.
Quite often, such stacks had square symmetry, which actually gave them an eight fold symmetry of 45 degree triangles, so the original square could be triple folded to reduce the number of Rs and Cs substantially.
I could always compare the final simulations, as I reduced the matrix cell size, until no significant change resulted from further subdivision.
That saved me a whole lot of complex three dimensional EM field integrations, which would have been tera-messy.
Bob Tisdale has shown that the potential exists for reasonably accurate regional weather forecasts through the process of gathering ENSO/La Niña/La Nada data. No need for downscaling from GCMs.
“Pressure to use (downscale) techniques to produce policy-relevant information is enormous…”
Interesting, but not surprising. ‘Pressure’ from whom – management, specific governments, UN…others??
I dunno’ but for some reason “Does it matter?” sounds suspiciously like “What difference does it make?”
That’s really all it is: All politics, all the time.
I seem to remember that Pielke Snr had this sort of thing as his biggest problem with GCMs. He argued that regional and sub-regional effects were far more important than the coarse projections from GCMs. Attempting to interpolate by downscaling without the fine grid effects Pielke insisted on seems to me to be an exercise in futility.
Lipstick on a pig… and a not-so-good looking one at that….
Downscaling reminds me of TV crime dramas where they have low resolution grainy surveillance video and are able to zoom in and suddenly the grainy video becomes high resolution and crystal clear and can pick out the name tags of a suspect running in the dark. This is farcical in the crime dramas and as farcical in computer models.
You cannot downscale, it is impossible since the detail is just not there and so it needs to be made up. Which is ok with the modelers since the purpose is to see the effects of climate change, kind of like playing those world-building games or war strategy games to try out various theories. Funny that those games, like climate models, use dice or other random generators. To suggest though that a game is predictive for a particular region is certifiable schizophrenia; disturbing evidence of lack of ability to tell the difference between what is real and not real. Hey, let’s throw the dice and see how much snow Michigan is going to get in 2017.
At this rate I imagine they will soon be upscaling the models to the entire universe in order to finally complete Einstein’s unfinished theory of everything.
You beat me to it. I was going to make exactly the same comparison except I was thinking of them taking license plates with 20 huge pixels and turning it into something readable. Anyone with any image processing experience knows this is impossible.
Roger Pielke Sr. has always been scathing (in his polite and formal way) about the uselessness of downscaling in climate models as they stand now. One typical post:
http://pielkeclimatesci.wordpress.com/2011/06/15/the-failure-of-dynamic-downscaling-as-adding-value-to-multi-decadal-regional-climate-prediction/
There are many others that are easy to find.
why do they insist on calling model output ‘data’? It isn’t ‘data’
Because it sounds “sciencey”
Well, the English word ‘data’ is the plural of the Latin word ‘datum’.
‘Datum’ is a noun derived (unchanged) from the supine (past participle) of the verb ‘dare’, which means to give.
Thus, a datum is something given and ‘data’ are some things given. There is nothing inherently Truth-filled in the word, it simply refers to what you feed your beast and that is often what some other beast fed you.
Could be a line of bull, could be god strewth.
So I’m correct to call my birthday presents “data” because they were given to me?
That’s the craziest misuse of etymology I’ve heard in a long time. I hope you just forgot the [sarc] and [/sarc] tags.
In science, “data” generally means observations. The use of it to describe the output of climate models is both a cruel joke and a huge deception. A more useful definition from the web is:
“facts and statistics collected together for reference or analysis.”
Not birthday presents. Not “something given”. Not “what you feed your beast”. Not computer output, which can be total fantasy and totally wrong.
Facts.
w.
A definition from the Oxford Dictionary; “The quantities, characters, or symbols on which operations are performed by a computer, which may be stored and transmitted in the form of electrical signals and recorded on magnetic, optical, or mechanical recording media.”
You say “In science, “data” generally means observations.”
The adverb “generally” doesn’t look like a really solid, confident, sciency thing to me.
Downscaling CAN be useful if there is a good model that describes the phenomenon in question. One example is the sunspot cycle, where knowledge of the maximum [smoothed] sunspot number in a given cycle [either measured or predicted] pretty much allows reconstruction of the details [e.g. each yearly value] of the cycle. Another example is the diurnal variation of the geomagnetic field which is usually so regular that knowing the sunspot number allows a fair reconstruction of the details of the variation in both time and space [location]. One can think of many other examples where a phenomenon [e.g. temperature] can be reconstructed fairly well from the location only [it is cold in the winter and warm in the summer], etc.
I agree. It can look very much like “multi-resolution” analysis such as the “perfect reconstruction filters” in digital signal processing. But you do have to know a great deal about your system. Misalign the channels and you are in trouble.
Here is an [real life] example of a case where downscaling works and is useful. First the Figure
http://www.leif.org/research/Downscaling.png
Then the story:
The Figure shows the average diurnal variation of the geomagnetic Declination at Pawlovsk [near St. Petersburg], Russia, for the year 1860 [pink lower curve], constructed from real observations every hour. Now in some years [say 1861], the Declination was only observed at 8 am, 2 pm, and 10 pm [8, 14, and 22 hours], but we need to know what it was at the much finer resolution of 1 hour.

The observations are marked with a blue circle and have a corresponding observation at those same hours in 1860 marked with pink circles. Plotting the blue-circled values against the pink-circled values yields the regression equation Blue = 36.523 + 0.7323 Pink. The offset, 36.523, comes about because there is a secular change from year to year [caused by flows in the Earth’s core far below us]. The coefficient, 0.7323, is smaller than unity because the (E)UV flux from the Sun that controls the electrical currents at 105 km altitude depends on solar activity, and the Sunspot Number [SSN] in 1861 was smaller [77.2] than that in 1860 [95.9]. All this is well-understood physics [see e.g. http://www.leif.org/research/Reconstruction-Solar-EUV-Flux.pdf ] so we can be reasonably confident that the empirical regression equation [BTW we can also calculate it directly from the physics] also holds at all other times, and use it to downscale [i.e. go to much finer time resolution] the three observations at 8, 14, and 22 hours.

It just happens that we actually do have hourly data for 1861, so we can compare the real data [green curve with diamond symbols] with the downscaled version. As you can see, there is good agreement. The reason for this is that we know the physics and how the system reacts. BTW, you can, perhaps, also see that we can use the amplitude of the diurnal variation to calibrate the sunspot number or at least to check if we have got the right number.
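For anyone who wants to see the bare bones of that procedure, here is a minimal sketch with invented numbers rather than the actual Pawlovsk data: regress the three anchor-hour observations of the sparse year against the same hours of the fully observed template year, then apply the fitted relation to the template’s full hourly curve to reconstruct the sparse year at hourly resolution. Only the structure follows Leif’s description; every value below is a placeholder.

```python
# Sketch of regression-based downscaling in time, following the structure of
# the Pawlovsk example: a fully sampled "template" year plus three anchor
# hours in the target year. All numbers below are invented placeholders.
import numpy as np

hours = np.arange(24)
# Fully observed diurnal curve for the template year (arbitrary shape):
template = 50.0 + 5.0 * np.sin(2 * np.pi * (hours - 6) / 24)

anchor_hours = [8, 14, 22]
# Pretend these are the only observations we have for the target year:
target_anchor = np.array([48.0, 55.5, 46.5])

# Regress the target-year anchors against the template values at the same hours
slope, intercept = np.polyfit(template[anchor_hours], target_anchor, 1)

# "Downscale": apply the fitted relation to the whole template curve
target_hourly = intercept + slope * template
print(f"fitted relation: target = {intercept:.3f} + {slope:.4f} * template")
print("reconstructed hourly values:", np.round(target_hourly, 1))
```

As Leif says, what makes this defensible in his case is the known physics linking the two years; the regression is just a shortcut for something that could also be calculated directly.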
Thanks for that interesting example, Leif. However, as I said elsewhere, the fact that we can and do use downscaling in other fields means nothing about what they are attempting with the climate models.
w.
That is trivially true. However, there is a connection. My example relies on knowledge of the physics of the phenomenon. Now, the climate modelers may be presumed to believe that they also know the physics of their system, so the situation is comparable. It is only when we disagree with their assessment that the situation becomes different.
lsvalgaard commented on
So true!
But I also want to make the point that they are doing the same downscaling to generate GAT series, as one of the steps is to normalize temps to the altitude, lat and lon, and then use this to infill and create a uniform normalized field for the entire planet, just like the example Leif posted.
And that normalization is both necessary and probably mostly correct when based on actual data.
lsvalgaard commented
I agree; it’s just that in the case of surface temps, I think I calculated, with a 100 mile circle around each of the 20-some thousand stations in the GSoD data set, that it was a couple % of the planet’s surface. And weather isn’t linear spatially.
I’m reasonably sure that the people doing the normalization are doing their very best to make it as good as possible. Scientists are generally not morons.
lsvalgaard commented on
Again, in general I agree with this, but I will point out that to do so they must really understand what they are modeling (because at this point it really is a model), and if their results are good enough to in this case base policy on. I don’t think we can make proclamations on 100 years of surface temps at the required detail, let alone 100 years of SST’s.
And from my looking at the data, a lot of what’s happened the last 40 years that ends up as part of the temp record is not a global effect, but a regional one in minimum temp. Now, I accept that that doesn’t mean there isn’t something else in the background, and in fact you can see it in the derivative of temps changing over the years, but it also looks like that’s reversing direction too, so I can’t tell if it’s a sign of CO2, or Ocean cycles/clouds/some longer period cycle in whatever or something else entirely.
As with all science, observations [normalized or not] must always be examined critically and not just be believed.
lsvalgaard commented on
Which is why, even if I don’t like it, I accept what you say as true to the best of your knowledge (such as the (lack of) effect of a moving barycenter on the Sun’s output).
Well Leif, while I tend to agree with your assertion that your process got you essentially correct values, I don’t think this is comparable to what those guys did to their “weather / climate” maps.
You clearly have a band limited signal, and your interpolation process, did not introduce any higher frequency components. There are a good number of locations in their “downscaled” map, where they clearly have introduced much higher frequency (spatial) values.
And they are not dealing with a system that is likely to replicate itself at times a year apart, such as yours apparently is. I believe you when you say you have a physical model of your system.
As I said elsewhere, even if you have no more than one sample each half cycle of the highest frequency in a band limited signal, rigorous mathematics says that you can EXACTLY recover the COMPLETE continuous function from just those samples.
Now the actual implementation of reconstruction from sampled data, may be quite difficult to achieve in practice. You have to replace the instantaneous point samples, with a specific impulse of a prescribed shape. The fact that this is done routinely to time and/or frequency multiplex dozens or even thousands of signals, and transmit them together, with perfect unscrambling at the other end, is evidence, that the recovery can be done well enough to enable all our message chattering to get to the right places.
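As a minimal sketch of that reconstruction claim, here is a synthetic band-limited signal sampled a bit above its Nyquist rate and rebuilt by Whittaker-Shannon (sinc) interpolation; the signal and the rates are made up, and the small residual error comes only from the finite length of the record, since the ideal result assumes infinitely many samples.

```python
# Sketch: Whittaker-Shannon (sinc) reconstruction of a band-limited signal
# from samples taken above the Nyquist rate. Synthetic signal, made-up rates.
import numpy as np

f_max = 3.0                         # highest frequency present in the signal (Hz)
fs = 8.0                            # sampling rate, above 2 * f_max

def signal(t):
    return np.sin(2 * np.pi * 1.0 * t) + 0.5 * np.cos(2 * np.pi * f_max * t)

t_samples = np.arange(0, 16, 1 / fs)          # sample instants over a 16 s record
x_samples = signal(t_samples)

t_fine = np.linspace(6.0, 10.0, 400)          # evaluate well inside the record
# Each sample contributes a sinc impulse; summing them rebuilds the signal
recon = np.array([np.sum(x_samples * np.sinc(fs * (t - t_samples))) for t in t_fine])

err = np.max(np.abs(recon - signal(t_fine)))
print(f"max reconstruction error over the interior window: {err:.4f}")
```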
Your system looks like it has similar characteristics to the eyeball’s dither scanning phenomenon, in that you know something about how one observation morphs over time into a similar but slightly different picture.
The reason our optical rodents can resolve fine detail, is that we do get continuous analog values for each pixel on our coarse grid, while it is moving smoothly over the more highly detailed terrain.
G
My point is that there probably are spatially reproducible structures. The temperature [or any other weather/climate variable] is often determined by local conditions, such as UHI effects, land-use effects, coast effects [e.g. Buffalo NY], and others, that do not change much from year to year [or have trends that can even be modeled too]. Those effects can be injected into the downscaling procedure and do help to get to finer resolution.
lsvalgaard January 7, 2015 at 7:12 am
Thanks, Leif. While that’s true, there are other differences.
You are using a known relationship between succeeding cycles in a cyclical system to infill between measured values.
They, on the other hand, are using a local model to attempt to get finer spatial values using a more general model which gives them an average value over the space. They are NOT working from measured values, as you are. And they are NOT simply trying to infill at the same frequency as you are.
Now it’s possible to do that in certain special circumstances when the problem is well defined. But in general you can’t assume that the results will have any meaning.
Next, they are using the output of an iterative model as the input for an iterative model. One of the difficulties with iterative models of any complexity is that it can be very difficult to know not only whether what they are doing is correct physics, but how they are getting the answers at all.
As such, there is a qualitative difference between that and the models which you are discussing. We can test a model like F = M x A, and we are clear about how it gets to the output from the input. But initializing a climate model and letting it run doesn’t offer us the same information. A typical climate model iterates on a half-hour basis, and has say a 300×300 km grid, perhaps 15 atmospheric levels, and 3 oceanic levels.
So a year’s worth of iterations is about four billion gridcell calculations … and an error in any one of them may well carry forwards.
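As a rough check of that count, using only the figures just quoted plus the Earth’s surface area of about 5.1 x 10^8 km², the arithmetic comes out in the low billions of gridcell updates per simulated year; counting the several prognostic variables carried in each cell pushes it higher still, so the order of magnitude stands.

```python
# Back-of-envelope count of gridcell updates per model year, using only the
# figures quoted above (300 km x 300 km cells, 15 atmospheric + 3 oceanic
# levels, half-hour steps) plus Earth's surface area of ~5.1e8 km^2.
earth_area_km2 = 5.1e8
columns = earth_area_km2 / (300 * 300)        # ~5,700 horizontal gridcells
cells = columns * (15 + 3)                    # ~100,000 gridcells in 3-D
steps_per_year = 2 * 24 * 365                 # 17,520 half-hour steps
print(f"~{cells * steps_per_year:.1e} gridcell updates per simulated year")
```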
In addition, the models are fitted to historical data in order to produce a given set of results, so we can’t test them that way … and they have dozens of tunable parameters, so they are prime examples of von Neumann elephants.
Finally, your example is of a non-chaotic, predictable situation … whereas we don’t even know if the climate is predictable in theory, much less in practice.
As a result of all of that, it’s not even possible to say whether “they also know the physics of their system”. They obviously believe that they do, but with fitted, tuned, hugely complex iterative models, no one knows. We don’t even know if their solutions to the fluid equations converge …
Best regards,
w.
So it really simply comes down to whether one believes one knows what one is doing. And there it must rest. Questions about beliefs cannot be resolved.
lsvalgaard January 7, 2015 at 10:10 am
Don’t they do that all the time in Las Vegas? 🙂
Most leave that place rather fleeced…
Good points. Speaking of sunspot reconstructions… how is your revised monthly GSN coming along?
Basically unchanged. We shall have a meeting in [of all places] Sunspot NM [ http://en.wikipedia.org/wiki/Sunspot,_New_Mexico ] during the last week of January to iron out minor details. The new numbers will be presented at a press conference in Brussels, Belgium later in the spring/early summer and submitted to the IAU [ http://www.iau.org/ ] in early August for possible adoption as an international standard.
Thanks for the update. I’ve used your yearly data, and also used SIDC monthly data, and am looking forward to using the monthly rGSNs. If the rGSN becomes an international standard, will it replace the SIDC or be separate? May you enjoy a spot in the sun in Sunspot counting sunspots!
The GSN will become obsolete and not published as a separate series, but will be incorporated with the regular SSN. There will thus be only ONE SSN series [and it will be called the Wolf number]. We will maintain a separate Group Number [GN] as a means to keep track of the number of groups which is a proxy for somewhat different physics as the ratio SSN/GN is not constant as was earlier surmised. We will discourage using the GN as a proxy for solar activity [as it is not].
“whether it improves understanding of climate change in the region where it is applied.”?
This must have come out of the “Humor” section of the paper, it’s just a joke.
Oh, wait, there’s no “Humor” section in this paper.
Thanks, Willis. I had to laugh, then cry.
Perfect example of why atmospheric supercomputer models fail, spectacularly. If you go to NASA ( fedscoop.com/nasa-supercomputer-carbon-dioxide ) you will find a supercomputer model of atmospheric CO2 global dynamics circa 2006.
When I first watched the 2006 gif I was struck by the absence of CO2 density in the Southern Hemisphere. Move forward to the actual measurements from NASA’s Orbiting Carbon Observatory-2 mission launched in July of this year, and see what is actually happening. Supercomputer model selection bias in action.
As insinuated in an earlier comment, current models are like really pathetic cameras, the kind you might have made in science class as a kid with a box and a pin hole with your finger as the shutter. You get this really lousy picture where gross shapes can be discerned but little else.
You can take that image and put it in a modern photo editor and pixelate the dickens out of it, but all you are doing is subdividing lousy larger pixels…
Well interpolation between large pixels can produce useful results.
For example if you are currently holding an optical mouse in your hand (LED or LASER), the chances are that the digital camera in your mouse only has between 15 x 15 and 22 x 22 pixels. Well if it is one of those top secret fast gaming mice, it could have as many as 32 x 32 pixels. We are talking maybe 50 micron or 60 micron square pixels; veritable cow pastures.
Now it is of course taking at least 1500 frames per second, and maybe as high as 10,000 frames per second for that killer gaming mouse.
The camera lens that goes with that camera, started out able to resolve maybe 100 line pairs per mm, or about 5 microns spot size. Well the lens includes a built in optical low pass filter, that kills that resolution down to maybe a 100 micron spot size; but very uniform over the entire one by one mm field of view of the mouse. (it’s a 1:1 relay close up lens).
Because the lens point spread function is accurately devised (it’s like a laser beam waist) the pixel signals are able to track the large spot with a Gaussian profile, as it tracks over the big pixels (which are smaller than the spot).
As a result, and because absolute analog intensity values are stored for each pixel, the mouse is able to resolve motions much smaller than the pixel, so that 300-400 dot per inch mouse motion resolution is maintained. Remember it is scanning.
All of that optical wizardry and signal processing magic, was done to save you from the monthly clean out of the hair and lint inside your $2 ball mouse. Yes it is all patented.
But nothing that is happening is creating out of (Nyquist) band information.
Now without the (patented) optical low pass filter, the whole thing would descend to garbage, and back tracking cursors, because of the original prototype high resolving power of the camera lens (it’s only about a 1.5 mm focal length lens; and aspheric, both surfaces.) And wildly aspheric in the LP filter region. The first one was actually a form of Tchebychev filter. Latest ones are segmented cubic profile filters.
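A minimal sketch of that sub-pixel trick, with an idealized Gaussian spot and a coarse pixel grid and nothing specific to any actual mouse sensor: because each big pixel records an analog intensity, the intensity-weighted centroid of the blurred spot can be located to a small fraction of a pixel, which is how coarse pixels end up delivering fine motion resolution.

```python
# Sketch: locating a blurred spot to sub-pixel accuracy on a coarse detector.
# A Gaussian spot wider than a pixel (like the low-pass-filtered mouse image)
# is sampled on big pixels; the intensity-weighted centroid recovers the spot
# position to a small fraction of a pixel. Idealized numbers throughout.
import numpy as np

npix = 16                    # 16 x 16 "big" pixels
sigma = 1.8                  # spot width in pixel units (spot bigger than a pixel)

def imaged_spot(x0, y0):
    """Gaussian spot centered at (x0, y0), sampled at the coarse pixel centers."""
    y, x = np.mgrid[0:npix, 0:npix] + 0.5
    return np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))

def centroid(img):
    y, x = np.mgrid[0:npix, 0:npix] + 0.5
    total = img.sum()
    return (x * img).sum() / total, (y * img).sum() / total

for true_x in (7.50, 7.53, 7.57, 7.62):      # shifts far smaller than one pixel
    cx, _ = centroid(imaged_spot(true_x, 8.0))
    print(f"true x = {true_x:.2f}   estimated x = {cx:.3f}")
```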
“GCM projections of future climate change, with typical resolutions of about 100 km, are now routinely downscaled to resolutions as high as hundreds of meters.”
Who gives these folks the idea that their GCMs are working out now? After making faulty GCMs that have run hot for years, do they really think reducing the run area by 1000X is going to be an improvement?
“Pressure to use these techniques to produce policy-relevant information is enormous. To prevent bad decisions, the climate science community must identify downscaling’s strengths and limitations and develop best practices.”
First they must identify the limitations and bad decisions that went into current GCMs.
“A starting point for this discussion is to acknowledge that downscaled climate signals arising from warming are more credible than those arising from circulation changes.”
A starting point for this discussion is to acknowledge that the GCMs are not at all credible.
It seems to me that to upscale would make more sense. First try to make an extremely accurate model of local weather over a very short period of time. Say something like this: it is now 65 degrees and 74% humidity on my porch; I predict, based on my model, that one minute from now it will be 65 degrees and 74% humidity on my porch. If over time your model shows skill, then expand it in space and time; if it still shows skill, expand it further. Eventually you might work it up to a global model of the climate in 100 years, but before it gets there it would have to show the ability to reasonably predict regional weather over at least a month. Working from future global climate to future local weather seems like working backwards to me.