Solar Notch-Delay Model Released

Readers may recall the contentious discussions that occurred on this thread a couple of weeks back. Both Willis Eschenbach and Dr. Leif Svalgaard were quite combative over the fact that the model data had not been released. But that aside, there is good news.

David Archibald writes in to tell us that the model has been released and that we can examine it. Links to the details follow.

While this is a very welcome update, from my viewpoint the timing of this could not be worse, given that a number of people including myself are in the middle of the ICCC9 conference in Las Vegas.

I have not looked at this model, but I’m passing it along for readers to examine themselves. Perhaps I and others will be able to get to it in a few days, but for now I’m passing it along without comment.

Archibald writes:

There is plenty to chew on. Being able to forecast turns in climate a decade in advance will have great commercial utility. To reiterate, the model is predicting a large drop in temperature from right about now:

[Graph: the model's forecast showing a large drop in temperature]

David Evans has made his climate model available for download here.

The home for all things pertaining to the model is: http://sciencespeak.com/climate-nd-solar.html

UPDATE2:

For fairness and to promote a fuller understanding, here are some replies from Joanne Nova

http://joannenova.com.au/2014/07/the-solar-model-finds-a-big-fall-in-tsi-data-that-few-seem-to-know-about/

http://joannenova.com.au/2014/07/more-strange-adventures-in-tsi-data-the-miracle-of-900-fabricated-fraudulent-days/

633 Comments
crosspatch
July 8, 2014 8:50 pm

Even if the forecast turns out to be in the ballpark, that does not prove anything at all

Taken in context with how it has tracked previous climate, if it shows skill in forecasting what is yet to come, then it would be good enough to use, even though one might claim it has not "proved" anything (which is technically true, since Dr. Evans does not attempt to attribute cause; he only documents behavior).
If I had a box with a crank and a wheel, and I cranked 10 turns and the wheel moved 5, then cranked 30 turns and the wheel moved 15, I would have a pretty good idea that whatever is in that box provides a 2:1 turns ratio. I can't prove it with certainty because I haven't opened the box, but I can be fairly confident that if I now turn the crank 20 times, I'll get 10 turns of the wheel.
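The crank-and-wheel analogy amounts to estimating a hidden ratio from observed input/output pairs and then extrapolating. A minimal sketch (the numbers are those of the analogy, nothing more):

```python
# Estimate the hidden gear ratio from observed (crank, wheel) pairs
# using a zero-intercept least-squares fit, then predict a new case.
crank = [10, 30]   # turns of the crank (observed)
wheel = [5, 15]    # turns of the wheel (observed)

# slope = sum(x*y) / sum(x*x) for a fit through the origin
ratio = sum(c * w for c, w in zip(crank, wheel)) / sum(c * c for c in crank)
print(ratio)        # 0.5: the wheel turns half as fast as the crank

# Prediction for 20 crank turns
print(ratio * 20)   # 10.0
```

The point of the analogy survives the sketch: the fitted ratio says nothing about what is inside the box, only how it has behaved so far.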
If climate behaves as expected, it would indicate something a little more than coincidence, again, when the forecast is taken in context with the hindcast. I'm not understanding the vitriol from some corners.
And “he can be right for the wrong reason” doesn’t really apply because he isn’t really attributing a “reason”. He is noting the correlation between events and that there is about a 10-ish year lag between them. The “why”, or what he calls “X” is left up to future research.

Reply to  crosspatch
July 8, 2014 9:08 pm

crosspatch:
To assess the performance of a model by the single measure of skill is to neglect the falsifiability of this model's claims, for skillfulness is independent of falsifiability.

gary gulrud
July 8, 2014 8:54 pm

Pamela Gray says:
July 8, 2014 at 8:38 pm
Ms. Gray, your response which I reprised was to my implication that I had little confidence in a figger of 0.1% variance(a statistical measure) over the secular Solar cycle in TSI when the UV component is known to vary 100% between cycles.
Perhaps you can see now that your response in no way pertains.
BTW, of the 40% of incident solar radiation comprised by the IR spectrum, only 1% of that fraction reaches the ground. Since the atmosphere, having the emissivity of a low-pressure gas, does not heat the ground by re-radiation, you would do well to re-jigger your conceptual universe, such as it is.

Pamela Gray
July 8, 2014 8:55 pm

I think what may have confused you Gary is a straight across measure of energy by wavelength. Of course excitation increases in shorter wavelength. Duh. But we are talking solar insolation and the amount of UV that is a part of total solar radiance. The Sun is not a UV lamp. The Sun’s radiance spectrum is known. UV is a small part of its energy output. Very small part. Why is that even being argued? Do you not know what the solar spectrum is made of? What it looks like? Is there something wrong with the analyzers that measure the Solar spectrum? Is more UV hiding somewhere? Is it under the George Bush table?
Any variation in UV at the surface due to solar variation would produce a tiny, tiny % change in energy available to heat the oceans, and would NOT BE detectable in the noisy ocean heat data series. Earth's own messy environment totally buries solar-sourced UV variation. Period. Case closed. We are talking many decimal places here.
Gary, you can’t be serious!

Rud Istvan
July 8, 2014 8:56 pm

Mosher, you dissemble.
Evans's model is not designed to be a statistical expectations one like BEST. Asking for those results changes the frame of reference unfairly. Apples to oranges. He did not create your sort of model, and it does not produce your sort of result. Get over it. It is what it is, not what you might wish it to be. Whining about the difference is unseemly.
BTW, what sort of falsifiable forecast does BEST make? Were I you, I would not answer such an unfair question. BEST was designed to produce another temperature history, not a forecast. But this question, which you should not answer, uses your own rhetorical technique to illustrate why your complaint about Evans above was simply off base.

July 8, 2014 8:59 pm

Anthony’s call for avoidance of comments on personalities is an excellent one. If bloggers were to universally heed this call, the quality of the dialog in Anthony’s blog would be much improved. Comments on personalities should be avoided because they are: a) irrelevant b) distracting and c) often unfair.

Ragnaar
July 8, 2014 9:09 pm

Here’s a list of some Scientists who are looking at TSI:
http://iopscience.iop.org/1748-9326/5/3/034008/cites
Part of why Evans may be right is the idea of sensitivity. Small changes causing larger outcomes. As Kyle Swanson said, variability is the flip side of sensitivity. If temperatures were rising and then plateaued, what caused that? We don't know. But if the system is sensitive to small changes, it could happen and we just can't figure out what that small change was. But it would be consistent with high sensitivity to something causing the pause. And the irony is just a bonus: warmists may argue sensitivity is low to TSI, UV, etc.

Reply to  Ragnaar
July 8, 2014 9:30 pm

Ragnaar:
The climate sensitivity aka equilibrium climate sensitivity is the ratio between the change in the spatially and temporally averaged equilibrium temperature and the change in the logarithm of the atmospheric CO2 concentration. As the value of the equilibrium temperature is not observable, when a numerical value is asserted for the climate sensitivity this value is non-falsifiable. Thus, the climate sensitivity does not exist as a scientific concept.
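For concreteness, the definition Terry refers to treats equilibrium warming as the sensitivity times the base-2 logarithm of the CO2 ratio. A quick numeric illustration (the 3 °C figure is just a commonly cited value, used here only for arithmetic, not an endorsement of any particular number):

```python
import math

def warming(ecs, c0, c):
    """Equilibrium warming for a CO2 change, per the standard definition:
    delta-T = ECS * log2(C / C0)."""
    return ecs * math.log2(c / c0)

print(round(warming(3.0, 280, 560), 2))  # 3.0  (one full doubling)
print(round(warming(3.0, 280, 400), 2))  # 1.54 (pre-industrial to ~2014 level)
```

Terry's objection is orthogonal to the arithmetic: the formula is easy to evaluate, but the equilibrium temperature it refers to is never observed.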

gary gulrud
July 8, 2014 9:15 pm

Pamela Gray says:
July 8, 2014 at 8:55 pm
Indeed I am serious: you understand little more of what you are saying than of what I am saying, which is nothing at all.

Mike Jowsey
July 8, 2014 9:18 pm

Pamela Gray: “I’m not about to play with fudge factors to run a model till I know from whence and how the factors came to be the chosen ones. I expect no less of AGW modelers. And guess what folks? In well documented articles, they are there. Should we be giving ourselves a pass and not do that?”
Pamela – have you read the 8 posts preceding the release of the Excel programme? You can start here: http://joannenova.com.au/2014/06/big-news-part-i-historic-development-new-solar-climate-model-coming/ At the bottom are links to the other posts. Stop knee-jerking – it's unbecoming.

gary gulrud
July 8, 2014 9:21 pm

Not that I've done any, but principal component analysis, when employed with variables independent of the target, can be used to establish a weight of contribution to the whole.
I gather that is Evans's endeavor.

Pamela Gray
July 8, 2014 9:51 pm

Gary, you said about yourself, “I had little confidence in a figger [sic] of 0.1% variance(a statistical measure) over the secular Solar cycle in TSI when the UV component is known to vary 100% between cycles.”
Now that is even further out there. Way out there. TSI does indeed vary about 0.1% from peak to trough. Peak to peak variation is rather consistent at plus/minus 0.1% (likely because of the “floor”). Yes UV varies greater than that but it is not reflected in the overall TSI variation because UV is a small part of the total solar irradiance spectrum. Similar in that regard to CO2 as a tiny portion of atmosphere. Yes CO2 has expanded percentage wise more than the atmosphere has but the atmosphere is so much bigger it does not feel any part of those extra CO2 molecules.
The cyclic TSI and UV variation is well understood, modeled, mathematically calculable, and verified with observations.
Logically, if you worry about UV trend/variation playing a large role in climate trend/variation, you should also be worried about CO2.
http://astro.ic.ac.uk/research/solar-irradiance-variation

Chris Marlowe
July 8, 2014 9:56 pm

To me the novel and important point about this model is that it is falsifiable, and will soon either be falsified or not falsified.
If the model is not falsified then we can debate the question why it works. And if it is falsified we can inquire whether or not another empirical model is worth our time. Either way, we learn something.
Notice that I say this is an empirical model. If I understand correctly, no theory supports the model. This is the intention of the modeler.
In my opinion, the empirical approach is a reasonable way to study a system as complex and possibly chaotic as the climate system. We have only a few years to wait, whereas with the 100 or so models built on theory we have to wait 25 or 50 years to see if they work. And then the modelers will just tweak the models a little and ask us to wait another 25 to 50 years.

Reply to  Chris Marlowe
July 8, 2014 10:06 pm

Chris Marlowe:
I don’t believe that the model is falsifiable. How would one falsify it?

July 8, 2014 10:21 pm

Figure 5 – SORCE/TIM Reconstruction shows TSI in 1600-1800 lower by about 0.8 W/m^2 than in 1950-2000. That’s enough to cause a little ice age. Coincidence? I think not.

Editor
July 8, 2014 10:48 pm

Terry Oldberg says:
July 8, 2014 at 8:36 pm

Steven Mosher:
How would one do out of sample testing of Dr. Evans’s model?

Thanks, Terry. I’m not Mosh but as I’ve made the same point, let me explain how to do it.
You divide the data in half. Then you “train” the model on the first half, meaning that you use some kind of weighting process to determine the optimum value of the 11 arbitrary parameters.
Then, using that set of parameters, you see how well it performs on the other half of the data.
Next, you reverse the procedure. You train the model on the second half of the data, and see how well it performs on the first half.
It is generally the very first test done on such a model, because such "tuned" models are known to be generally very good at hindcasting (because they were trained on that very data), but very poor at forecasting the half of the data that the model has never seen.
However, to do so, we have to know how David arrived at the values of the parameters … and that is the part which he has not yet revealed.
Now, Jo told me over at their blog that indeed, they had already performed that exact kind of out-of-sample test on the data. So if they wished to, they could publish the parameters that their training process gave them for the first half of the data, and show us the results using those same parameters on the second half of the data.
However, they have not revealed that either.
As a result, despite the fact that they have released the model, we’re no better off than before. We don’t have the code used to determine the values of the parameters. And we don’t have the results of the out-of-sample tests which they have done. So … we cannot do even the most basic test of the model.
Now, a number of people have said to just wait for three years, and if the temperature drops by a tenth of a degree the model is verified. There are two problems with that.
The first is that the threshold is extremely low, well within the natural swings of the data.
The second is, there is no reason to wait. If we try the model by testing it on half the data and it fails miserably on the other half, we can all go home. Since he claims that the model has passed the out-of-sample tests, then he could simply reveal them, as I requested that he do both here and at Jo’s site.
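The split-half test described above can be sketched generically. Everything below is a placeholder: a straight-line fit stands in for tuning the model's free parameters, and the data are synthetic, since the actual fitting code and parameter values have not been released:

```python
import numpy as np

def fit(x, y):
    """Placeholder 'training': fit a straight line (a stand-in for tuning
    the model's free parameters on one half of the record)."""
    return np.polyfit(x, y, 1)

def rmse(params, x, y):
    """Skill of a trained parameter set on any segment of the data."""
    pred = np.polyval(params, x)
    return float(np.sqrt(np.mean((pred - y) ** 2)))

# Synthetic stand-in for a temperature series: weak trend plus noise
rng = np.random.default_rng(0)
x = np.arange(100, dtype=float)
y = 0.01 * x + rng.normal(0, 0.1, 100)

half = len(x) // 2
p1 = fit(x[:half], y[:half])   # train on the first half
p2 = fit(x[half:], y[half:])   # train on the second half

# Each parameter set is scored only on the data it never saw
print("trained on 1st half, out-of-sample RMSE:", rmse(p1, x[half:], y[half:]))
print("trained on 2nd half, out-of-sample RMSE:", rmse(p2, x[:half], y[:half]))
```

A tuned model that hindcasts well but has no real skill will show out-of-sample errors far larger than its in-sample errors; that comparison is the whole point of the procedure.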
Finally, as Pamela Gray has expressed so eloquently above, the rules apply to all. I and many others have been very strong regarding the necessity of publishing the data and the code when you publish the study. And we have been cheered on by the skeptical community for standing four-square for transparent, honest science. As she said:

Priceless. People complain all the time about AGW paywalled studies and research by media. Or grey papers filled with unvetted sciency sounding proclamations with no research to back it up. We cheer when skeptics, after great effort, finally get the stuff needed from the AGW crowd for reproducibility, validity, and sound science critique of CO2 global warming. Apparently we can’t do that with our own side.

For me to not apply the same exact standards to Jo and David, merely because they are good folks (which they are), or because they are skeptical of AGW (which they are), or because they have put a commendably huge effort into their project (which they have), would be the height of hypocrisy.
I must confess, I am amazed by the resistance to a simple request for code and data, both from David and Jo, as well as from other skeptics. Foolish me, I thought the skeptics stood for solid science. Why should David and Jo be exempt from the normal rules of transparency in science? The rules are simple—no code, no data, no science.
Finally, I am not being hard on David and Jo as many have claimed. I am not asking them to do anything that I, or Steven Mosher, or Steve McIntyre, or any of a host of other skeptical scientists don’t do. We are all transparent regarding code and data. Mosh has written up an entire suite of R commands that will let you go step by step through the process used by Berkeley Earth. Steven McIntyre does the same.
And I publish the data and code for all of my work, to allow anyone to see if I've made any mistakes. Considering that I write a scientific investigation of some aspect of climate every week, I know exactly what is involved in transparency, and it's not hard. Simply publish all of the code as used, and all of the data as used. Yes, sometimes I get bitten by it, when someone looks at what I've done and finds an error … but that is science at its finest. For me, this kind of instant peer review is extremely valuable, because it prevents me from spending weeks or months following a blind trail.
Are David and Jo free to not publish? Of course. They can publish as little of it as they wish, or none at all. But until and unless they publish all of it, it is not science of any kind.
w.

July 8, 2014 10:50 pm

For Gary Gulrud re principal components analysis (PCA)
We can be a little more specific about what PCA does. The computational methodology takes a number of measured variables that may be correlated among each other (multicollinear) and transforms them into a set of orthogonal (uncorrelated) components (PCs).
The number of principal components is equal to the number of the original data variables. Thus the new components (PCs) together represent a multidimensional Euclidean space, with the original variables projected onto the new variables (PCs).
(Engineers do the same with force vectors when designing roof trusses and bridges. They decompose force vectors into vertical and horizontal components.)
Imagine the axes of the original multidimensional data space forming angles, the size of the angles given by the correlation coefficients among the variables. (The higher the correlation between two variables, the smaller the angle.) PCA rotates the original space to get a set of new variables (components) that are at right angles to each other. The original correlated variables are decomposed into vectors that form right angles. The PCs are not correlated with each other, but they are linear combinations of the original variables.
One of the algorithms used for PCA rotates the multidimensional space in a way that maximizes the total variance on the 1st component (PC), then maximizes the remaining variance on the 2nd PC, and so on until all the variance has been accounted for.
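As a toy illustration of the rotation described above (synthetic data and plain NumPy; nothing here is drawn from the Evans model):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
common = rng.normal(size=n)                  # shared driver -> multicollinearity
X = np.column_stack([
    common + 0.1 * rng.normal(size=n),       # two variables that mostly
    common + 0.1 * rng.normal(size=n),       # measure the same thing
    rng.normal(size=n),                      # one independent variable
])

# PCA via eigen-decomposition of the correlation matrix
Xs = (X - X.mean(axis=0)) / X.std(axis=0)    # conventional standardization
corr = np.corrcoef(Xs, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
eigvals = eigvals[::-1]                      # descending eigenvalue order

explained = eigvals / eigvals.sum()
print(explained)   # the 1st PC absorbs the variance shared by columns 0 and 1

# PC scores: uncorrelated linear combinations of the original variables
scores = Xs @ eigvecs[:, ::-1]
print(np.round(np.corrcoef(scores, rowvar=False), 6))
```

With two of the three variables nearly duplicating each other, the first component carries roughly two-thirds of the total variance, and the score correlation matrix comes out as the identity, which is the "right angles" property described above.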
You may recall the critiques of the mathematical procedure that Michael Mann used. One criticism was that Dr Mann did not follow the correct procedure for standardizing the variables. The standard method relates the value of a data variable to the mean plus or minus a multiple or fraction of the standard deviation. Dr Mann used an unconventional method not related to the means of the original data….
Another criticism was that PCA may not have been an appropriate model because PCA loads variance on a succession of PCs. This could produce a Hockeystick shape from a very small subset of the data, in effect from outliers. As the critics showed, using PCA in a certain way enables the modeler to produce “hockey-sticks” from random data values.
Several years ago, I discussed this problem of statistical modeling with a senior Singaporean statistician who had 40 years' experience in medical statistics. He said that the biggest problem was spurious correlation: finding correlation purely by chance where there is no causality.
The point he made has stuck with me. He said that we should rely on statistical models only if we know the data very well and we know the theory that relates the variables. My impression was that he did not believe that statistical correlation alone can support the claim that a theory has been proven.

ren
July 8, 2014 11:15 pm

This is the truth about TSI, and the effects are already visible in atmospheric temperature. These are the temperature changes in ozone.
http://oi58.tinypic.com/2m5cls5.jpg
http://iopscience.iop.org/1748-9326/5/3/034008/pdf/1748-9326_5_3_034008.pdf

ren
July 8, 2014 11:22 pm

The TSI graph corresponds exactly to the Ap graph.
http://ice-period.com/wp-content/uploads/2013/03/sun2013.png

joannenova
July 8, 2014 11:59 pm

Thanks to Anthony for updating the post and adding links to our detailed replies to Willis and Leif.
http://joannenova.com.au/2014/07/the-solar-model-finds-a-big-fall-in-tsi-data-that-few-seem-to-know-about/
http://joannenova.com.au/2014/07/more-strange-adventures-in-tsi-data-the-miracle-of-900-fabricated-fraudulent-days/
Despite being abjectly wrong, and in a documented and obvious way, neither man has acknowledged, let alone apologized, for their disgraceful behaviour.
It all got a bit overexcited on the "bermuda-triangle" thread where logic and manners disappeared without a trace. Leif exclaimed David's work was "almost fraudulent" and a "blatant error" because Leif didn't realize David's graph was 11-year smoothed (which was written on the graph). Willis repeated Leif and called the data "bogus". So David graphed Leif's own data and showed the fall in the 11-year smoothed TSI was there, and apparently news to Leif. What ho! Are we having fun?
Willis says:” …. it’s not science in any form, which is all that I said.” Steady on, Willis, you also said we “made a wildly incorrect claim”, are like “pseudo-scientists”, who made a “horrendous newbie mistake” and we “invented data” too. You were wrong about all these, which was obvious to anyone who read the graph or reads my site. Have you made any effort to correct your false statements? I have not seen it. Willis went on to say David is “hiding everything he can from public view”, and “taking up the habits of Mann and Jones”. Just a bit of false equivalence there.
Leif went on to misread three small dots and claim the dataset was "doctored" and the "fabrication" of data was a "fact". Furthermore, "Mr Evans did not intend to have anybody discover his little 'trick'." All of which was also false, but somehow very convincing to Willis.
Willis is now repeatedly saying we haven’t released the full model, except we have. Not only does the spreadsheet contain all the data and code, but the attachment linked in the post http://jonova.s3.amazonaws.com/cfa/excerpts.pdf contains all the equations and information needed to run the model. The only parts not yet released from the full paper are not things the model depends on, though they corroborate the model and we’ll be discussing them soon.
Willis claims it’s not worth commenting on my site because the readers there are an “infestation” of stupid “true-believer adherents” and “credulati”. (Does he mean like someone who believes everything Leif Svalgaard says?) It couldn’t possibly be that Willis is afraid to comment on my site (where everyone knows how wrong he was) could it?
No doubt he will find a reason to say I have taken these phrases out of context (I quote the exact words with links on my site, see the links above). He may also quote his "best wishes" or "sincere congratulations" as if these neutralize the baseless insults. But what do sincerity and wishes mean from someone who repeatedly makes false statements and won't correct them?
Anthony and I have had a long friendly conversation which I’m grateful for. As a fellow blogger, I am sympathetic to the impossible task of stopping long comment threads from degenerating into name-calling. Everyone would help Anthony if they were careful to write accurately, and understand what they talked about before they made definitive claims.
Both men have my email and access to freely comment on my site. Do either care about accuracy?

July 9, 2014 12:02 am

Steve from Rockwood says:
July 8, 2014 at 3:21 pm
Steve I think you made an error in your calculation. I came up with .1 deg K using your numbers.
I’m going to repeat what you did in a slightly different way assuming a 300K earth for convenience. Solar output = 100% = 300K. A .1% Solar variation translates into a .3K temperature variation.
You might want to recheck your calculation; your 0.002 deg K is wildly off. For a .5 W/m^2 variation, which translates to .5/1365 × 300, I also get a .1K variation, confirming the correctness of the W/m^2-per-degree number which I used at the top of this comment.
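For what it's worth, the linear scaling used in this comment can be checked directly. For comparison, the sketch below also includes the Stefan-Boltzmann fourth-root relation, under which a fractional change in irradiance produces only a quarter as large a fractional change in temperature (the 300 K and 1365 W/m^2 figures are the round numbers used above):

```python
T = 300.0       # assumed effective temperature, K (round number from the comment)
S = 1365.0      # total solar irradiance, W/m^2

dS = 0.001 * S  # a 0.1% variation, about 1.4 W/m^2

dT_linear = T * dS / S          # the comment's linear scaling
dT_stefan = 0.25 * T * dS / S   # T ~ S^(1/4)  =>  dT/T = (1/4) * dS/S

print(round(dT_linear, 3))  # 0.3
print(round(dT_stefan, 3))  # 0.075
```

Which of the two scalings is appropriate depends on what the 300 K is taken to represent, which is exactly the sort of assumption worth stating before comparing answers.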

tonyM
July 9, 2014 12:05 am

lsvalgaard says:
July 8, 2014 at 7:46 pm
“To claim success if the drop is at least 0.1C is meaningless as such a small change is well within random fluctuations.”
…………………………..
There is quite a difference between “claiming success” and not being able to falsify a hypothesis. The latter is not the same as claiming success (unless one repeats this a number of times).
The “0.1C is meaningless” is trite given it is a walk away falsifiability criterion in absolute terms as I understand it. I imagine it already incorporates all sorts of errors such as timing within three years (Evans says the range of timing impact is from 10 to 20 years), measurement errors of Avg T etc. A walk away falsifiability test is not the same as an expectation of outcome.
If you have a better suggestion please put it forward; that is the purpose of having an open discussion.
Your question of what Dr Evans learns if it fails is open ended and does cover a lot of science. If an idea looks plausible, is subjected to “testing” on past data (I imagine it has been tested blind on different periods) and holds up then it would indeed qualify for testing in real time.
Science would be pretty dead if we never took it to this stage, or if the prospect of failure were an impediment to testing simply because we can't learn much more from a failure than a confirmation of the IPCC analysis of little effect from TSI. Such a criterion would cut out a lot of hypothesis testing in science.

farmerbraun
July 9, 2014 12:09 am

Willis Eschenbach says:
July 8, 2014 at 10:48 pm
Willis , if you start to get a bit tired, I’ve got a spare 20 ton excavator that I could lend to you 🙂

ren
July 9, 2014 12:10 am

“A UV index of 11 is considered extreme, and has reached up to 26 in nearby locations in recent years,” notes Cabrol. “But on December 29, 2003, we measured an index of 43. If you’re at a beach in the U.S., you might experience an index of 8 or 9 during the summer, intense enough to warrant protection. You simply do not want to be outside when the index reaches 30 or 40.”
High elevation, thin ozone layer, and clear sky produce intense ultraviolet (UV) radiation in the tropical Andes. Recent models suggest that tropical stratospheric ozone will slightly decrease in the coming decades, potentially resulting in more UV anomalies. Data collected between 4300 and 5916 m above sea level (asl) in Bolivia show how this trend could dramatically impact surface solar irradiance. During 61 days, two Eldonet dosimeters recorded extreme UV-B irradiance equivalent to a UV index (UVI) of 43.3, which is the highest ground value ever reported. If they become more common, events of this magnitude may have societal and ecological implications, which make understanding the process leading to their generation critical. Our data show that this event and other major UV spikes were consistent with rising UV-B/UV-A ratios in the days to hours preceding the spikes, trajectories of negative ozone anomalies (NOAs), and radiative transfer modeling.
http://journal.frontiersin.org/Journal/10.3389/fenvs.2014.00019/abstract

ren
July 9, 2014 12:20 am

“A UV index of 11 is considered extreme, and has reached up to 26 in nearby locations in recent years,” notes Cabrol. “But on December 29, 2003, we measured an index of 43.”
http://oi60.tinypic.com/8w0aid.jpg
http://www.swpc.noaa.gov/SolarCycle/Ap.gif

joannenova
July 9, 2014 12:22 am

A reply to NikFromNYC: July 8, 2014 at 8:18 pm
Exhibit A — so any model with five parameters is thus “proven wrong”? Does it mean all models can only be right if they use 4 or less… ?
Exhibit B — Lubos didn’t understand the main point of David’s theory and didn’t read the email where David explained that. See our reply. http://joannenova.com.au/2014/06/lubos-and-a-few-misconceptions/
The notch implies an 11 year delay. Other independent researchers corroborate that. http://joannenova.com.au/2014/06/big-news-part-iii-the-notch-means-a-delay/
As for Chester? The solar assumption was only temporary, and we drop it and test it openly. http://joannenova.com.au/2014/06/big-news-part-vii-hindcasting-with-the-solar-model/
We might be wrong, but not from any of these points.

ren
July 9, 2014 12:28 am