Readers may recall the contentious discussions that occurred on this thread a couple of weeks back. Both Willis Eschenbach and Dr. Leif Svalgaard were quite combative over the fact that the model data had not been released. But that aside, there is good news.
David Archibald writes in to tell us that the model has been released and that we can examine it. Links to the details follow.
While this is a very welcome update, from my viewpoint the timing could not be worse, given that a number of people, including myself, are in the middle of the ICCC9 conference in Las Vegas.
I have not looked at this model, but I’m passing it along for readers to examine themselves. Perhaps I and others will be able to get to it in a few days, but for now I’m passing it along without comment.
Archibald writes:
There is plenty to chew on. Being able to forecast turns in climate a decade in advance will have great commercial utility. To reiterate, the model is predicting a large drop in temperature starting right about now.
David Evans has made his climate model available for download here.
The home for all things pertaining to the model is: http://sciencespeak.com/climate-nd-solar.html
UPDATE 2:
For fairness and to promote a fuller understanding, here are some replies from Joanne Nova.
Another nonsense in science is the notion that we “know” almost all there is to know about any one subject. In reality, the more science learns about a single subject, the more questions are raised.
Consider, for example, the sun. Does anyone suggest that there are fewer questions about the sun today than there were 500 years ago? Go back 500 years and ask a scientist about the sun; he would have told you that just about everything there was to know was already known.
In contrast, today we have a great many questions about the sun that were not even dreamed of 500 years ago. The same will be true 500 years from now: we will know a great deal more about the sun, but for every question answered, two more will take its place.
And what if, indeed, the surface temperature does not drop?
http://weather.unisys.com/surface/sst_anom.gif
Observations are moving the solution and the consequences of the solar magnetic cycle interruption along.
http://news.yahoo.com/earths-magnetic-field-weakening-10-times-faster-now-121247349.html
“Earth’s Magnetic Field Is Weakening 10 Times Faster Now
…Previously, researchers estimated the field was weakening about 5 percent per century, but the new data revealed the field is actually weakening at 5 percent per decade, or 10 times faster than thought. As such, rather than the full flip occurring in about 2,000 years, as was predicted, the new data suggest it could happen sooner.
Floberghagen hopes that more data from Swarm will shed light on why the field is weakening faster now….”
The Swarm satellite analysis (based on six months of data; Swarm is a set of three satellites launched in November 2013 to study unexplained recent rapid changes to the geomagnetic field) has found that the earth’s magnetic field is weakening ten times faster than it has in the recent past (the last 1,000 years): 5% per decade over the last six months, rather than 5% per century over the last 1,000 years or so. (Note that the north geomagnetic pole’s drift velocity suddenly increased by a factor of ten starting in the 1990s, which theoretically supports the assertion that a geomagnetic excursion is taking place. The sudden change in the northern magnetic pole’s drift velocity was one of the reasons the European space agency found the half billion dollars in funding for the Swarm satellite mission.)
This is a big deal for a dozen different reasons (assuming I understand the cause of the cyclic abrupt climate change in the proxy record and understand what is currently happening to the sun). There will be significant high-latitude cooling (whatever is causing the geomagnetic field change has inhibited the solar magnetic cycle’s modulation of planetary clouds). The fun is not limited to a 180-degree change in the climate crisis: the physics of why and how a change in the solar core causes and modulates planetary magnetic fields will have a profound effect on cosmology and fundamental physics.
Leif, it appears you have spent zero time studying planetary magnetic paradoxes and astronomical paradoxes (which is not unusual, as specialists specialize), and you appear to be incapable of thinking outside of your group’s paradigm, so your comments on this subject, based on an incorrect paradigm, are almost irrelevant. (It is inconceivable to almost everyone who works within a paradigm that some of their fundamental beliefs could be incorrect, so there is a real mental barrier to addressing paradoxes, to even discussing paradoxes, to labeling an anomaly a paradox, and so on.) This is not a theoretical problem (assuming I understand what is happening); it is a significant issue that will need to be addressed, and there will be more observational evidence to support what I am asserting.
Comment: There must be a physical explanation for why the geomagnetic field intensity is suddenly decreasing 10 times faster than expected. A paradox is created when there is no physical explanation for what is observed. The solution to this paradox is that the assumed model of the sun and the stars is fundamentally incorrect. (As most are aware, it is not possible with current technology to send probes into a star, a magnetar, a neutron star, a supermassive ‘black’ hole, and so on; therefore, what is believed to physically occur when a very, very large body collapses is assumed, and what is assumed can be incorrect.) There are piles and piles of astronomical anomalies that are explained by what happens when very, very large bodies collapse.
It is not possible for a field intensity change of the currently measured rapidity (a 5% decline in geomagnetic field intensity per decade) to occur if the physical cause of the geomagnetic field is thermal motion of the liquid core. Electric currents are induced in the liquid core (a counter-EMF is produced when a magnetic field changes, and it resists the change, as per Maxwell’s equations applied to a conductive liquid), and there is no core change that could suddenly occur in the last decade to produce a factor-of-ten increase in the rate of decline of the geomagnetic field intensity.
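For readers who want the order-of-magnitude version of this induction argument, the standard free-decay estimate runs as follows. This is an editorial sketch with illustrative textbook values for core conductivity and size, not figures from the comment above:

```latex
% Ohmic (free-decay) timescale of the slowest dipole mode for a conducting
% core of size L and conductivity sigma -- a standard magnetohydrodynamic
% estimate. The numerical values are illustrative assumptions.
\tau_{\text{decay}} \sim \frac{\mu_0 \,\sigma\, L^{2}}{\pi^{2}}
  \approx \frac{(4\pi\times10^{-7}\,\mathrm{H/m})
                (5\times10^{5}\,\mathrm{S/m})
                (3.5\times10^{6}\,\mathrm{m})^{2}}{\pi^{2}}
  \approx 2\times10^{4}\ \text{years}
```

If that estimate is roughly right, pure ohmic decay cannot produce decadal field changes; whether diffusion is the right model for the observed decadal variation is exactly what is in dispute in this thread.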
The above statement is supported by a half dozen additional fundamental paradoxes concerning the ‘standard’ theory for the generation of planetary magnetic fields (another example is the magnetic field orientation of Uranus and Neptune) and a couple of dozen astronomical paradoxes. For example, the standard planetary magnetic field model requires turbulence caused by a significant thermal gradient. A significant thermal gradient is generated when a pot of water is placed on a stove element; it does not occur if the boiling water is placed in an insulated thermos. The core thermal gradient is theoretically believed to be caused by the latent heat released as the liquid core solidifies. The problem (the paradox) is that calculations indicate the earth’s solid core is 800 million to at most a billion years old. Calculations also indicate that, without the latent heat generated at the liquid core/solid core interface, the thermal gradient due to heat loss to the surface of the planet is roughly a factor of 10 less than the gradient generated by the solidifying core, which is not sufficient to create the thermal motion needed to generate the geomagnetic field. No thermal motion, no geomagnetic field, if the cause of the geomagnetic field is thermal motion of the conductive liquid in the earth’s core. Yet it is known (by proxy measurements) that the earth has had a magnetic field for at least 4 billion years. The geomagnetic field must also be of sufficient strength to protect water from being stripped from the atmosphere by the solar wind (calculations indicate that without a magnetic field the water is stripped from the atmosphere in a couple of million years). As the planet is 70% covered by water, there must have been a strong magnetic field protecting the earth for almost all of the geological past.
Pamela Gray says:
July 8, 2014 at 9:51 pm
You have left out an important possibility. It is not TSI but something correlated with TSI.
Let’s try a gedanken experiment.
Solar magnetic field —> cosmic rays (charged particles) —> clouds.
That is one possibility. Another is
UV —> atmospheric chemistry —> clouds.
There may be other possibilities. David will be asking, in a future post, for input from those with understanding in the many areas that may help resolve the issue. I’m just an amateur in the area, and my math is nowhere near as good as David’s. But I do get the general drift of what he proposes and is trying to accomplish. Part of the reason I get it is that I have had almost 60 years of experience in electronics. (I started my studies at age 10 and got my first amateur license, Technician and Novice, at age 13. Note: the Technician test was the same as the General test. I got my Radiotelephone First Class [commercial] at age 17 1/2, the youngest age allowed. During my career as a contract engineer I spent almost 5 years in aerospace.)
You can see why next season’s hurricanes will be weak.
http://weather.unisys.com/surface/sst_anom.gif
I listened to Willie Soon, and he hit a home run in showing how unreliable much of the data on solar activity is, in that none of the data from the various sources show any degree of consistency on what levels TSI has been at over the past century. It is as bad as, if not worse than, the present way sunspots are counted, which is subjective and not objective in the least.
This is why I am using solar criteria such as solar wind speed, Ap index, and solar flux in making a determination of the point at which the sun’s variability will have a significant impact upon the climate. I have listed the criteria many times.
Willie Soon, like myself, is also of the opinion, based on the study of other sun-like stars, that our sun is much more variable than the mainstream wants you to believe, and that this variability was clearly present during the Maunder Minimum and, more recently, the Dalton Minimum.
The climate summit has been great, and it just reinforces all of my thoughts about the sun, the climate, and the connections between them.
“This is a common straw man. The issue here is not about ‘mechanism’, but about the lack of description of how the parameter set is derived. That is: given solar input, temperature, ‘atomic tests’, volcanic activity, and a range of years, how does one derive the parameter set?”
I think we could say this a hundred times and they still would not get it.
So far it is all going (climate/solar) the way I expected. We still have to wait a few months to see whether extremes or persistence in weather patterns pick up once again as the maximum of solar cycle 24 ends, and whether the temperature decline starts to become more definitive in response to the expected low solar activity.
Ocean heat content will play a role in holding temperatures higher than they might otherwise be, while volcanic activity, if it picks up, would aid in the decline; so there are unknowns that will make it hard to pin down how large the temperature decline may be.
“Salvatore Del Prete says:
July 9, 2014 at 7:16 am
Talking about this is an exercise in futility. What will matter will be whether the model is correct or not correct going forward.”
##########################
My model is that the temperature in 10 years will be the same as today, +/- 0.2C.
What will matter is whether the model is correct going forward?
Well, Salvatore, that is a pile of crap.
If you asked me how I came up with this model and I said “I’m not going to show you,” you would be well within your rational rights to say “Well, Mosher, show how you did it or nobody cares.”
We can and we should investigate HOW a model is created before we test it. It’s well known that wrong models can give the right answer, only to fail spectacularly at some point.
Second, suppose a climate scientist came up with a new GCM that hindcast perfectly and predicted 3X the warming of old models. And suppose he told you that he built his model by twisting 11 knobs to hindcast. And suppose he told you, “Well, wait 20 years to see if my model is right.” You’d laugh. Why? Because that’s no way to build a model.
Like I said, I have a model: temperature will be the same 10 years from now, +/- 0.2C.
When I show you how I built it, you will laugh, and you won’t spend time waiting to see if it’s true or not. Same with David. Until he shows HOW he built it, nobody should care, and if it turns out right, people will still be within their rational rights to say “so what?” Until he shows how he built it, it doesn’t matter whether it is “right” or “wrong,” because it can be right or wrong by pure chance.
Pamela Gray says:
July 8, 2014 at 7:28 pm
“And UV is very good at killing stuff. Hell you can sterilize with the damn stuff.”
Including ozone in the stratosphere, which changes the radiative properties of the atmosphere; that is one of the many feedbacks that are not captured when solar variability is modeled as TSI alone.
“Peter Sable says:
July 9, 2014 at 1:06 am
A further nitpick: when training a model, you should keep half the data set for training and half for testing. From what I can tell, the entire temperature history is used as the training set. It’s no fun to have to wait 10 years to see if the model matches any sort of reality when that could have been done with existing data. Or not: there’s probably not enough existing data to actually do proper modeling, with too many low-frequency components and not enough time to see multiple periods.
##############
That is in fact what Willis and I and others have asked for. Either:
1. David’s results of this testing, which he promised but hasn’t delivered, OR
2. The code used to build the model, which would allow us to do that work for ourselves.
He is obligated to provide both. When Mann built a model, we demanded BOTH.
Willis is being consistent and principled, as I noted at JoNova. That is, he is demanding the same things demanded of Mann and Santer. The difference is we could FOIA Santer; we can’t FOIA David.
It’s sad that skeptics have lost sight of the fundamentals of science. It was the one area where they held some high ground.
Witness this. As the AGW story comes under pressure because of the free release of code and data, we see some skeptics stepping up with their own science.
and what do we see?
1. Creating a phony pal-reviewed journal.
2. Arguing against the free release of all the data and code for their own science.
In short, the skeptics stepping forward to replace or improve the old science are going backwards with respect to the principles they espoused before.
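As a concrete picture of the train/test discipline Peter Sable describes above, here is a minimal sketch in Python. Everything in it, the function names and the toy linear example alike, is an editorial placeholder, not code from the Evans release:

```python
# Minimal sketch of a holdout (train/test) evaluation for a time series.
# All names here are hypothetical placeholders, not the Evans model's code.
import numpy as np

def holdout_test(t, y, fit_model, predict, split=0.5):
    """Fit on the first half of the record, score on the held-out half."""
    n_train = int(len(t) * split)
    params = fit_model(t[:n_train], y[:n_train])  # fitting never sees the rest
    y_hat = predict(t[n_train:], params)          # out-of-sample predictions
    return params, np.sqrt(np.mean((y[n_train:] - y_hat) ** 2))

# Toy usage with a linear "model" and synthetic data:
t = np.arange(100, dtype=float)
y = 0.01 * t + np.random.default_rng(1).normal(scale=0.1, size=100)
params, rmse = holdout_test(t, y,
                            lambda t, y: np.polyfit(t, y, 1),
                            lambda t, p: np.polyval(p, t))
print(rmse)  # skill is judged only on data the fit never touched
```

The point is simply that a good score on the held-out half cannot be an artifact of curve fitting, which is why items 1 and 2 above matter.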
Willis Eschenbach:
Thanks for sharing. I’m intimately familiar with the steps that must be taken in designing a model for falsifiability, having made my living as a model builder and tester over a period of 11 years. In AR5, Chapter 11 of the report of Working Group 1 sketches out some of the steps. Prior to AR5, none of the models referenced by IPCC assessment reports were designed for falsifiability. Thus, none of these models were “scientific” as this term is defined by the federal government in its Daubert standard.
For falsifiability, there have to be the entities that are called “observed independent events.” Each such event has to be “out-of-sample” meaning that it was not used in the construction of the model. I’ll call the set of these events the “out-of-sample sample.”
Each event in the out-of-sample sample has to have an outcome (called a “bin” in Chapter 11) that belongs to the set of all possible outcomes; in the test of the model of Chapter 11, there are 10 possible outcomes. In statistical jargon, the count of the events in the out-of-sample sample having a particular outcome is called the “frequency” of this outcome. The ratio of the frequency of a particular outcome to the frequency of events of all descriptions in the out-of-sample sample is called the “relative frequency” of this particular outcome.
The model is run under the conditions that pertain to each of the events in the out-of-sample sample with the result that the relative frequency of each outcome that will be observed in the out-of-sample sample is predicted. The predicted relative frequencies are compared to the observed relative frequencies with respect to each of the possible outcomes in the set of them. If the predicted relative frequencies do not match the observed relative frequencies with respect to a particular outcome, a false claim has been made. One or more false claims falsifies the model.
That’s the process in a nutshell. There are complications owing mostly to sampling error that I’ve glossed over for brevity.
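To make the procedure concrete, here is a schematic sketch in Python (an editorial illustration, not Oldberg’s own code); the `tolerance` argument stands in for the sampling-error complications glossed over above:

```python
# Schematic version of the relative-frequency falsification test described
# above. This is an illustration of the procedure, not Oldberg's own code.
from collections import Counter

def observed_relative_frequencies(outcomes, bins):
    """Relative frequency of each outcome bin in the out-of-sample sample."""
    counts = Counter(outcomes)
    n = len(outcomes)
    return {b: counts.get(b, 0) / n for b in bins}

def model_falsified(predicted_freqs, outcomes, bins, tolerance):
    """One or more false claims (predicted vs. observed relative frequency
    differing by more than the sampling-error tolerance) falsifies the model."""
    observed = observed_relative_frequencies(outcomes, bins)
    return any(abs(predicted_freqs[b] - observed[b]) > tolerance for b in bins)

# Example with ten outcome bins, as in AR5 WG1 Chapter 11:
bins = list(range(10))
predicted = {b: 0.1 for b in bins}          # model claims uniform frequencies
outcomes = [0, 3, 3, 7, 9, 1, 4, 4, 4, 8]   # observed out-of-sample events
print(model_falsified(predicted, outcomes, bins, tolerance=0.15))
# -> True: bin 4 occurs far more often than the model claimed
```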
When this process has been or is about to be conducted, it leaves telltale signs. Thus far, I’ve not detected these signs in Dr. Evans’s writeup on his model. It seems more likely that, rather than test his model for falsity, he has conducted or is about to conduct an IPCC-style “evaluation” of it. In an “evaluation,” one or more predicted global temperature time series are made visually comparable to an observed global temperature time series by plotting the various time series on the same X-Y coordinates. This comparison cannot result in falsification of the model.
Terry Oldberg says:
July 9, 2014 at 9:08 am
This comparison cannot result in falsification of the model.
And more seriously, there is no description of how the parameter set is constructed [and that is the real Model – what has been ‘revealed’ is just a vehicle to run the model], so no sensitivity test is possible and thus no confidence interval can be computed.
My musings:
A. Cyclic, yearly, and daily TSI variation are germane here (the long-term trend is not an issue with regard to Evans’s 11-year notch-delay proposal). So let’s get some numbers under our belt from the following link:
1. “The change in the Sun’s yearly average total irradiance during an 11-year cycle is on the order of 0.1 percent or 1.4 watts per square meter.”
2. Average TSI yearly value is 1,368 W/m2
3. Average cycle change value is 1.4 W/m2
4. “Daily variation in solar output is due to the passage of sunspots across the face of the Sun as the Sun rotates on its axis about once a month. These daily changes can be even larger [i.e., -3.0 W/m2] than the variation during the 11-year solar cycle. However, such short-term variation has little effect on climate.”
Link: http://earthobservatory.nasa.gov/Features/SORCE/sorce_03.php
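A quick sanity check on how these figures hang together (editorial arithmetic, using only the numbers quoted above):

```python
# Sanity check: the 1.4 W/m2 cycle swing really is ~0.1% of mean TSI.
tsi_mean = 1368.0       # W/m^2, average yearly TSI (item 2)
cycle_change = 1.4      # W/m^2, change over an 11-year cycle (item 3)
print(f"{cycle_change / tsi_mean:.2%}")  # -> 0.10%, matching item 1
```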
B. However, clouds have a much greater effect.
1. From the simple presentation linked just below you can begin to see the effects of clouds on incoming radiation.
http://www.ccfg.org.uk/conferences/downloads/P_Burgess.pdf
2. Clive Best (see the link at the end of this section) has been working hard on this issue, and talking with NASA about cloud data. He says this:
“…recent measurements from the Clouds and the Earth’s Radiant Energy System CERES [5] show that the net average cooling effect of clouds is larger (-21 W/m2)…”
That’s a whole lot more than what the Sun does in terms of W/m2 variations. Recent developments and data are cementing this issue. Clouds, wispy as they are, beat the Sun by more than a length, as they say in horse-race language.
3. Clive goes on to say:
“The fall in cloud cover coincides with a rapid rise in temperatures from 1983-1999. Thereafter the temperature and cloud trends have both flattened. The CO2 forcing from 1998 to 2008 increases by a further ~0.3 W/m2 which is evidence that changes in clouds are not a direct feedback to CO2 forcing.”
I find this to be compelling evidence of an intrinsic null hypothesis that is fully capable of checkmating CO2 driven or solar driven weather pattern trends in global temperature data.
http://clivebest.com/blog/?p=5694
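To put the horse race in numbers, here is a rough comparison of Clive Best’s CERES cloud figure with the solar-cycle swing from section A. The divide-by-4 sphere geometry and ~0.7 co-albedo factor used to turn TSI into a global-mean forcing are standard adjustments, but they are editorial additions here, not Clive’s numbers:

```python
# Back-of-envelope comparison: CERES net cloud effect vs. the globally
# averaged forcing equivalent of the 11-year TSI swing.
cloud_net_effect = 21.0   # W/m^2, net cooling effect of clouds (CERES)
solar_cycle_tsi = 1.4     # W/m^2, top-of-atmosphere TSI change over a cycle
solar_forcing = solar_cycle_tsi / 4.0 * 0.7  # sphere geometry, ~0.7 absorbed
print(solar_forcing)                         # ~0.25 W/m^2 global mean
print(cloud_net_effect / solar_forcing)      # clouds win by a factor of ~85
```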
C. My take is that what is happening here on Earth regarding temperature trends is intrinsic to Earth’s own highly variable systems. Our own atmosphere and oceans are far more capable of driving trends and creating lags than any variation the Sun can throw at us at the top of the atmosphere. In addition, this hypothesis is measurable and has a plausible intrinsic mechanism.
D. But can intrinsic mechanisms be used to determine long-term (i.e., longer than seasonal predictions but shorter than a Milankovitch cycle) weather pattern variations, without regard to catastrophic events (i.e., super equatorial volcanic eruptions, http://climate.envsci.rutgers.edu/IVI2/)? Broadly, I think yes, given a more complete understanding of oceanic and atmospheric oscillations and their interactions. Eventually I see a suite of variously weighted and combined statistical/dynamical ENSO models (some more statistical, others more dynamical) coupled with GCMs (with no CO2 global warming, aerosol, or solar fudge factors), allowing various resettings (some more often than others) based on current conditions (especially cloud data) to more narrowly define the prediction as we get nearer to the designated predicted time.
One more comment on my musings: as for the ENSO model parts, I think they need to include calculations based on data regarding oceanic absorption of surface solar insolation, as well as heat loss via evaporation in that same band, under clear-sky and cloudy conditions, based on oscillations calculated from MEI data.
Terry, I know it has taken some time to break into my hard head, but I am finally coming round to your falsification comments as being quite reasonably supported and exacting.
Pamela Gray:
It’s been hard to get this across to our colleagues in the climate blogs. To hear that my message has reached you is quite heartening!
William Astley says:
July 9, 2014 at 8:06 am
Observations are moving the solution and the consequences of the solar magnetic cycle interruption along.
There has not been and will not be a solar magnetic cycle ‘interruption’ [with the usual meaning of that word]. Apart from the fact that you have not defined what that means.
joannenova says:
July 9, 2014 at 12:34 am
Joanne, first, my thanks for coming here to defend your model. That’s what science is about. Were my comments true? I think they were. Were they over the top? Quite possibly; I get passionate about these matters, and I sincerely apologize to you for any excesses of tone and style in my comments wherever they occurred.
However, I make no apology for their content.
Both Leif and I pointed out that to get your precipitous drop in the results, you had invented 900 days’ worth of data and tacked it onto the end of the real data before running the 11-year smooth. Leif called this “almost fraudulent,” which, as I stated in the other thread, I thought was an over-reaction. It is NOT, however, standard scientific practice in any form.
As to the “newbie mistake”, you invented data, tacked it on to the end of real data, and used the world’s worst smoothing for solar data (an 11-year boxcar) on the result. This gave results which show a precipitous fall at the end, purely due to your methods. If you look at the underlying data, you’ll see that no such fall exists. It is created by a combination of adding invented data (which I correctly described as “bogus”) and a really bad choice of smoothing method. Regardless of the words used to describe it, it is hardly defensible science.
Joanne, you made up 900 days of “data” and tacked them onto the end of the real data. I fail to see how that is not “inventing” data.
And in my book, anyone who refuses to publish the data and code when they publish the study and the results is a “pseudo-scientist”. If you don’t want the label, don’t hide your work.
No, because they are demonstrably true.
You still don’t get it, I guess. You think that because you are on the side of the angels you get to hide your data and code from public view and still call it science.
Again, when you make up data, giving it an arbitrary value, and add it to real data, that is doctoring the data and it is fabrication. I did not say, nor do I agree with, Leif’s contention that “Mr. Evans did not intend …”, because I don’t have any information as to David’s intent. I try to avoid commenting on motive and intent because often I’m not clear on my own motives and intentions, so how could I know David’s?
Please re-read what I have said. I have never said that you haven’t released the “full model”. Both Steven Mosher and I agree that you have released the model … but we also agree that it is of no use. We still cannot test the model, because you have not released the code for determining the values of the arbitrary parameters. Without that information, your model is not testable.
We need that information to do “out-of-sample” tests, the simple tests that I described as “grade-school stuff”. At the time I made the comment, you assured me that the tests had already been done … so where are they? That is the other part that you have not yet released.
Joanne, when I was commenting at your site, over and over you, David, and other people kept saying some variation of “Wait until it’s all released before you comment on it” … so that is what I’ve done with regards to your site. Now, having specifically told me to hold my comments until all is revealed, and after I complied with that request, you claim I’m “afraid to comment” because I’ve done exactly what you and the others at your site asked me to do?
When and if you do get around to revealing it all, and give up this game of revealing it in dribs and drabs, then we’ll have something to discuss at your site. Until then, I’ll comment here, thanks.
As you can see by my response above, I have done no such thing as claiming I was quoted out of context. I said that
a) you invented 900 days of imaginary data and added it to the real data, and then
b) smoothed the combination of real and imaginary data with the world’s worst smoother for the purpose, leading to
c) a wholly fictitious “fall” in TSI.
And while by that point in the discussion perhaps my adjectives were somewhat over the top, I make no excuses for the content of what I said, and despite all of your bluster, all of those are completely true.
I am widely known in the blogosphere for admitting it when I am wrong … but I cannot “correct” the true statements that I have made. And I do send you my best wishes. You have gotten yourselves into a terrible bind by your refusal to share your work. This is damaging to you and to the skeptic cause, and it saddens me to see it happening, on both accounts.
I cannot speak for Leif. I am very interested in accuracy … which is why I don’t invent 900 days of data and paste them onto real data before doing my own analyses, and why I select my filters carefully rather than heedlessly using an 11-year boxcar smooth on sunspot data, which turns times of high solar activity into times of low solar activity and vice versa …

I’m sorry … but that’s a “newbie mistake”.
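For anyone who wants to see the artifact Willis is describing without waiting for the Evans code, here is a self-contained toy demonstration. The series is synthetic (a fake 11-year cycle), not the model’s actual input; only the mechanism, padding the end with invented low values and then running an 11-year boxcar, is the point:

```python
# Toy demonstration of the endpoint artifact: padding a cyclic series with
# invented low values and smoothing with an 11-point boxcar manufactures a
# "fall" at the end. Entirely synthetic data; not the Evans model's inputs.
import numpy as np

years = np.arange(1900, 2014)
tsi = 1366 + 0.7 * np.sin(2 * np.pi * (years - 1900) / 11)  # fake 11-yr cycle

# Append three invented low years (roughly the 900 days at issue):
padded = np.concatenate([tsi, np.full(3, tsi.min() - 0.5)])

def boxcar(x, width=11):
    """Centered 11-point moving average: the smoother Willis objects to."""
    return np.convolve(x, np.ones(width) / width, mode="valid")

print(boxcar(tsi)[-1])     # ~1366.0: the real smoothed series stays flat
print(boxcar(padded)[-1])  # noticeably lower: a fall created by the method
```

The underlying cycle never falls; the smoothed endpoint does, purely because of the invented data and the filter choice.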
Joanne, you still seem to be holding on to the idea that your five years of hard work somehow buy you an exemption from the normal rules of scientific transparency followed by the rest of us. If I tried what you have just done, if I were to announce some great, earthshaking breakthrough and then when asked for details of data and code I’d said “Sorry, I worked really hard on this project, so you have no right to ask me for details. I’ll release the data and code when I’m damn good and ready, and not before, so don’t bother me” … well, I’d be attacked by everyone on both sides of the climate aisle, and I’d get my okole handed to me on a platter.
And rightly so. No code, no data, no science … and that is the position you now find yourself in. And that’s the other reason I’ve withdrawn from commenting at your site until you do finally release all the data and code … because I much prefer to comment at scientific sites.
In addition, at your site, because you discussed your model and results for ten posts without providing data or testing, you have gathered around you an entire coterie of folks who already believe in your model, despite the fact that they have neither seen any testing of the model nor been given the means to test it themselves. These kinds of “true believers” are very difficult to deal with because, like the brainless followers of the climate alarmists, they believe without evidence … which makes them very unpleasant to deal with.
In closing, you said in an earlier comment:
joannenova says:
July 9, 2014 at 12:22 am
Of course not. All it means is that you can fit an elephant with five parameters. As a result, the fit between your model results and historical temperatures, which seems to impress you and the less-than-inquisitive folks at your site so much, is meaningless. The fact that you can fit the historical temperatures means nothing at all about the quality or strength of your model. With 11 parameters, I would be shocked indeed if you could NOT fit the temperature.
It also means that the only way to test such a model is by “out of sample” testing … but we can’t do that until you reveal how you calculated the values of your arbitrary parameters.
And of course, I’m also interested in the results of your own use of out-of-sample testing, which you previously assured me have already been done. Is there a timeframe for posting them as well?
My best regards to you, Joanne. I am not at all happy about this turn of events, but the rules of science apply to everyone, and they require complete transparency.
w.
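Willis’s elephant point is easy to demonstrate generically. The sketch below fits an 11-parameter polynomial to pure noise: the hindcast looks excellent and the out-of-sample half fails completely. It is a generic curve-fitting illustration, with no connection to the actual Evans model or its parameters:

```python
# Generic illustration of "with 11 parameters I can fit anything": a good
# in-sample fit to noise says nothing about out-of-sample skill.
# Synthetic data; nothing here is the Evans model or its parameter set.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 60)
temps = rng.normal(size=60)      # a pure-noise "temperature record"

train = slice(0, 30)             # hindcast period
coeffs = np.polyfit(t[train], temps[train], deg=10)  # 11 free parameters
fit = np.polyval(coeffs, t)

print(np.std(temps[train] - fit[train]))  # small: the hindcast looks great
print(np.std(temps[30:] - fit[30:]))      # enormous: no predictive skill
```

Which is exactly why the derivation of the parameters, not the quality of the hindcast, is what has to be published and tested.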
Willis Eschenbach, is the “model” not, in its essence, an attempt at a forecast? We only have to wait three years for a check. Do you have a different forecast? If yours comes true, then you win.
For example, Vukcevic’s forecast works much better than NASA’s.
I may be missing something here, but it seems to me that the use of time-specific results (e.g. the atmospheric nuclear bomb test data) will make it difficult to do out of sample testing.
William Sears:
Good point. In the construction of a model, a difficulty to be overcome is that the events of the future differ from the events of the past when the various events are described in sufficient detail. Sometimes this problem is successfully addressed through abstraction: the descriptions of the various events are abstracted away from the details that distinguish the events of the past from the events of the future.
My solar/climate connection theory is easy to understand in that it has hard solar parameter numbers and says that if those numbers are reached, the climate will have an x response.
I do not place much faith in any models when it comes to predicting the climate, because there are too many unknowns, the data will never be complete enough, and the accuracy of the data will always be in question, not to mention getting the beginning state of the climate correct.
A great example of the data being off is that there are no accurate records of what TSI has been over the last 100 years; I think there are as many as 14 different data sets. Logic then follows: how could one produce an accurate model if a major part of the puzzle (TSI) is missing? This applies to the AGW models as well as to this solar model we are all discussing. It does not make much sense to me.
And if that is not bad enough there are thresholds which can throw everything off.
My approach is not to be bold but to take a more general approach, saying that if solar parameters reach x levels, expect x secondary effects, which will move the climate in an x direction due to solar variability and the secondary effects.
I also say random events such as volcanic activity, or the current terrestrial situation such as OHC, could to a varying degree cause projections to be off in timing and in the magnitude of change.
If my solar projections are approached, or better yet reached, and the climate in general terms goes in the direction I predict, I will have a very strong case for my theory, which specifies the amount and duration of solar variability needed to have an impact on the climate.
William Sears says:
July 9, 2014 at 11:49 am
I may be missing something here, but it seems to me that the use of time-specific results (e.g. the atmospheric nuclear bomb test data) will make it difficult to do out of sample testing.
I don’t think so, as the model should work with whatever data it is given. But an issue here is: where do they get the extremely large effect of the atomic bomb tests from? It seems way too high.
This type of claim needs to be examined by its own authors for uniqueness and for the physical reality of its parameters, neither of which has been offered; so by default scientific thinking it is assumed to be non-unique and to have no physics behind the various parameters. In that situation, this is a classic exercise in making a black-box bet on the future, the likes of which cause various stock market gurus to become famous for a season or two. There is a perfectly good chance of a sudden cooling plunge from chaotic statistical physics alone, as revealed in the main Greenland ice core. Also: just how was that plunge achieved, how unique and robust is it to different parameter settings, and what is the effect on it of just matching temperature without the massive nuclear-testing correction to temperature? That is what should have been addressed in the initial release, since these are the first questions asked by any trained scientist who has been disciplined by his elders.
How much harder will it now be to further expose a climate alarm hoax when, instead of behaving as whistleblowers, mainstream skeptics are seen as wiggle-matching mavericks? The big red flag for me was the blunt claim that the future would tell, when a plunge would in no way support such a model in the same way that the lack of a plunge would falsify it. And of course the plunge could also be indefinitely extended beyond any further pause by simply re-running the training parameters. This seems like a big PR stunt rather than a real model at all: a gimmick in order to claim an opposing model, to fire up the troops who can now point to an alternative. The tell here is how unrealistic it was for the authors not to expect criticism based on mere wiggle matching. That now sounds like political pandering for support against normal, everyday scientific criticism.
The final quote from Lubos Motl towards a wiggle matching fiasco was: “Dear Jo, I won’t join you in that cesspool. Please ask someone else, like David. / This article was about a technical topic – why the “model” is self-evidently wrong. It’s unfortunate that by this kind of obstructionism and ad hominem attacks, you are trying to convert it to something completely different.”
http://motls.blogspot.com/2014/06/david-evans-notch-filter-theory-of.html#comment-1449463088
Joanne simply claims Lubos misunderstood the claim. I imagine he understood it only too well.
Terry Oldberg says:
July 8, 2014 at 9:30 pm
Thus, the climate sensitivity does not exist as a scientific concept.
I can think of an example of sensitivity: a creek can be sensitive to rainfall, with flow rates jumping all over the place in response to it.
Tisdale has brought up the idea of step changes.
http://tinypic.com/view.php?pic=15p4uia&s=7#.U72NzPldUV0
What could cause such steps? I think it would be consistent with a system that is at times unstable or highly sensitive. Perhaps the problem with agreeing on what the climate sensitivity is, is that it varies.
Ragnaar:
Your response to my post seems to pose a question. If so, thank you for posing it.
The flow of a stream is sensitive to the local flow of rain. Then why, you seem to ask, is it unscientific to claim that the change in the global temperature is sensitive to the change in the logarithm of the atmospheric carbon dioxide concentration?
This, however, is not the claim that is unscientific. The unscientific claim is that the ratio of the change in the global temperature AT EQUILIBRIUM to the change in the logarithm of the atmospheric carbon dioxide concentration is CONSTANT. As the global temperature AT EQUILIBRIUM is insusceptible to being observed, the claim that this ratio is CONSTANT is not falsifiable. As it is not falsifiable, the claim is not scientific.
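For reference, the constant-sensitivity claim at issue is usually written in the logarithmic form below (an editorial rendering of the standard relation, not Oldberg’s notation):

```latex
% Equilibrium warming for a CO2 change from C_0 to C, with the equilibrium
% climate sensitivity S (degrees per doubling) asserted to be CONSTANT:
\Delta T_{\text{eq}} = S \,\log_2\!\left(\frac{C}{C_0}\right),
  \qquad S = \text{constant}
```

His point is that the equilibrium temperature change refers to a state that is never actually observed, so the asserted constancy of S cannot be checked against measurements.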
Steven Mosher says:
July 8, 2014 at 8:03 pm
Maybe you should have asked Willis to help you.
Then again, maybe not.