Guest Post By Bob Tisdale
For the past few years, we’ve been showing in numerous blog posts that the observed multidecadal variations in sea surface temperatures of the North Atlantic (known as the Atlantic Multidecadal Oscillation) are not represented by the forced components of the climate models stored in the CMIP5 archive (which were used by the IPCC for their 5th Assessment Report). We’ve done this by using the Trenberth and Shea (2006) method of determining the Atlantic Multidecadal Oscillation, in which global sea surface temperature anomalies (60S-60N) are subtracted from the sea surface temperature anomalies of the North Atlantic (0-60N, 80W-0). As shown in Figure 1, sea surface temperature data show multidecadal variations in the North Atlantic above and beyond those of the global data, while the climate model outputs, represented by the multi-model mean of the models stored in the CMIP5 archive, do not. (See the post here regarding the use of the multi-model mean.) We’ll continue to use the North Atlantic as an example throughout this post for simplicity’s sake.
Figure 1 (Figure 3 from the post Questions the Media Should Be Asking the IPCC – The Hiatus in Warming.)
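For readers who want to reproduce the calculation, here is a minimal sketch of the Trenberth and Shea (2006) subtraction, assuming you already have the two area-averaged monthly SST anomaly series as equal-length NumPy arrays (the function and variable names here are placeholders, not anything from the paper):

```python
import numpy as np

def amo_trenberth_shea(north_atlantic_anom, global_anom_60s_60n):
    """Trenberth & Shea (2006) style AMO: North Atlantic (0-60N, 80W-0) SST
    anomalies minus global (60S-60N) SST anomalies, month by month.
    Both inputs are assumed to be area-averaged monthly anomaly series."""
    return np.asarray(north_atlantic_anom) - np.asarray(global_anom_60s_60n)
```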
Michael Mann and associates have attempted to revise the definition of multidecadal variability in their new paper Steinman et al. (2015) Atlantic and Pacific multidecadal oscillations and Northern Hemisphere temperatures. Michael Mann goes on to describe their efforts in the RealClimate post Climate Oscillations and the Global Warming Faux Pause. There Mann writes:
We propose and test an alternative method for identifying these oscillations, which makes use of the climate simulations used in the most recent IPCC report (the so-called “CMIP5” simulations). These simulations are used to estimate the component of temperature changes due to increasing greenhouse gas concentrations and other human impacts plus the effects of volcanic eruptions and observed changes in solar output. When all those influences are removed, the only thing remaining should be internal oscillations. We show that our method gives the correct answer when tested with climate model simulations.
It appears their grand assumption is that the outputs of the climate models stored in the CMIP5 archive can be used as a reference for how surface temperatures should actually have warmed…when, as shown in Figure 1, climate models show no skill at simulating the multidecadal variability of the North Atlantic. (There are posts linked at the end of this article that show climate models are not capable of simulating sea surface temperatures over multidecadal time frames, including the satellite era.)
Let’s take a different look at what Steinman et al. have done. Figure 2 compares the model and observed sea surface temperature anomalies of the North Atlantic for the period of 1880 to 2014. The data are represented by the NOAA ERSST.v3b dataset, and the models are represented by the multi-model mean of the climate models stored in the CMIP5 archive. Both the model outputs and the sea surface temperature data have been smoothed with 121-month filters, the same filtering used by NOAA for their AMO data.
Figure 2
As illustrated, the data indicate the surfaces of the North Atlantic are capable of warming and cooling over multidecadal periods at rates that are very different from those of the forced component of the climate models. The forced component is represented by the multi-model mean. (Once again, see the post here about the use of the multi-model mean.) In fact, the surfaces of the North Atlantic warmed from about 1910 to about 1940 at a rate that was much higher than hindcast by the models. They then cooled from about 1940 to the mid-1970s at a rate that was very different from that of the models. Not too surprisingly, as a result of their programming, the models then align much better with the data during the period after the mid-1970s.
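NOAA’s AMO index is based on a 121-month running-mean smoothing; here is a minimal sketch of that filter as it might be applied to either series in Figure 2, assuming a complete 1-D monthly series with no gaps (names are illustrative):

```python
import numpy as np

def smooth_121_month(series):
    """Centered 121-month running mean (the filtering NOAA uses for its
    AMO index); assumes a complete 1-D monthly series with no gaps.
    The first and last 60 months are returned as NaN rather than padded."""
    series = np.asarray(series, dtype=float)
    window = 121
    half = window // 2
    kernel = np.ones(window) / window
    smoothed = np.full(series.shape, np.nan)
    smoothed[half:series.size - half] = np.convolve(series, kernel, mode="valid")
    return smoothed
```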
Steinman et al., according to Mann’s blog post, have subtracted the models from the data. This assumes that all of the warming since the mid-1970s is caused by the forcings used to drive the climate models. That’s a monumental assumption when the data have indicated the surfaces of the North Atlantic are capable of warming at rates that are much higher than the forced component of the models. In other words, they’re assuming that the North Atlantic since the mid-1970s has not once again warmed at a rate that is much higher than forced by manmade greenhouse gases.
What Steinman et al. have done is similar to subtracting an exponential curve from a sine wave…where the upswing in the exponential curve aligns with the last minimum-to-maximum swing of the sine wave…without first establishing a relationship between the two totally different curves.
MICHAEL MANN PRESENTED A CLEAR INDICATION OF HOW POORLY CLIMATE MODELS SIMULATE MULTIDECADAL SURFACE TEMPERATURE VARIABILITY
I had to laugh when I saw the following illustration presented in Michael Mann’s blog post at RealClimate. I assume it’s from Steinman et al. In it, the simulated surface temperatures of the North Atlantic, the North Pacific and the Northern Hemisphere (represented by the multi-model mean of the CMIP5-archived models) have been subtracted from the data. That illustration clearly shows that the climate models in the CMIP5 archive are not capable of simulating the multidecadal variations in the sea surface temperatures of the North Atlantic and the North Pacific or the surface temperatures of the Northern Hemisphere.
Figure 3
In other words, that illustration presents model failings.
If we were to invert those curves, by subtracting reality (data) from computer-aided speculation (models), the resulting differences would show how greatly the models have overestimated the warming of the North Pacific and Northern Hemisphere in recent years.
What were they thinking? That we’d let that go by without calling it to everyone’s attention?
Thank you, Michael Mann and Steinman et al (2015). You’ve made my day.
OTHER REFERENCES
We’ve illustrated and discussed how poorly climate models simulate sea surface temperatures in the posts:
- Alarmists Bizarrely Claim “Just what AGW predicts” about the Record High Global Sea Surface Temperatures in 2014
- IPCC Still Delusional about Carbon Dioxide
For more information on the Atlantic Multidecadal Oscillation, refer to the NOAA Frequently Asked Questions About the Atlantic Multidecadal Oscillation (AMO) webpage and the posts:
- An Introduction To ENSO, AMO, and PDO — Part 2
- Multidecadal Variations and Sea Surface Temperature Reconstructions
CLOSING
Some readers might think that Steinman et al. is nothing more than misdirection, a.k.a. smoke and mirrors. What do you think?
Thanks to blogger “Alec aka Daffy Duck” for the heads-up.
Here is Fig 3 from Steinman et al.
http://i60.tinypic.com/6z0tvo.jpg
Note: In Panel A, the phase-locked nature of the oscillations must have contributed mightily to the warming from 1975 to 1995. Then the authors only concede that the Pause occurred when the oscillations went negative after 2000.
They wrote in conclusion (my bold):
What a load of BS: in the coming decades the AMO and PDO will both be negative, and a grand cooling, along with a possible quiet solar magnetic period, spells quite a bit of trouble for mankind in the next 20 years. And a death knell for CAGW.
The models do not contain the effect of oceanic oscillations (A). Therefore the observed temperature record (B) and the model mean (C) can be used to estimate the internal variability (A).
A = B – C
This seems to me to be what Steinman et al. 2015 is all about. The paper claims that the model “back projections” can be used to demonstrate a loose fit between the warming and cooling caused by prior oceanic oscillations.
Why did they not combine the effects of oceanic oscillations and observed warming (B-A)? Since the range in A seems to be about 0.4 degrees Celsius and the warming (B) seems to have been around 0.7 degrees Celsius, this ought to give a non-trivial result. And C would be derived by a purely empirical method, would it not?
Then C = B – A. In effect, this approach would tell us what the output of the model should have been based on observations of the real world. Any model or combination of models could be used to test against C to estimate the fit between observations and the models. C could be compared with Csubn, where Csubn is the output from model n.
Moreover, if every model were tested by deriving the difference between C and Csubn, then a new multi-model statistic could be derived that would represent the multi-model best estimate of what should be observed.
I conclude, based on this thought experiment, that what is wrong with Steinman et al. 2015 is that the authors have treated the output from the models as being equivalent to observations.
The study design is fatally flawed.
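For what it is worth, the arithmetic in the thought experiment above can be written out as a minimal sketch, assuming B, C and an independent estimate of A are aligned time series; the array and dictionary names are hypothetical, not anything from the paper:

```python
import numpy as np

def internal_variability(observed_b, model_mean_c):
    """Steinman et al.'s approach as described above: A = B - C."""
    return np.asarray(observed_b) - np.asarray(model_mean_c)

def empirical_forced_component(observed_b, oscillation_a):
    """The alternative suggested above: C = B - A, an empirically derived
    forced component, given an independent estimate of the oscillations."""
    return np.asarray(observed_b) - np.asarray(oscillation_a)

def per_model_differences(empirical_c, model_outputs):
    """Csubn - C for each model n; model_outputs is a dict of
    {model_name: simulated series}."""
    return {name: np.asarray(c_n) - np.asarray(empirical_c)
            for name, c_n in model_outputs.items()}
```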
“Why did they not combine the effects of oceanic oscillations and observed warming (B-A)?”
Doing so would vividly highlight model error, e. Your C should have been subtracted from Csubn to give e: Csubn – C = e. If Steinman et al. were adherents of science, that is what they would have done; it is what scientists do when they want to assess the validity of the models.
That Steinman et al. chose the opposite, and attempted to redefine observations to fit the model failures, exposes them as pseudo-scientists. Along with their fellow pseudo-scientist editors at Science Mag, what they have done is ultimately to further destroy the reputation of science.
So it’s not a real pause when something natural makes the temperature pause; it’s only a pause when human activities cause a pause. So natural pauses don’t count, and neither do natural warming factors.
Someone might look back at this one day and shake their head.
There’s no reason you can’t shake your head now.
I am shaking my head now!!!!!!!
Some poignant quotes from an article about it in The Australian-
‘FORCES of natural climate variability have caused the apparent slowdown in global warming this century but the effect will be temporary, according to new research.
Byron Steinman, of the University of Minnesota Duluth, and Michael Mann and Sonya Miller, of Pennsylvania State University, found that these natural, or “internal”, forces had recently been offsetting the rise in global mean surface temperature caused by increasing greenhouse gas concentrations.
They published their results in the latest edition of the American journal Science.
The deceleration in global mean surface temperature this century despite rising greenhouse gas levels has fuelled the climate wars.
Greenhouse sceptics have seized on it as evidence that the Intergovernmental Panel on Climate Change and other scientists adopting the “consensus” view have exaggerated the risk of global warming.
Some research, including studies by Australian scientists, suggests that an increase in the heat taken up by the deeper waters of the Pacific as well as a pronounced strengthening of Pacific trade winds in recent years due to natural climate variability is responsible.
The team used modelling results from the big international science program, the Coupled Model Intercomparison Project phase 5, to estimate the externally forced component – mainly due to human activity – of northern hemisphere temperature readings since 1854.
“We subtracted this externally forced component from the observational data to isolate the internal variability in northern hemisphere temperatures caused by the Atlantic multidecadal oscillation and a component of the Pacific decadal oscillation,” Professor Steinman told The Australian. (These natural climate systems are defined by temperature patterns across the oceans and influence climate globally.)
“This showed that the current slowdown is being driven largely by a negative internal variability trend in the Pacific,” he said.
He said the negative shift had been counteracting some of the anthropogenic warming.
“In coming decades, the trend will likely reverse and accelerate the increase in surface temperatures,” he said.’
Welcome to post-normal science: “In coming decades the trend will likely…” [fill in whatever takes your fancy, folks]
Idiots!
Remember the process- Global Warming morphs to Climate Change morphs to Extreme Weather Events and now morphs to Forces of Natural Climate Variability morphs to The Trend Will Likely…???
Woohoo roll the drums and sound the trumpets! Victory over the skeptics and deniers at long last.
Bob
I mentioned SST and the period between 1910 and 1940 before in one of your other posts. From looking at how the Met Office addressed bucket corrections (i.e. with lots of assumptions and one cursory experiment performed 20-odd years ago), I don’t know if that rise in temperature is “real”.
The original data were adjusted so the change was less marked, but I’m wondering if even this is an under-estimation of bucket bias. There may not be that much of a temperature change in that period. Or it may even have been as warm as, if not warmer than, now in the early 20th century.
If anyone has time, you can check out the uncertainty description in the HADSST data sets. The bucket measuring technique hasn’t been characterised fully, which leaves a lot more uncertainty on the table. They don’t address the measurement process to the degree that a good scientist or engineer should, which makes me wonder what temperatures were really doing back then.
At any rate, a good post Bob.
mickyhcorbett75, the early (1910-1940) rise in North Atlantic sea surface temperature is greater with the source ICOADS data
Cheers
The following article appeared on the MSN home page for a few hours discussing the Steinman et al. article, plus an article on how long the pause would last. I thought that it was kind of interesting to see that the chances of the “pause” lasting 20 years were only 1%. My question is what happens when the pause lasts another 15 years (on top of the ~18 years it has already lasted), as it has in the historical past. I wonder what the odds of that happening are?
http://www.msn.com/en-us/weather/topstories/scientists-now-know-why-global-warming-has-slowed-down-and-it%e2%80%99s-not-good-news-for-us/ar-BBhZW8r
The more I learn, the less I know.
Dan Sage
There is an underlying trend caused by humans: global dimming until 1980 and then global brightening. Aerosols used to cool; now they do not, in the NH at least.
Is this just a reworking of Mann, Steinman and Miller (2014), replacing output from energy balance model(s) with that from GCMs? With the same ‘models too good to reject’ premise of the previous paper, as described by Matthew Marler above. Comprehensively examined by Nic Lewis at the time, as I recall.
Is ‘Science’ short of climate papers?
Science is All In on the pseudo-science that is ACO2=CAGW. Marcia McNutt & Co. seem to have forgotten, like Mann and Steinman have here, that model outputs are not observations. And most importantly, they want to bend reality to fit the model, because the model promises boatloads of research funding.
If there is, indeed, an underlying warming trend caused by GHGs, then it is a very low trend, certainly less than the theory predicts.
If the explanation for the pause is natural variability, then that variability also existed in the past. The “natural” extension would be to pull that past variability out of the past temperatures and arrive at the underlying GHG trend. Go back to 1880, estimate the trend. Theory has to be re-written.
How come Michael Mann, Trenberth, and Foster and Rahmstorf stop short of carrying out that natural extension? You know what? They have done exactly that, but have chosen not to present the results, because it says the Theory has to be rewritten.
Hi Bob,
I think you are being far too kind when you accept the notion that the MME mean is, in fact, a meaningful quantity. Indeed, calling the collection of models in CMIP5 a “statistical ensemble” is a bit of a travesty all by itself. They are not independent and identically distributed samples drawn from a distribution of perfectly correct models (plus noise). They fail in this on almost every specific criterion in the statement. They are not independent — they share code, history, and many of them were written as minor variants of the same program by a single federal agency that is funded at phenomenal levels because of what they predict, project, prophesy, whatever. They are not identically distributed objects generated by a random process unless dice were used at some point during the actual construction of the models (as opposed to within the models in some sort of Monte Carlo), although sometimes the code itself might look as though it was written by mad monkeys armed with dice. Nor are they drawn from a distribution of perfectly correct models plus errors that are collectively free from bias. Rather, since they share so much actual code and so many of the same limitations, they are almost certainly not collectively free from bias, including systematic bias introduced by shared errors in the dynamical assumptions and physics.
Finally, the process that they are modelling is not a linear, or even a well-behaved, process. The models being averaged individually fail to come close to replicating the dynamical spatiotemporal scales visible in the real world data. That is, they have the wrong autocorrelation, the wrong amplitude of fluctuation, and a generally diverging envelope of possible future trajectories per model, where some of the models included are so obviously incorrect that it is difficult to take them seriously but they are all included anyway because the worst models show the most warming and without that, even the MME mean would be far closer to reality and far less “catastrophic”.
Two other comments. Your curves above, for the most part, present lines as if those lines are “the” temperatures being presented. In actual fact, those lines all come with error bars. In a sane computational universe, those error bars would start at a substantial level in the present (HadCRUT4’s acknowledged total error is around 0.2 C in the present) and would increase to many times the present error in the increasingly remote past. IMO the claims for precision/accuracy in the remote past are absurd — HadCRUT4 claims total error in 1850 is only around 0.4 C for the global anomaly, twice that of 2015, for example. This is absurd, given the importance of things like Pacific Ocean temperatures in any estimate of global temperature or its anomaly and given the simple fact that ENSO was only discovered, named, and subsequently studied in roughly 1893. In 1850 vast tracts of the Earth were terra incognita, not inhabited or systematically measured by Europeans wielding even indifferent thermometric instrumentation let alone the comparatively high precision instrumentation of the present. If the best HadCRUT4 can manage is halving the total error claimed in 1850 with the vast collection of modern thermometers at their current disposal, including the entire ARGO array, there is something seriously wrong, and yet 0.2 C seems quite reasonable for a current error estimate, possibly generous given that HadCRUT4 does not, apparently, correct for certain systematic biases such as the UHI effect.
Still, decorating the lines that appear on your graphs with even error estimates that fail a statistical common-sense sanity check and are probably seriously underestimated is better than presenting the lines themselves as if they are free from error, or as if error is confined to the width of the drawn lines. Yes, it makes the graphs messier, but without them the graphs are potentially meaningless.
I cannot emphasize this point enough, because it is pervasive in public presentations of climate science. It also leads to my second comment. In the graph above of the AMO, PMO, and NMO, the displayed errors are truly absurd. As I noted above, ENSO was only discovered in 1893, and expeditions to study it were subsequently launched. Perhaps by 1900 it and the PMO were being observed in a reasonably systematic way by scientists, although at the time they doubtless had to launch “expeditions” to do so and I’m quite certain that the record is sparse and incomplete well into the 20th century. In contrast, the Atlantic was heavily trafficked and surrounded on nearly all sides by ports with cities and thermometers. Yet it is the NMO that has the large, apparently diverging error bars in the graph above, followed by the AMO (both errors exploding pre-1900) while the PDO was, apparently, known then to better precision in 1880 than it was in 1970.
Say what?
A second point to make is that these curves supposedly are the result of double differences. By that I mean that they are the result of data that has twice had a “mean” background behavior subtracted out. The first time is when actual thermometric data has some base value subtracted (as if this base value, often the result of a local average over some modern-era reference period, is known to infinite precision) to form the “anomaly”. The second time is when the global anomaly, either surface temperature or sea surface temperature, is subtracted out to discover the (A,N,P) multidecadal “oscillation”. From the sound of it, the curves above have a third subtraction, the CMIP5 MME mean.
There are rules for compounding precision. If I subtract two big numbers — such as (for example) 288.54 and 288.17 to make a small number, 0.37 — the small number loses three significant figures of precision. If I subtract two big numbers uncertain at some level — such as (for example) those same two values, each uncertain by a couple of tenths of a degree — the result is (in crude terms) 0.37 plus or minus roughly 0.3, which basically means that we have no idea what the result is. If we then take two of these numbers and subtract them, we get something whose error bar (roughly 0.4) is likely larger than the number itself.
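The quadrature rule the comment is appealing to can be sketched in a few lines; this is only an illustration, using the 0.2 C present-day error figure quoted above and assuming independent errors (which, as the comment argues, is itself a generous assumption):

```python
import math

def subtract_with_uncertainty(a, sigma_a, b, sigma_b):
    """Difference of two uncertain quantities: the central values subtract,
    while independent uncertainties add in quadrature."""
    return a - b, math.sqrt(sigma_a ** 2 + sigma_b ** 2)

# Two absolute temperatures, each known to about +/- 0.2 C:
anomaly, sigma = subtract_with_uncertainty(288.54, 0.2, 288.17, 0.2)
# anomaly = 0.37, sigma ~ 0.28: the difference is barely larger than its error.
# Subtract two such anomalies and sigma grows toward ~0.4 while the signal shrinks.
```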
These are serious problems. I’m using a very simple lab device to teach physics at the moment that has a wheel — actually I think it is a mouse wheel — that measures how far a cart travels at roughly 1 mm of precision. One then has to estimate things like velocity and acceleration from this mm-scale data. In a typical experiment, the cart moves along at speeds from 0 to 500 mm/sec, and the device samples the wheel output at a temporal resolution of maybe 100 Hz. Velocity is estimated by taking the difference of numbers like .687 and .689 (two successive wheel readings in metres, at mm precision) and dividing by 0.01 seconds (multiplying by 100) to get 0.2 m/sec. Acceleration is formed by taking two successive velocity estimates and subtracting them (and dividing by the sampling time). As you can see, there is a problem with this when the cart is moving this slowly. The acceleration thus formed has no significant digits left. It actually looks almost like a random variable perhaps very slightly biased from zero on a graph.
In the case of a rolling cart, of course, we can make certain assumptions about monotonicity and the second order linear nature of the underlying dynamical system that permit us to do better — smooth the data over multiple data points, fit higher order curves to the primary data and differentiate those fits instead of using direct data differences — but those all come at a price in precision and entail numerous assumptions about the distribution of actual errors in the measuring apparatus as well as the underlying data. Some of the results are non-physical — accelerations start to happen well before the change in data that signals e.g. an actual collision. There are no free lunches in data analysis as you are always limited by the actual information content of the data and cannot squeeze a signal out of noise without assuming a knowledge that all too often one does not have.
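That loss of significant digits is easy to demonstrate. Here is a small illustrative sketch (not the actual lab apparatus or its software), simulating a cart rolling at a steady 0.07 m/s, a position readout quantised to 1 mm, and finite-difference estimates at 100 Hz:

```python
import numpy as np

dt = 0.01                              # 100 Hz sampling
t = np.arange(0.0, 2.0, dt)
true_position = 0.07 * t               # steady 0.07 m/s, zero acceleration
measured = np.round(true_position, 3)  # wheel reports position to the nearest mm

velocity = np.diff(measured) / dt      # first difference
acceleration = np.diff(velocity) / dt  # second difference

# velocity jitters between 0 and 0.1 m/s even though the true speed is constant;
# acceleration swings by roughly +/- 10 m/s^2 of pure quantisation noise
# for a cart that is not accelerating at all.
print(velocity.min(), velocity.max(), acceleration.std())
```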
In any event, I call foul on the AMO/PMO/NMO data. The error bars are completely unbelievable (a few hundredths of a degree of precision in an anomaly of an anomaly, and larger errors in 1970 than in 1880, really?), and the curves are far, far too smooth and regular.
rgb
rgb:
Nicely said. I particularly like your unequivocal and apt phrasing:
“IMO the claims for precision/accuracy in the remote past are absurd — HadCRUT4 claims total error in 1850 is only around 0.4 C for the global anomaly, twice that of 2015, for example. This is absurd, given the importance of things like Pacific Ocean temperatures in any estimate of global temperature or its anomaly and given the simple fact that ENSO was only discovered, named, and subsequently studied in roughly 1893. ”
It all seems to me to be so much magical thinking, false precision and hubris. Even to talk about error bars with such a level of uncertainty and lack of knowledge strikes me as absurd.
1993.
===
“This is absurd, given the importance of things like Pacific Ocean temperatures in any estimate of global temperature or its anomaly and given the simple fact that ENSO was only discovered, named, and subsequently studied in roughly 1893.”
////////////////////////////////////
I agree with the point you make regarding past error bars. The reality is that we have no good information on GLOBAL temperatures prior to the 1930s, and ocean temperatures are riddled with errors and, prior to ARGO, extremely unreliable.
I frequently make the point that we do not know whether, on a global basis, temperatures are warmer today than they were in the 1880s or the 1930s, but as far as the US is concerned it was probably warmer in the 1930s than today. That is the extent of our knowledge.
Whilst I accept that ocean phenomena were beginning to be studied in the late 19th and early 20th centuries, I consider that the recognition and study of ENSO came a little later than you are suggesting. See http://www.earthgauge.net/wp-content/fact_sheets/CF_ENSO.pdf
“Now well known to scientists, the El Niño-Southern Oscillation (ENSO) was discovered in stages. The term El Niño (“the infant” in Spanish) was likely coined in the 19th century by Peruvian fishermen who noticed the appearance of a warm current of water every few years around Christmas. The cause of the current’s appearance was a mystery to them. In 1899, India experienced a severe drought-related famine, prompting greater focus on understanding the Indian monsoon system, arguably the nation’s most important source of water. In the early 1900’s, the British Mathematician Sir Gilbert Walker noticed a statistical correlation between the monsoon’s behavior and semiregular variation in atmospheric pressure over the tropical Pacific. He coined this variation the Southern Oscillation, defined as the periodic shift in atmospheric pressure differences between Tahiti (in the southeastern Pacific) and Darwin, Australia (near Indonesia). It was not until 1969, however, that meteorologist and early numerical weather modeler Jacob Bjerknes proposed that the El Niño phenomenon off the coast of South America and the Southern Oscillation were linked through a circulation system that he termed the Walker circulation. ENSO has since become recognized as the strongest and most ubiquitous source of inter-annual climate variability.”
The North Atlantic Ocean’s temperature oscillations, past, present and future, are closely related to events in the Arctic.
http://www.vukcevic.talktalk.net/NAII.gif
NA ice index: http://www.essc.psu.edu/essc_web/seminars/spring2006/Mar1/Bond%20et%20al%202001.pdf
Arctic GMF : http://www.gfz-potsdam.de/fileadmin/gfz/sec23/data/Models/CALSxK/cals7k2.zip
The global warming propheteers (or profiteers) leave out one major, significant fact, which this “new research” also ignores.
Steinman says, “It appears as though internal variability has offset warming over the last 15 or so years,”
However, if natural “internal variability” has caused “the pause” for the past 15 years, how do we know that natural “internal variability” didn’t contribute to the warming over the previous 15 years leading up to 1998?
That’s the fallacy in the models and in attributing the slight warming leading up to 1998 to a less-than-1/100th-of-1% increase in the CO2 level in the overall makeup of the atmosphere. If the temporary cooling has other causes, then the temporary warming up until then could also have other causes.
By the way, where are the false promises of Global Warming this winter? I have lived in Michigan for 22 years now, and I thought last winter was cold, but this year has been even more brutal.
Dell from Michigan
Further to my earlier comment
http://wattsupwiththat.com/2015/02/26/on-steinman-et-al-2015-michael-mann-and-company-redefine-multidecadal-variability-and-wind-up-illustrating-climate-model-failings/#comment-1870168
This paper adds absolutely nothing to our understanding of climate science and indeed perpetuates the grossest error of the models on which the IPCC CAGW scare is based. All they do is take a very roundabout route to find the 60-year (+/-) periodicity in the Hadley temperature data, which anyone can see at a glance:
http://3.bp.blogspot.com/-fsZYBCaAYRo/U9aXzNnfWJI/AAAAAAAAAVc/CfFP12Oh438/s1600/HadSST314.jpg
The Hadley peaks are obvious at about 1880, 1940 and 2000. They finally massage their data to produce more or less the same peaks in the red line in their Fig 3C (see Joelobryan’s comment above, 2/26, 9:14 pm).
The same periodicity is seen in the GISS data – their Fig 3A.
They then attribute these temperature periodicities to “natural internal variability” in the ocean systems, as if this advances our understanding by giving the model-versus-reality differences another name. Nowhere do they suggest what is driving these variabilities.
As to the future, they simply say the internal-variability-derived cooling trend will reverse in the coming decades. Looking at their own curves, it is easy to draw the conclusion that, with the last peak at about 2000, they would expect cooling until 2030 and renewed warming until 2060. Presumably that would be a modulation of the underlying model linear increase, which they would attribute to CO2. However, they perpetuate the modellers’ scientific disaster by ignoring, in all their estimates and attributions, the 1000-year periodicity so obvious in the temperature data; see Figs 5-9 and the cooling forecasts at
http://climatesense-norpag.blogspot.com/2014/07/climate-forecasting-methods-and-cooling.html
Again, the model procedure is exactly like taking the temperature trend from, say, January to July and then projecting it forward linearly for 10 years or so. Junk science at its worst.
This says it all. It basically says we test and prove our climate models by running more climate models.
I worked in the insurance sector for a while as an IT program manager. If a development team told me they planned on validating results by comparing individual policy processing with more software they were going to develop, I would know it was time to re-constitute the team. The only way to validate results is to compare against verified results, which meant either someone who understood the required policy processing backwards and forwards or an older system that had a proven track record over multiple years.
In the case of climate models this would mean verifying results against past climate, instead of past climate models. Climate models have re-written the rules of validation, and now the known or observed is less important than model output.
Which means bookies everywhere take note, the winner of the next international soccer championship is the team predicted by consensus and statistical modeling to win, not the team who actually wins.
All forward modelling will inevitably end up wrong, since two important variables (mentioned in the comment above), and there are many others, are unpredictable. Furthermore, even if one can predict their intensity, the degree and effect of their interaction can be determined only after the event.
To paraphrase Steven Mosher: Climate models are “un needed”.
Vukcevic
I would appreciate any comments you may have on my previous comments (see Norman Page at 2/27, 8:45) and on the cooling forecasts at the blog post linked earlier.
Hi Dr. Page
I often read your comments and occasionally look at your blog, but normally do not comment outside of my ‘comfort zone’.
I just posted this elsewhere:
“Both 10Be and C14 nucleation are strongly modulated by the Earth’s field. Pre-instrumental paleo-magnetic data go back ‘millions’ of years, but dating is not particularly accurate, + or – 50 years/millennium (usually carbon dated, a circular judgment!).
Declination/inclination compass readings go back to 1600, magnetometer data to 1840. Magnetometer-obtained data show that the Earth’s field, besides its own independent variability, has a strong 22-year component, much stronger than the heliospheric magnetic field at the Earth’s orbit (implying a common driving force?!). For the above reasons, all estimates of solar activity pre-1600 (sunspot count availability) cannot be taken with any degree of certainty.”
I have in the past occasionally commented on the reliability of the 10Be data; it is an opinion which can be taken into account, though as it happens it is mostly ignored.
regards, mv.
Not long ago it was claimed that the proof of AGW was that human CO2 was the only explanation for the difference between the models and actual temperature, because all other natural factors had been accounted for. Now they’re claiming that there ARE other factors. This new claim by Steinman et al. means their original ‘evidence’ for AGW was wrong.
Vukcevic, thanks for your reply. I always follow with interest your posts and comments. I agree entirely with your correlation of the detrended NH temperatures with the geo-solar cycle. This clearly shows the 60-year (+/-) periodicity seen in the Hadley temperature data and is obviously commensurable with the Saturn/Jupiter lap: 3 x 19.859 = 59.577. This too is commensurable with the 960 (+/-) year cycle: 16 x 59.577 = 953.232, which also equals the USJ lap.
Of course Leif would do his nut at the mention of such “correlations”, but it points to the place where solar physicists should think about possible connecting processes. I suggest torque and torsion at the tachocline for openers, although I have an uneasy feeling that electromagnetic effects may also be involved.
All I do in my cooling forecasts is simply say that the underlying temperature trend, detrended out in your graph, is obviously part of the 350-year uptrend of the 960 (+/-) year periodicity seen in the temperature data – see Figs 5-9 at
http://climatesense-norpag.blogspot.com/2014/07/climate-forecasting-methods-and-cooling.html
By projecting the 960-year cycle forward from its current peak, climate forecasting appears to me to be reasonably simple and obvious, at least as far as getting into the ballpark is concerned.
The entire IPCC – modeling approach on which the whole UNFCCC circus is based is simply an example of the academic herd instinct and scientific incompetence to the point of stupidity and an unwillingness to see and use the obvious as the first approach to problems.
Where is your comment on the reliability of the 10Be data? Regards Norman.
My comments on WUWT are often linked to THIS short article (you could google the link).
The following illustration shows that some of the long-term solar components based on 10Be data are also found in the Earth’s magnetic field variability:
http://www.vukcevic.talktalk.net/Stein1-Vuk.gif
Here is a Google selection:
https://www.google.co.uk/search?esrch=Agad%3A%3APublic&hl=en-GB&source=hp&q=CET%2610Be.htm&gbv=2&oq=CET%2610Be.htm&gs_l=heirloom-hp.3…67513.67513.1.67722.1.1.0.0.0.0.56.56.1.1.0.msedr…0…1ac.1.34.heirloom-hp..2.0.0.i_L5bkRvhgQ
Steinhilber used Dongge cave (China) to re-affirm the accuracy of his TSI reconstruction’s long-term periodicity, but I found that the geopolar magnetic field data (http://www.gfz-potsdam.de/fileadmin/gfz/sec23/data/Models/CALSxK/cals7k2.zip) are a more accurate representation than the Steinhilber TSI.
http://www.vukcevic.talktalk.net/Dongge.gif
In my view it is difficult to disentangle what portion of the 10Be is modulated by solar and what by the Earth’s magnetic field.
Fantastic – I see my favorite (just under) 1000-year periodicity stands out prominently. I will certainly use this graph in future posts if you don’t mind (with proper attribution, of course).
You are welcome to it, but the attribution may not do you much good.