By Andy May
In my last post, a commenter who calls himself "nyolci" suggested that Michael Mann's 2008 reconstruction (Mann et al., 2008) was similar to Moberg's 2005 (Moberg, Sonechkin, Holmgren, Datsenko, & Karlen, 2005) and Christiansen's 2011/2012 reconstructions. He presents a quote, in this comment, from Christiansen's co-author, Fredrik Charpentier Ljungqvist:
“Our temperature reconstruction agrees well with the reconstructions by Moberg et al. (2005) and Mann et al. (2008) with regard to the amplitude of the variability as well as the timing of warm and cold periods, except for the period c. AD 300–800, despite significant differences in both data coverage and methodology.” (Ljungqvist, 2010).
A quick Google search uncovers this quote in a paper by Ljungqvist in 2010 (Ljungqvist, 2010), one year before the critical reconstruction by Christiansen and Ljungqvist in 2011 (Christiansen & Ljungqvist, 2011) and two years before their 2012 paper (Christiansen & Ljungqvist, 2012). It turns out that Ljungqvist's 2010 reconstruction is quite different from those he did with Christiansen over the next two years. All the reconstructions are of the Northern Hemisphere. Ljungqvist's and Christiansen's cover the extra-tropical (>30°N) Northern Hemisphere, while Moberg's and Mann's are supposed to cover the whole Northern Hemisphere, but the big differences lie in the methods used.
With regard to the area covered, Moberg only has one proxy south of 30°N. Mann uses more proxies, but very few of his Northern Hemisphere proxies are south of 30°N. Figure 1 shows all the reconstructions as anomalies from the 1902-1973 average.

Figure 1. A comparison of all four reconstructions. All are smoothed with 50-year moving averages, except for the Ljungqvist (2010) reconstruction, which is a decadal record. All have been shifted to a common baseline (1902-1973) to make them easier to compare.
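For readers who want to reproduce this sort of comparison, the two operations in the caption (shifting to a common 1902-1973 baseline and applying a 50-year moving average) are straightforward. Below is a minimal sketch in Python; the file name and column layout are hypothetical placeholders, not the actual data files behind Figure 1.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical input: one column per reconstruction, indexed by year.
recs = pd.read_csv("reconstructions.csv", index_col="year")

# Shift each reconstruction to a common 1902-1973 baseline by subtracting
# its mean over that period, so the records plot as comparable anomalies.
anoms = recs - recs.loc[1902:1973].mean()

# Centered 50-year moving average; a decadal record such as Ljungqvist (2010)
# would simply be plotted as-is.
smoothed = anoms.rolling(window=50, center=True, min_periods=25).mean()

ax = smoothed.plot()
ax.set_ylabel("Temperature anomaly (°C)")
plt.show()
```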
As Figure 1 shows, the original Ljungqvist (2010) record is similar to Mann (2008) and Moberg (2005). A couple of years after publishing Ljungqvist (2010), Ljungqvist collaborated with Bo Christiansen and produced the record labeled Christiansen (2012). It starts with the same proxies as Ljungqvist (2010) but uses a different method, which they call "LOC," to combine the proxies into a temperature record.
In 2008, Michael Mann created several different proxy records; the one plotted in Figure 1 is the Northern Hemisphere EIV Land and Ocean record. EIV stands for "errors-in-variables," a total least squares regression methodology. Mann states at the beginning of his paper that he would address the criticisms ("suggestions") in the 2006 National Research Council report (National Research Council, 2006). The result is a complex and hard-to-follow discussion of various statistical techniques applied to various combinations of proxies. He doesn't have one result but many, which he then compares to one another.
Moberg (2005) also uses regression to combine his proxies but characterizes them by resolution to preserve more short-term variability. The statistical technique used by Ljungqvist in his 2010 paper is similar and is called "composite-plus-scale," or CPS. Mann also discusses this technique in his 2008 paper and found that it produced results similar to those of his EIV technique. Since these three records were created using similar methods, they all agree quite well.
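To make the term concrete, the sketch below is a toy version of composite-plus-scale: standardize each proxy, average them into a composite, then rescale the composite so its mean and variance match the instrumental target over a calibration window. It uses synthetic data and an assumed 1902-1973 calibration period; it is not the code from any of the papers discussed here.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1000, 2001)

# Synthetic "true" temperature and eight noisy proxies of it (toy data only).
true_temp = np.cumsum(rng.normal(0, 0.02, years.size))
proxies = np.array([true_temp + rng.normal(0, 0.3, years.size) for _ in range(8)])

# Composite: standardize each proxy, then average across proxies.
std_proxies = (proxies - proxies.mean(axis=1, keepdims=True)) / proxies.std(axis=1, keepdims=True)
composite = std_proxies.mean(axis=0)

# Scale: match the mean and variance of the instrumental target over the
# calibration window (assumed here to be 1902-1973).
cal = (years >= 1902) & (years <= 1973)
instrumental = true_temp[cal] + rng.normal(0, 0.05, cal.sum())
scaled = (composite - composite[cal].mean()) / composite[cal].std()
reconstruction = scaled * instrumental.std() + instrumental.mean()
```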
Christiansen and Ljungqvist (2011 and 2012)
It is widely acknowledged that using regression-type methods to combine multiple proxies into one temperature reconstruction reduces the temporal resolution of the resulting record and dampens its variability. Instrumental (thermometer) measurements are normally accurately dated, at least down to a day or two. Proxy dates are much less accurate; many of them are not even known to the year. Those that are accurate to a year often reflect only the temperature during the growing season, during winter, or during the flood season. Ljungqvist's 2010 record is only decadal due to these problems.
Inaccurate dates, no matter how carefully they are handled, lead to mismatches when combining proxy records and result in unintentional smoothing and dampening of high-frequency variability. The regression process itself also dampens low-frequency variability. Christiansen and Ljungqvist write:
“[Their] reconstruction is performed with a novel method designed to avoid the underestimation of low-frequency variability that has been a general problem for regression-based reconstruction methods.”
Christiansen and Ljungqvist devote a lot of their paper to explaining how regression-based proxy reconstructions, like the three shown in Figure 1, underestimate low-frequency variability by 20% to 50%. They list many papers that discuss this problem. These reconstructions cannot be used to compare current warming to the pre-industrial era. The century-scale detail prior to 1850 simply isn't there after regression is used. Regression reduces statistical error, but at the expense of blurring critical details. Therefore, Mann's splicing of instrumental temperatures onto his record in Figure 1 makes no sense. You might as well splice a satellite photo onto a six-year-old child's hand-drawn map of a town.
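The variance loss is easy to demonstrate with a toy example. When temperature is regressed on a noisy proxy, the fitted values have variance r² times the variance of the temperature, so the reconstructed amplitude shrinks as the proxy-temperature correlation drops. The sketch below uses synthetic data and is only an illustration of that attenuation, not a reproduction of any calculation in the papers discussed here.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Synthetic temperature series with real low-frequency variability (toy data).
temp = np.cumsum(rng.normal(0, 0.03, n))
proxy = temp + rng.normal(0, np.std(temp), n)   # noisy proxy, r roughly 0.7

# Direct regression: predict temperature from the proxy.
slope, intercept = np.polyfit(proxy, temp, 1)
recon = slope * proxy + intercept

r = np.corrcoef(proxy, temp)[0, 1]
print(f"correlation r              = {r:.2f}")
print(f"variance of temperature    = {np.var(temp):.3f}")
print(f"variance of reconstruction = {np.var(recon):.3f}")   # about r**2 of the above
```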
Christiansen and Ljungqvist make sure all their proxies have a good correlation with the local instrumental temperatures. About half their proxies are sampled annually and half decadally. The proxies that correlate well with local (to the proxy) temperatures are then regressed against the local instrumental temperature record. That is, the local temperature is the independent variable, or the "measurements." The next step is to simply average the local reconstructed temperatures to get the extratropical Northern Hemisphere mean. Thus, only minimal and necessary regression is used, so as not to blur the resulting reconstruction.
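Here is a minimal sketch of that workflow as described above: each proxy is screened by its correlation with local instrumental temperature, regressed on that temperature (temperature as the independent variable), the fit is inverted to give a local temperature reconstruction, and the local reconstructions are simply averaged. The data, the screening threshold, and the calibration window below are all invented for illustration; this is not Christiansen and Ljungqvist's actual code.

```python
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1000, 2001)
cal = (years >= 1880) & (years <= 1973)           # assumed calibration window

local_recons = []
for _ in range(10):                                # ten hypothetical proxy sites
    local_temp = np.cumsum(rng.normal(0, 0.03, years.size))
    proxy = 2.0 * local_temp + rng.normal(0, 0.5, years.size)

    # Screen: keep only proxies that correlate well with local temperature
    # over the calibration period.
    if abs(np.corrcoef(proxy[cal], local_temp[cal])[0, 1]) < 0.4:
        continue

    # Regress the proxy ON temperature (temperature is the independent
    # variable), then invert the fit to reconstruct local temperature.
    b, a = np.polyfit(local_temp[cal], proxy[cal], 1)
    local_recons.append((proxy - a) / b)

# Extratropical NH mean: a simple average of the local reconstructions.
nh_mean = np.mean(local_recons, axis=0)
```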
Discussion
Regression does reduce the statistical error in the predicted variable, but it reduces variability significantly, by up to 50%. So, using regression to build a proxy temperature record to "prove" that recent instrumentally measured warming is anomalous is disingenuous. The smoothed regression-based records in Figure 1 show Medieval Warm Period (MWP) to Little Ice Age (LIA) cooling of about 0.8°C; after correcting for the smoothing due to regression, this is more likely to be 1°C to 1.6°C, or more. There is additional high-frequency smoothing, or dampening, of the reconstruction due to poorly dated proxies.
The more cleverly constructed Christiansen and Ljungqvist record (smoothed) shows a 1.7°C change, which is more in line with historical records, borehole temperature data, and glacial advance and retreat data. See the paper by Soon and colleagues for a discussion of the evidence (Soon, Baliunas, Idso, Idso, & Legates, 2003b). Christiansen and Ljungqvist stay much closer to the data in their analysis to avoid distorting it, which makes it easier to interpret. Figure 2 shows the same Christiansen and Ljungqvist 2012 curve shown in black in Figure 1 and the yearly Northern Hemisphere averages.



Figure 2. Christiansen and Ljungqvist 2012 50-year smoothed reconstruction and the one-year reconstruction. The black line is the same as in Figure 1, but the scale is different.
The one-year reconstruction is the fine gray line in Figure 2. It is a simple average of Northern Hemisphere values and is unaffected by regression; thus it is as close to the data as possible. Maximum variability is retained. Notice how fast temperatures vary from year to year, sometimes by over two degrees in a single year; 542 AD is an example. From 976 AD to 990 AD, temperatures rose 1.6°C. These are proxies and not precisely dated, so the values are not exact and should be taken with a grain of salt, but they do show us what the data say, because they are minimally processed averages.
The full range of yearly average temperatures over the 2,000 years shown is 4.5°C. The full range of values with the 50-year smoothing is 1.7°C. Given that nearly half of the proxies used are decadal, and linearly interpolated to one-year values, I trust the 50-year smoothed record more than the yearly record over the long term. But seeing the variability in the one-year record is illuminating; it reinforces the foolishness of comparing modern yearly data to ancient proxies. Modern statistical methods and computers are useful, but sometimes they take us too far away from the data and lead to misinterpretations. I think that often happens with paleo-temperature reconstructions. Perhaps with modern temperature records as well.
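The range compression described here is easy to check on any annual series: the spread of the raw yearly values is always at least as large as the spread after a 50-year moving average, and usually much larger. A quick sketch with synthetic numbers (not the C&L proxy values themselves):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic annual series: a slow drift plus large year-to-year noise.
annual = np.cumsum(rng.normal(0, 0.05, 2000)) + rng.normal(0, 0.6, 2000)

# Centered 50-year moving average.
smoothed = np.convolve(annual, np.ones(50) / 50, mode="valid")

print(f"full range of yearly values : {annual.max() - annual.min():.2f} °C")
print(f"full range after 50-yr mean : {smoothed.max() - smoothed.min():.2f} °C")
print(f"largest year-to-year change : {np.abs(np.diff(annual)).max():.2f} °C")
```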
It is quite possible that we will never know whether past climatic warming events were faster than the current warming rate. The high-quality data needed doesn't exist. What we do know for sure is that regression methods, all regression methods, significantly reduce low-frequency variability. Mixing proxies with varying resolutions and imprecise dates, using regression, destroys high-frequency variability. Comparing a proxy record to the modern instrumental record tells us nothing. Figure 1 shows how important the statistical methods used are; they are the key difference among those records. They all have access to the same data.
Download the bibliography here.
There is also Loehle's reconstruction, which correlates well with the Arctic's magnetic field change
http://www.vukcevic.co.uk/LLa.htm
That truly is a remarkable thing! One bit tho' that seems odd: at least as the graphs present it, it seems that temperature change leads magnetic field change. I would have thought it would be the other way around: temperature Δ in response to stratospheric and upper-tropospheric galactic high-energy cosmic ray nuclei, as moderated by the magnetic field intensity.
See what I mean? It is kind of a big-hard-stretch to expect that Earth’s magnetic field would modulate in correlation to the mean global temperature. Like ‘why?’
Yours,
GoatGuy
Hi GG
It is unlikely that the temperature variability would affect the Earth's MF, while there is a possibility, or even probability, of the other way around.
I don’t think that either the global temperature or the magnetic field reconstructions before mid 1800’s are particularly accurate. Introducing a 10 year delay in the temperature’s ‘response’ and changing delta-t to 22 years (for reason that the Earth’s magnetic field for some unknown reason has strong Hale cycle spectral component, see inset in the link below) the inconsistency disappears, but a much longer ‘delay’ is apparent in coming out of the LIA.
Using more accurate global data from the 1870s to the present results in a slightly lower R^2.
http://www.vukcevic.co.uk/CT4-GMF.htm
(Link to the post 1870 magnetic data is included)
Would it be possible that the sun's MF modulates Earth's magnetic field as well as temperatures, and that may be why Earth's MF and temperatures appear to be in sync? It also seems somewhat coincidental to me that the magnetic north pole is moving so quickly during the time of a grand solar minimum.
Could it be somewhat like the induction or RF heating of a large steel ball bearing? The air around the steel ball bearing will start warming up as soon as the induction coil is energized and it could take a few more minutes to heat up the mass of steel.
It could also be that the Sun is influencing the movement of the magnetic pole and thus changing the location of the void around the pole. The void could move much faster than the mass of magnetic field in the Earth.
I think similar thoughts every time I watch induction coils heat metals.
Thanks for the graphs.
Why do you use HadCruft4? Land+sea averages are physically meaningless. I suggest using an SST record.
As I pointed out years ago, HadSST3 bucket fiddling removed the majority of the variability from the majority of the record. In particular the early 20th c. rise.
https://judithcurry.com/2012/03/15/on-the-adjustments-to-the-hadsst3-data-set-2/
Hi Greg
Thanks for the comment; I am aware of some of the points you made. Most of the temperature data available follow a similar up-down trajectory. In this particular case it is important to note the degree of correlation. If and when a sustainable causation hypothesis can be formulated, it may be worth looking into the choice of data sources.
Thank you Andy – I’ve enjoyed your many articles and posts.
I’ve studied climate science since ~1985 and published since 2002. Climate is a fascinating subject and I’m pleased with my conclusions to date and those of others who publish here on wattsup and on several other sites.
However, I may have written my last article, although I’ll probably continue to comment on blogs. My reasons are:
1. I think I've sorted most of the major climate-and-energy technical issues in my papers published from 2002 to 2020.
2. It has been obvious for decades that there is no real climate crisis.
3. It has been obvious for more than a decade that the alleged climate crisis is not just false – it's a scam.
4. The technical issues are no longer the main event – the greatest need today is to fight the scam – the phony linking of the Climate-and-Covid false crises, and the fraudsters' full-Marxist solution – the "Great Reset" – aka "Live like a Chinese serf, under the heel of a dictator".
Best personal regards, Allan
Thanks, Andy. A good presentation of data in the act of being tortured by some and analyzed by others. Your fig. 1 is the key for me; it shows 2,000 years of declining temperature interrupted by two warming events (at 2,000 years ago and in modern times). This is in keeping with sea level variance as I understand it, and is likely the actual situation, i.e., the Earth is sliding downward (in temperature) towards another glacial cycle in this Ice Age we live in. Let's get as much CO2 into the atmosphere as we can; it is a buffer for humanity.
A modern warming that seems to be well within ‘adjustments’, without error bars.
PETER FOSTER: SUSTAINABLE NEWSPEAK BY 2050
Like the word ‘social,’ ‘sustainable’ tends to vitiate or reverse the meaning of words to which it is attached. Thus ‘sustainable’ development is development retarded by top-down control
Peter Foster, Jan 05, 2021
https://financialpost.com/opinion/peter-foster-sustainable-newspeak-by-2050?mc_cid=24866edf09&mc_eid=da89067c4f
And for readers like nyolci who seem to get inspiration from Mannian climate performances, the $2 Stores are having a special this month on “100 Home Magic Tricks To Amaze All Your Friends And Family”
The first trick in the package is that they market it through the $2 Stores, but get you to pay $3 for it.
Is it just me, or are the images missing?
I can see 3 images.
Dave, Did that fix it?
They’re visible now. Since the format change, it seems images are displaying differently.
Could be my mistake. I referenced the images on my web site, to save space. The change I made was to put a copy on WUWT, then you could see them. I was trying to save space and time, but that never works.
Once again, if the y-axis were scaled honestly these foreboding lines would vanish.
See attached example.
I long ago concluded that most of the paleoclimate stuff is not fit for purpose. This includes Mann 2008 and Marcott 2013. The reason is basic. We know the GAST rise from ~1920-1945 is virtually indistinguishable from that of ~1975-2000. Yet even IPCC AR4 said the former period was mostly natural; there simply was not enough rise in CO2. This raises the attribution problem for the latter period.
In order for paleoclimate to shed light on such matters, it needs something of an equivalent resolution. Such resolution simply is not there, either in the underlying proxies or in their statistical synthesis into a paleoclimate guesstimate. The essay "Cause and Effect" in the ebook Blowing Smoke deconstructs an example using Shakun's 2012 paper. In the end, he produced an absurd statistical hash, provable from the SI.
The best solution for an exercise in futility is to simply stop. But that would mean no more grants.
Rud says; “I long ago concluded that most of the paleoclimate stuff is not fit for purpose.”
I believe there was a Climategate email that essentially said the same thing in far more colorful language.
Thanks for another excellent article Andy – very clear presentation. If I understand correctly, the choice of baseline is of no consequence to the relative comparison of the reconstructions, but I'm still left curious why you picked 1902-1973?
MJB,
I don’t remember exactly. The first reconstruction I started with had that baseline, so as I added additional reconstructions to the spreadsheet I changed them to match. I show only 4 here, but I have a lot more.
All you need to know about Mann08, and all the others mentioned in this article, can be found here at Climate Audit – Mann 2008
Yes!!
Some links at Climate Audit appear to be broken.
CORRECTIONS TO THE MANN et. al. (1998) PROXY DATA BASE AND NORTHERN HEMISPHERIC AVERAGE TEMPERATURE SERIES (McIntyre and McKitrick 2003) can be found at https://climateaudit.files.wordpress.com/2005/09/mcintyre.mckitrick.2003.pdf.
The abstract (emphasis added):
HOCKEY STICKS, PRINCIPAL COMPONENTS, AND SPURIOUS SIGNIFICANCE (McIntyre and McKitrick GRL 2005) can be found at http://www.climateaudit.info/pdf/mcintyre.mckitrick.2005.grl.pdf.
From the conclusions (emphasis added):
THE M&M CRITIQUE OF THE MBH98 NORTHERN HEMISPHERE CLIMATE INDEX: UPDATE AND IMPLICATIONS (McIntyre and McKitrick EE 2005) can be found at https://climateaudit.files.wordpress.com/2009/12/mcintyre-ee-2005.pdf
The abstract (emphasis added):
I would like to add that proxy reconstructions suffer from a significant flaw that makes calculating accurate confidence limits very difficult if not impossible. Typically a proxy is “calibrated” by comparison to the instrumental record. The problem with this is that the instrumental record only covers typically 5 to 10% of the range of the proxy. Thus the calibration of the “instrument” (the treemometer, in the case of tree rings), which determines its accuracy and precision, is only valid over a very small part of the range over which the instrument is “measuring” whatever it is measuring. There is no scientific basis for the assumption that this calibration is valid over the rest of the range nor is there any scientific basis for the assumption that the uncertainty measured as part of the calibration is an accurate estimate of the uncertainty over the rest of the range. Normally, the use of instrumentation outside its calibration range is considered unscientific. There is no cure for this shortcoming.
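One quick way to see the scale of the problem is to compute what fraction of a proxy's full range is actually exercised during the instrumental calibration window; everything outside that span is extrapolation beyond the calibration. The sketch below uses a synthetic proxy and an assumed post-1880 overlap purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
years = np.arange(1, 2001)
proxy = np.cumsum(rng.normal(0, 0.05, years.size))   # hypothetical proxy series

cal = years >= 1880                                   # assumed instrumental overlap
cal_span = proxy[cal].max() - proxy[cal].min()
full_span = proxy.max() - proxy.min()

print(f"calibration window spans {100 * cal_span / full_span:.0f}% of the "
      "proxy's full range; the rest of the reconstruction is extrapolation")
```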
Very nice review, Phil, of Mike Mann's lies and distortions. Mann is an outright fraud on science, and Science not only doesn't care, it cheerleads the deception and calls it "follow the science."
Mann is holding the entire coterie of "climate change" cheer leaders for ransom. If they throw him under the bus for his lack of sound science, and outright fraud, they're all lost. It's that word "unprecedented" that will do them in. It's like their use of "consensus". The first is patently untrue, based on massive amounts of evidence and thorough falsification of the "hockey stick". Consensus is just a logical fallacy … two, actually: appeal to popularity and appeal to authority.
“Mann is holding the entire coterie of “climate change” cheer leaders for ransom. If they throw him under the bus for his lack of sound science, and outright fraud, they’re all lost.”
Good point. This is another impetus to keep the scam going.
Excellent point Phil.
Excellent post, Phil. Thanks for the details and links.
The original MBH graph compared to a corrected version produced by McIntyre and McKitrick after undoing Mann's errors.
Historical review:
https://rclutz.wordpress.com/2018/03/11/rise-and-fall-of-the-modern-warming-spike/
Andy, a good article. Regressions and averages both hide and smooth over critical knowledge of the data. Variability is often reduced to nothing by averaging and then smoothing.
I must disagree somewhat with one of your statements: "Regression does reduce the statistical error in the predicted variable, but it reduces variability significantly, up to 50%." Regression can cause errors of its own that are statistical in nature. These are time series, and one must be conscious of the assumptions when combining different series. Stationarity is one assumption that is often overlooked. Read the following on regressions for a little introduction.
https://www.stat.berkeley.edu/~stark/SticiGui/Text/regressionErrors.htm
Jim, I did not find it in your link, but I’m familiar with stationarity as a concept. I guess you mean that the reason temperatures change today may not be the same as reasons in the past. True enough. It is also very true that the relationship between the proxies and temperature may not be the same today as in the past. This has been shown to be true for tree rings for example.
So I agree with you. The assumption made when using regression is that the mechanism connecting Y to X always stays the same. This assumption is probably not true with temperature proxies over the last 2,000 years.
I would rather say that if the relationship between the proxies and temperature does not hold true today – such as with tree rings – it is a falsification of the proxy being a true record of the temperature. The instrumental record is so short that a brief correlation does not establish it as a proxy. Only a full correlation with all available data would indicate that it may be a true proxy.
It’s also the fact that individual proxies must have the same statistical parameters, i.e., variance, means, etc. There are ways to handle these if they are different but I’ll bet a dime to a donut that this was not done. Time series simply can’t be averaged or a regression done if the series don’t match.
Here is a copy if you wish to check: https://cp.copernicus.org/articles/8/765/2012/cp-8-765-2012.pdf
I would be more confident if they didn’t use pseudo-proxy methods.
You will note that B. Christiansen, F. C. Ljungqvist list multiple tree ring proxies in their Table 1.
Meaning they treat dendrochronology as accurate for temperature reconstructions.
ATheok,
Some tree rings may be OK. I wouldn’t reject all of them out of hand. The problem with Mann’s methodology is that he accepted tree ring records that did not correlate with temperature. This was especially true with some North American tree ring series.
“ensemble pseudo-proxy methods that both estimates” immediately tells me that they used multiple wrong trends and hoped to get a correct answer. There are mathematical methods available for combining trends without stationarity. Why didn't they use those? Stock analysts use them all the time to investigate trends of different stocks to get a degree of accuracy in what the combination may do.
I really wish more attention would be paid to the uncertainty of the temperature record. Take the interval between 1600 and 1900 in Figure 2. Most of the temperature averages are between -0.5C and -1.5C. That is within the "nominal" uncertainty range of +/- 0.5C. Thus, while the 50-year trace shows an uptick from 1600 to 1900, it could actually have been a downtick. You simply don't know because of the uncertainty in the record.
Even worse, if those are annual averages then the uncertainty is guaranteed to be more than +/- 0.5C.
All those data lines should be made with a magic marker the width of 1C. I’ve attached a graphic showing the use of a 1C wide pencil to follow the temperatures in Fig 2. If you understand that your “temperature” line can be anywhere in the black area you can darn near draw a horizontal line from year 0 to year 2000 with only small dips at 1300 and 1600. If you increase the width of the uncertainty to +/- .8C (absolutely not unreasonable) then you *can* draw that horizontal line at 0C.
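For what it's worth, the kind of graphic described above (a reconstruction traced with a 1 °C wide "pencil") is easy to sketch: plot the series and shade a band of the assumed half-width around it. The series below is synthetic and the ±0.5 °C half-width is the commenter's assumed value, not a published uncertainty estimate.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(5)
years = np.arange(1, 2001)
recon = np.cumsum(rng.normal(0, 0.02, years.size))    # stand-in reconstruction
half_width = 0.5                                       # assumed uncertainty (°C)

plt.plot(years, recon, color="black", lw=1)
plt.fill_between(years, recon - half_width, recon + half_width,
                 color="gray", alpha=0.5)              # the 1 °C wide "pencil"
plt.xlabel("Year")
plt.ylabel("Temperature anomaly (°C)")
plt.show()
```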
You simply can *not* assume that a stated or calculated quantity is 100% accurate. That should violate all rules of physical science – but apparently not for climate science.
Whoosh!
I’ve asked this layman’s question a number of times in various forums, but have never received a rational answer, viz –
if the proxy temps after 1960-something were found to show a decline that needed hiding by being supplanted by actual thermometer readings, then what’s to say that the previous centuries’ proxy temps were not also equally inaccurate?
Or the decline was real and called the Ice Age scare. And, like the real LIA and the MWP, it made fearsome, even fatal, wounds to their "CO2 did it" theory.
Warmunists are doing their best to expunge the Great Ice Age Scare of the 1960s-1970s from the historical record. They must have their noses rubbed in it at every opportunity.
Mr., This point was driven home repeatedly in the McIntyre and McKitrick papers, the National Academy of Sciences Report and the Wegman Report.
Even Keith Briffa, Mann's friend, made the point in AR4:
Tim, none of those proxies have any distinct physical relationship to temperature. Labeling the y-axis as ΔT (°C) is a lie.
It’s not Andy May’s lie. It’s Michael Mann’s lie and the rest of those folks. Consciously honorable (with an exception), lying by reflexive acceptance of prior pseudo-art.
The same conclusion of physical meaninglessness is obvious, too, from the fact that the so-called proxy ( 🤮 ) trends vary with the statistical method. Physical results are not variable with subjective choices of statistical methods.
Pat, good comment, I agree. See the nearby comment on stationarity.
This “nominal uncertainty range of +/- .5C” is an artifact of your imagination. By the way, it was Jim who was bullshitting about this, I think you accidentally switched roles.
No one assumed this. They calculated a 95% confidence interval. It’s explained in the article.
Is it your favorite stick horse of “uncertainty is increased by averaging”? If so, lemme quote Jim: “Variability is often reduced to nothing by averaging and then smoothing.” Now it is very hard to argue that uncertainty and variability are very-very different things. NB. C&L2012 used annual or decadal values.
Assuming 0.5 is quite generous, nyolci. The true error is larger. People use Monte Carlo to estimate error and assume that is all the error. Sorry, there is still systemic error. It is unseen with Monte Carlo estimates, but often larger.
This is my main point. We use statistical methods to estimate values and error, then make the mistake of believing the results. Most errors are systemic and unknown, this is especially true with temperature reconstructions. See the nearby discussion of stationarity.
All regression methods assume that the mechanisms changing “y” don’t change, but they do.
Proclamations, as usual. In science you prove things. This guy just asserts an arbitrary value. You say it’s even more.
This bloke (and I) was specifically talking about uncertainty, not systemic errors. This debate between us has a history at other posts. Anyway, you again proclaim something without any proof. How on earth do you know there’s still systemic error? You have that strange feeling in your guts?
Now I’d pull my usual methodology and appeal to authority by pointing out that you are NOT authority in this field. Neither Jim (who is confusingly very much like Tim).
nyolci,
“Proclamations, as usual. In science you prove things. This guy just asserts an arbitrary value. You say it’s even more.”
It is virtually guaranteed to be even more. If you can’t quantify all the various factors that determine tree ring width (and other characteristics) then it is impossible to even make an estimate of the uncertainty in your stated value!
“This bloke (and I) was specifically talking about uncertainty, not systemic errors. This debate between us has a history at other posts. Anyway, you again proclaim something without any proof. How on earth do you know there’s still systemic error? You have that strange feeling in your guts?”
Once again you demonstrate your lack of understanding of physical science. The term is actually systematic uncertainty. While random uncertainty (i.e. variations in the measurement of the same thing using the same measurement device) can be estimated statistically, systematic error (independent measurement of different things) can be quantified only through research and analysis.
from this site: https://www.slac.stanford.edu/econf/C030908/papers/TUAT004.pdf
“Most measurements of physical quantities in high energy physics and astrophysics involve both a statistical uncertainty and an additional “systematic” uncertainty. Systematic uncertainties play a key role in the measurement of physical quantities, as they are often of comparable scale to the statistical uncertainties”
“Statistical uncertainties are the result of stochastic fluctuations arising from the fact that a measurement is based on a finite set of observations. Repeated measurements of the same phenomenon will therefore result in a set of observations that will differ, and the statistical uncertainty is a measure of the range of this variation. By definition, statistical variations between two identical measurements of the same phenomenon are uncorrelated, and we have well-developed theories of statistics that allow us to predict and take account of such uncertainties in measurement theory, in inference and in hypothesis testing”
“Systematic uncertainties, on the other hand, arise from uncertainties associated with the nature of the measurement apparatus, assumptions made by the experimenter, or the model used to make inferences based on the observed data.”
As I have pointed out to you before, even the Argo floats have an uncertainty associated with them, uncertainties that have nothing to do with the resolution of the actual sensor. So do field measurement devices. You simply don't know if something has impacted the water or air flow in the SYSTEM, and therefore the stated value read from the SYSTEM is uncertain. That uncertainty is not able to be reduced statistically. IT IS NOT RANDOM ERROR conducive to statistical analysis.
“Now I’d pull my usual methodology and appeal to authority by pointing out that you are NOT authority in this field. Neither Jim (who is confusingly very much like Tim).”
Authorities are authorities because they can back up what they say. Andy and Jim have certainly done so. I have given you reference after reference to back up what I say, references that actually do the math. All you have to offer is an Appeal to Authority – commonly known as name dropping. Name dropping proves nothing.
You can continue to proclaim your ignorance for everyone to see or you can actually do some research on the subject. Which you choose is up to you!
Tim, Nice comment, I agree.
Correct, the field of study is universally called "systematic uncertainty." As a newly minted wordsmith, I have a problem with the name and used "systemic" deliberately because it more accurately reflects what I meant.
systematic: means methodical. It is a plan to affect the system.
systemic: Related to an outside force that affects the whole system. A pandemic is systemic, for example.
I used “systemic” because I meant that in the past, there were forces on climate that we do not have today, for example the Maunder minimum. Today we may have forces that did not exist in the past, industrialization for example.
Let me add that you never gave any kind of scientific explanation of what the uncertainty in a recorded measurement of 75 actually is. I’ll say again, that is a non-repeatable measurement that fades away in time never to be made again. You can never know where the mercury was other than between 74.5 and 75.5 degrees. That is uncertainty in measurement! It is not amenable to statistical treatment since it is a single measurement with no distribution surrounding it.
The usual bullshitting… Uncertainty is dependent on the type of the instrument and its calibration. Whether this 75 is degs. F or degs. C matters too, so again, please be more precise. You (or Tim, I can't distinguish you two) started to speak about systematic uncertainty, but I'm pretty sure you don't speak about that (but you didn't specify).
See, you can’t talk about the “uncertainty in a recorded measurement of 75”, this doesn’t make sense without specifying more. You gave a totally arbitrary “minimum uncertainty interval of +/- .5”. I guess you were talking about 75F, but then my simple household thermometer has a much better uncertainty of +/- .11 around 75F.
You are blowing smoke! The calibration of field measurement stations often happens at infrequent intervals. Should be annually but many times isn’t. Argo floats get calibrated every five years if memory serves.
I’ve told you this before but the federal guidelines accept a +/- 0.6C uncertainty in their measurement stations. See the Federal Meteorological Handbook No. 1 for documentation.
The uncertainty on my high-priced Davis weather station is +/- 0.5C *when purchased* for temps above -7C and +/- 1C for temps under -7C. There is no guarantee for uncertainty after the 1 year warranty period. I sincerely doubt that a “simple household thermometer” stuck outside in the weather for a year will have a better uncertainty than my weather station. Even the inside temperature on the Davis console is only rated at +/- 0.3C.
Even most expensive indoor thermometers (e.g. the $50 Oria – check out Amazon) only have +/- 1C uncertainty – though they may list the “resolution” as 0.1C. Resolution and uncertainty are two totally different things – as you should be aware of by now.
Produce the physical theory that converts a proxy metric into Celsius, nyolci.
You can't do it. And neither can anyone else.
Absent that theory — and it is absent — the y-axis is physically meaningless.
What is the meaning of physical error in a physically meaningless number?
Systematic error in the air temperature record. Here. (869.8 kb pdf) And here. And more to come.
A demonstration that the people in the field are incompetent: here. (1 mb pdf).
You scoff at others who have strong engineering degrees, as not experts. In what are you expert? How do you know an engineer is not expert in physical error analysis?
And how can you possibly think that physical error in climatology is in any way handled differently than physical error in the rest of science and engineering? It isn’t.
You’re clueless all the while you expostulate, nyolci. Maybe that’s why you’re so confident. You have no idea that you have no idea.
I don’t have to. C&L have just done that. Mann has done that. etc. You’re asking for something that is commonplace in climate science.
I have a “strong engineering degree” and I don’t think I’m an expert in climate science. I’m an expert in my field. That’s why I take climate scientists seriously. They are the experts of their field.
Speaking about Jim? (or Tim?) They may be experts in their narrow fields, but statistical analysis is beyond them. Good illustration: [JT]im thought "combining measurements" increases uncertainty (non-systematic). Now "combining", if done properly, reduces uncertainty. This you learn during the first 1-2 years in any serious university engineering programme. They likely studied that but didn't use it (or didn't understand it in the first place).
Exactly. I don’t think it’s handled differently. I don’t even understand why you think I think it’s handled differently.
Look who's talking… 🙂
“I don’t have to. C&L have just done that. Mann has done that. etc. You’re asking for something that is commonplace in climate science.”
And those reconstructions have been debunked over and over. They made no attempt to account for confounding variables as well as other mistakes in their statistical analysis.
“but statistical analysis is beyond them.”
Malarky. The uncertainty in single, one-time measurements is not able to be analyzed using statistics. Why do you keep saying it is?
“[JT]im thought “combining measurements” increases uncertainty (non systematic). Now “combining”, if done properly reduces uncertainty. This you learn during the first 1-2 grades in any serious university engineering programme. They likely studied that but didn’t use (or didn’t understand at the first place).”
It’s not obvious that you ever took any serious university engineering classes at all.
———————————————-
From “Data Reduction and Error Analysis”, 3rd Edition by Bevington and Robinson
A study of the distribution of the result of repeated measurements of the same quantity can lead to an understanding of these errors so that the quoted error is a measure of the spread of the distribution. However, for some experiments it may not be feasible to repeat the measurements and experimenters must therefore attempt to estimate the errors based on an understanding of the apparatus and their own skill in using it.
———————————————–
Temperature measurements are *NOT* repeated measurements of the same quantity.
How many times must that be repeated before it sinks in?
The uncertainty in single, independent temperature measurements must be estimated based on an understanding of the apparatus.
John Taylor in “An Introduction to Error Analysis” (Chapter 3.5) states: “When measured quantities are added or subtracted the uncertainties add”.
Think about calculating the circumference of a table top. First you measure the width and then the length. Those are independent measurements of different quantities. When you add them together, the uncertainties in each add. The overall uncertainty INCREASES, it does *not* reduce. There simply is no way to analyze each of those single measurements using statistics to reduce the uncertainty.
Now, taking a temperature reading at a station in Kansas City, MO and one in Olathe, KS is similar. They are independent measurements of different quantities (just as with the table) – therefore if you try to average them then you *must* increase the uncertainty of the sum by adding the uncertainties in quadrature – root sum square.
When you average the single, independent temperature readings at 100 stations you have to add their uncertainties in quadrature. I.e., (uncertainty) x (sqrt(100)) = 10 x uncertainty. Your +/- 0.5C uncertainty becomes +/- 5C.
That uncertainty simply overwhelms the ability to actually determine a difference of 0.1C from year to year!
BTW, you never answered Jim’s question. What is the uncertainty of a reading of 75F at the Olathe, KS airport measurement station? Don’t run away. Don’t blow smoke up our butts. Just give a straightforward, simple answer. If you have no idea then just admit it!
nyolci,
If you disagree with estimates by others of error or uncertainty, then it is incumbent upon you to quote your own preferred values.
I have been asking this question of Australia’s BOM for 6 years now, with no useful answer. Q: “If a person seeks to know the separation of two daily temperatures in degrees C that allows a confident claim that the two temperatures are different statistically, by how much would the two values be separated?”
nyolci, are you able to provide an answer? Geoff S
Hm, I don’t have preferred values. I pointed out that all the scientific papers mentioned here gave the errors usually as 95% confidence intervals.
Another bullshit question. We have to know the thermometer's type and calibration for this. Actually, in a sense we can never say this is bigger than that, 'cos we are here dealing with probability distributions that only go to zero asymptotically. But we can tell with a certain probability that is very high.
But if we take J/Tim's bullshit seriously then we can use a continuous uniform distribution with an interval length of 1 (C?). Because he/they think the actual value can be anywhere in this interval (see the thick line example above) uniformly. Actually, uncertainty is the standard deviation of the (possibly empirical) probability distribution, so the actual interval length is ~3.46C, but that doesn't matter. Then we need a separation of 1C (or ~3.46C).
“Hm, I don’t have preferred values. I pointed out that all the scientific papers mentioned here gave the errors usually as 95% confidence intervals.”
What errors? What scientific papers? I *never* see any uncertainty interval quoted in any of the climate science papers.
You need to stop digging the hole you are standing in.
Uncertainty is usually defined as the 95% confidence interval for the true value. But if you never quantify the uncertainty then the term “confidence interval” is meaningless!
See: http://www.physics.pomona.edu/sixideas/old/labs/LRM/LR03.pdf
———————————–
“How can one quantify uncertainty? For our purposes in this course, we will define a value's uncertainty in terms of the range centered on our measured value within which we are 95% confident that the "true value" would be found if we could measure it perfectly. This means that we expect that there is only one chance in 20 that the true value does not lie within the specified range. This range is called the 95% confidence range or 95% confidence interval.
The conventional way of specifying this range is to state the measurement value plus or minus a certain number. For example, we might say that the length of an object is 25.2 cm ± 0.2 cm: the measured value in this case is 25.2 cm, and the uncertainty U in this value is defined to be ±0.2 cm. The uncertainty thus has a magnitude equal to the difference between the measured value and either extreme edge of the uncertainty range. This statement means that we are 95% confident that the measurement's true value lies within the range 25.0 cm to 25.4 cm.”
—————————————————-
“Another bullshit question. We have to know the thermometer’s type and calibration for this. Actually, in a sense we can never say this is bigger than that, ‘cos we are here dealing with probability distributions that are null out only asymptotically. But we can tell with a certain probability that is very high.”
Individual, single, independent temperature readings do *NOT* have a probability distribution! You keep falling back on the idiocy that you can take multiple readings of the temperature and use the central limit theorem to calculate an ever more accurate mean.
You simply can’t do that. When you take a temperature measurement that quantity is gone, fini, disappeared into the 4th dimension – never to return. The mean of that reading is the reading itself. The standard deviation of that reading is zero. There is no variance since there is only one data point in the data set so there is no standard deviation either.
I’ve given you the Federal Meteorology Handbook No. 1 that specifies the type and calibration standards for federal measurement stations. So why are you still quibbling about thermometer type and calibration?
“But if we take J/Tim’s bullshit seriously then we can use a continuous uniform distribution with a interval length of 1(C?). Because he/they think the actual value can be anywhere in this interval (see the thick line example above) uniformly.”
The definition of a continuous uniform distribution is where each point in the interval has an equal chance to happen. When did anyone say this? An uncertainty interval has *NO* probability distribution at all. Not even a uniform one. Who knows if all points in the interval have an equal probability of being the true value? I don’t. You might think you do but you don’t. As the quoted document above states, the uncertainty interval is just that interval we are 95% confident contains the true value – nothing more. It specifies *nothing* about a probability distribution for the values in that interval.
That black line you speak of is the uncertainty interval. The true value could be *anywhere* in the interval. That uncertainty interval does *not* define a probability distribution – period, exclamation point.
“Actually, uncertainty is the standard deviation of the (possibly empirical) probability distribution, so the actual interval length is ~3.46C but doesn’t matter. Then we need a separation of 1C (or ~3.46C).”
How do you have a variance, standard deviation, and mean in one, single, independent measurement? It's a data set of size one. If you have none of those then you have no probability distribution. And an uncertainty interval is *NOT*, let me repeat, IS NOT a probability distribution.
““Now I’d pull my usual methodology and appeal to authority by pointing out that you are NOT authority in this field. Neither Jim (who is confusingly very much like Tim).”
Am I an authority? You bet your butt I am not. That is why I make sure I can provide technical references that provide the math and explanations for the claims I make about uncertainty. I should note that I have yet to see one technical reference from you that backs up what your claimed climate scientist authorities have done.
The other part of the response is that I may not be an expert, I sure am knowledgeable. Have you ever designed wideband RF amplifiers with a given noise figure and gain? Dealt with measuring devices that were not precise enough or exactly accurate? How about matching output and input impedance so you don’t spoil the noise figure and gain?
I recently installed all new real wood trim, base and crown, and doors in my house. I can assure you frame carpenters and drywall folks have little knowledge about measurement uncertainty, or certainty for that matter. Try installing a door where the top frame is not level and one wall leans one way and the other wall leans the other way. Try cutting base trim where the walls both lean AND don't meet at 90 degrees. Add to that a mitre saw's inability to precisely repeat angled cuts and you'll soon learn about uncertainty. You'll also learn to appreciate silicone caulking too! Oh, and I forgot operator error also!
You don’t appear to have dealt with any of these issues in a professional job. I suspect that is why you only refer to expert opinion.
See the articles referenced in Andy’s writing above. Like C&L2012 or Mann2008.
I’m pretty sure you’re good in this. What you wrote about statistical analysis revealed you were not good in that.
I have to confess my sins, yes… 🙂
nyolci,
“This “nominal uncertainty range of +/- .5C” is the artifact of your imagination. By the way, it was Jim who was bullshiting about this, I think you accidentally switched role.”
No figment here! If our best measuring stations have a +/- 0.5C uncertainty, then tree ring proxies *CERTAINLY* have a wider uncertainty range! I was being unbelievably trusting in only applying +/- 0.5C to the tree ring proxy.
“No one assumed this. They calculated a 95% confidence interval. It’s explained in the article.”
Then why is there nothing in the report figures about the uncertainty? I don't think you actually know what a confidence interval is. A confidence interval can't tell you what the actual true value is. It just tells you that it is probably somewhere in an interval. And proxies don't provide enough data points to actually know if the probability distribution around each individual value is normal or not! There are too many confounding factors to actually do even an accurate estimate! Was it temperature (e.g. a late or early frost) or precipitation or insects or surrounding tree density (i.e. shade) that actually determined the width of each tree ring? Do *YOU* know from 2000 years ago? Does *anyone* know from 2000 years ago? That is why it is so important to spend significant effort in determining what uncertainty should be applied!
“Is it your favorite stick horse of “uncertainty is increased by averaging”? If so, lemme quote Jim: “Variability is often reduced to nothing by averaging and then smoothing.” Now it is very hard to argue that uncertainty and variability are very-very different things. NB. C&L2012 used annual or decadal values.”
It’s not a “stick horse”. It is actual physical science. Uncertainty and variability ARE very-very different things!
How do you get annual or decadal values? Are they calculated or measured? If they are calculated then the uncertainty of values used in the calculation must be propagated along with the calculation itself.
Variability is how much things change in the short term. Uncertainty lays out how well you can measure that variability. If you think your stated value of variability from natural causes is 0.2C but your uncertainty in each measurement used to calculate that variability is 0.5C then you don’t actually know *what* the true value of the variability is.
What you want us to believe is that when you say a 2″x4″ stud is 96″ long +/- 0.25″ then it really means that it is exactly 96″ long. You want to throw away the uncertainty in your measurement. *THAT* is what the climate scientists are doing!
Note … a pre-cut stud for an 8′ wall is 92 5/8 inches … exactly. There are three 1.5 inch plates to account for and deduct.
Rory,
That is to allow for the double 2″x4″ top plate and the single 2″x4″ bottom plate in a stud wall. So substitute 92 5/8″ for 96. My statement still holds.
Good points Tim. It is annoying when someone, whether it is Mann or nyolci, takes a regression computed 2 standard deviations from the mean and assumes that encompasses the true value. Happens all the time. The more computers are used to massage data, the dumber we seem to get as a society. It is why I push the concept of staying as close to the actual measurements as humanly possible.
You really do need to learn the difference between uncertainty and error. THEY ARE NOT THE SAME THING. Please obtain this book and learn the information in it. Until you do, you are going to appear ignorant to the folks here that deal with the two different things: An Introduction to Error Analysis: The Study of Uncertainties in Physical Measurements by Dr. John R. Taylor.
The real physical variability is not changed but it is hidden behind the averaged and smoothed process.
Here are some references you can give us an argument about. Please don’t use a simple appeal to authority that says these people know what they are doing! Don’t be stupid, read and study these references to learn something about which you are trying to appear knowledgeable.
====================================
“Variances must increase when two variables are combined: there can be no cancellation because variabilities accumulate.”
https://intellipaat.com/blog/tutorial/statistics-and-probability-tutorial/sampling-and-combination-of-variables/
====================================
“We can form new distributions by combining random variables. If we know the mean and standard deviation of the original distributions, we can use that information to find the mean and standard deviation of the resulting distribution.
We can combine means directly, but we can't do this with standard deviations. We can combine variances as long as it's reasonable to assume that the variables are independent.”
https://www.khanacademy.org/math/ap-statistics/random-variables-ap/combining-random-variables/a/combining-random-variables-article
==================================
The root sum of squares is the way that combines the standard uncertainties of more than one contributor to provide our overall combined uncertainty. This is not influenced by the number of measurements we take to determine our standard uncertainty and there is no division by the number of measurements involved.
https://pathologyuncertainty.com/2018/02/21/root-mean-square-rms-v-root-sum-of-squares-rss-in-uncertainty-analysis/#:~:text=The%20root%20sum%20of%20squares%20is%20the%20way,no%20division%20by%20the%20number%20of%20measurements%20involved.
==================================
And here is a reference showing how to combine random variables with different variances.
Combined Variance | eMathZone
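To make the quoted rules concrete, here is a small numerical sketch checking that the variances of independent random variables add when the variables are summed, and that independent standard uncertainties combine in quadrature (root sum of squares). The numbers are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 1_000_000

x = rng.normal(0, 2.0, n)      # standard deviation 2.0
y = rng.normal(0, 1.5, n)      # standard deviation 1.5

# Variances of independent variables add when the variables are summed.
print(np.var(x + y), 2.0**2 + 1.5**2)        # both approximately 6.25

# Root-sum-of-squares combination of independent standard uncertainties.
u = np.array([0.5, 0.3, 0.2])
print(np.sqrt(np.sum(u**2)))                 # combined standard uncertainty
```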
Let’s notice that nyolci is silent on the basic question of physical meaning.
His other arguments are mere distractions.
I didn’t speak about error. It was Andy with “systemic error”. The rest of your rant is irrelevant but I think I know where you got the feeling you have:
Yes. And variance halves when averaging two independent variables with the same distribution. This does decrease standard deviation.
If you mean two independent variables with the same mean and standard deviation then when you add them (in order to average them) the variance doubles and the standard deviation goes up by sqrt(2).
———————————–
The Pythagorean Theorem of Statistics: Quick. What's the most important theorem in statistics? That's easy. It's the central limit theorem (CLT), hands down. Okay, how about the second most important theorem? I say it's the fact that for the sum or difference of independent random variables, variances add:
For independent random variables X and Y,
Var(X +/- Y) = Var(X) + Var(Y)
———————————–
If the distributions are the same then Var(X) = Var(Y) and Var(X+/-Y) = 2Var(X) = 2Var(Y).
If you then divide by 2 (in order to do an average) you wind up with Var(X) = Var(Y). The variance does *not* halve and it does not decrease the standard deviation.
And none of this applies to uncertainty because uncertainty has no variance, standard deviation, or mean!
Large uncertainties in temperature
Large uncertainties in time
Large uncertainty about whether tree rings are useful for temperature at all.
Leave out those tree rings and you can get something very different
Sea proxies also show a distinct warmer period.
Especially in the Arctic
Andy
Nice article
This graphic is taken from my article on the Intermittent Little Ice Age and uses CET (including my extension to 1539).
The graphic looks at the temperatures experienced in England over a 70 year life time. It bears a close resemblance to your figure 2. CET is often said to be a reasonable but not perfect proxy for at least the Northern Hemisphere and some argue a wider area.
slide4.png (720×540) (wordpress.com)
Freaking hilarious! 🙂
I made headlines!!!
We can feel Andy's awkwardness from his writing 'cos this paper does not confirm his assertions, however hard he's trying to push it. The two quotes below are from the conclusions section. The paper states many more times that its results are in agreement with previous work. IMHO it's a refinement for the extra-tropical NH.
Please note that we are 1C above the mid-20th century temperature; furthermore, this paper is not a good choice against Mann w/r/t the MWP (while it reconstructs a somewhat higher peak) for other reasons too:
Andy, being The Andy, cannot leave his characteristics. He is coming up with some tiring bs:
One is justifiably suspicious about these sudden 2C drops ‘cos this usually signifies strong volcanic eruptions, confirmed indeed for the 540s. This is above variability, and these events are well identifiable in proxies, nothing special for C&L2012.
More Andyisms:
Thank you, Andy. We didn’t know that. Now we do, thank you. Seriously, do you really believe scientists think a reconstructed temperature series is usable the same way as the instrumental record?
Current warming is way faster than what can be characterized as “low frequency”. And it’s getting even faster.
Another EVIDENCE FREE rant…
YAWN !
Warming is now COOLING. !
Many places haven’t warmed at all this century.
Trends have not increased .. unless data gets “adjusted”
You FAILED again, nye !!
Yes, 70 year cycles. Should be cooling now.
Judging from the weather right now we are already witnessing some cooling.
MWP existed globally, and was warmer than now
GET OVER IT !
Stop your ignorant and childish, evidence-free Climate Change DENIAL.
I can't believe you said that, nyolci. You do realize that Michael Mann, et al. did exactly that when he spliced the instrumental record onto his reconstruction. He did this in 1998, 1999, and in 2008. His infamous conclusion, which was rejected by the National Research Council and others, was as follows. There are other versions, but this one is from Mann (2008):
See the contradiction?
🙂 I was talking exactly about this 😉 Scientists know very well how to use these reconstructions with the instrumental record; they don't mix up what should not be mixed up, they don't get confused. Unlike you. Mann's method has been explained numberless times. Well, you should be a bit more careful; I didn't say you could never compare them or whatever. I only said scientists didn't think they were usable in the same way.
BTW, there’s another small error in your text:
It wasn’t rejected.
Back to business: I think you are starting to realize by now that bringing up C&L was an error; they reinforce science. No wonder you only addressed a single (misunderstood) sentence from my post.
nyolci, it was rejected. This is a quote from pages 4 and 115 of the National Research Council report:
I didn’t misunderstand your statement. You didn’t read the report. Other critiques, including Wegman’s were harsher.
No, it wasn't. It noted some minor uncertainties; that's what you lifted out of context. But by and large:
“The basic conclusion of Mann et al. (1998, 1999) was that the late 20th century warmth in the Northern Hemisphere was unprecedented during at least the last 1,000 years. This conclusion has subsequently been supported by an array of evidence that includes both additional large-scale surface temperature reconstructions and pronounced changes in a variety of local proxy indicators.”
and
“Based on the analyses presented in the original papers by Mann et al. and this newer supporting evidence, the committee finds it plausible that the Northern Hemisphere was warmer during the last few decades of the 20th century than during any comparable period over the preceding millennium”
But you again try to use science to deny science. This report, as expected, confirmed our (now almost religious 🙂 ) faith in climate science, including such things as models, the observational confirmation of warming, and the worth of reconstructions.
Well, either you did or you pretend you did; the latter is the simplest way to duck answering the question of whether you are embarrassed to discover that C&L 2011/2012 can't be used for denial.
“Finds it plausible” ??
You’ve got to be kidding me.
Mann said: “ supporting the conclusion that both the past decade and past year are likely the warmest for the Northern Hemisphere this millennium”
If you don’t know the difference between “likely” and “plausible” I’m afraid I can’t help you.
What is the difference? Can you quantify it?
…..Is the COWARDLY d’nyholist way
Just avoid answering at all.
Let's watch the d'nyholist avoid completely, yet again 😀
1… Do you have any empirical scientific evidence for warming by atmospheric CO2?
2… In what ways has the global climate changed in the last 50 years , that can be scientifically proven to be of human released CO2 causation?
“The basic conclusion of Mann et al. (1998, 1999) was that the late 20th century warmth in the Northern Hemisphere was unprecedented during at least the last 1,000 years. This conclusion has subsequently been supported by an array of evidence that includes both additional large-scale surface temperature reconstructions and pronounced changes in a variety of local proxy indicators.”
Regional surface temperature charts refute Mann's claim that 1998 was the hottest year in 1,000 years.
Regional surface temperature charts show the Early Twentieth Century was just as warm or warmer than 1998.
James Hansen said the decade of the 1930s was the hottest decade and 1934 was hotter than 1998.
Michael Mann should have talked to Hansen before making this ridiculous “hockey stick” claim.
**This conclusion has subsequently been supported by an array of evidence that includes both additional large-scale surface temperature reconstructions and pronounced changes in a variety of local proxy indicators“.**
It has not been supported by any legitimate science. His friends changed a couple of proxies but retained the main problem and claimed SUCCESS!! Then the nonsense was refuted by Steve McIntyre.
Did you read Christiansen and Ljungqvist 2011 and 2012?
I don’t think you did.
I agree they reinforced science. They showed Mann, 1998, 1999, and 2008 were nonsense.
They helped paleo-temperature science tremendously by showing what it can and cannot do. They illuminated the data, opened the curtain. I think you need to re-read the papers.
“Scientists”: Which scientists? When? In what publication? Be specific. Improper references are unscientific. So is poor grammar.
There, I fixed it for you.
“Scientists”: Again, please be specific as to which scientists, when and in what publication.
“Current warming is way faster than what can be characterized as “low frequency”. And it’s getting even faster.”
There is no acceleration in the satellite lower troposphere record, only in the homogenized surface record. One wonders why.
Notice how Warmunists like A-holist have moved the goalposts once more. The problem is no longer the temperature (which refuses to cooperate), but the rate of increase.
Nyolci, one of the best integrating proxies for global temperature is sea-level rise averaged over periods longer than El Niño/La Niña. Warming causes both seawater expansion and melting of land-grounded glacial ice, which also adds to sea-level rise. Water impoundment and aquifer withdrawal are small, typically needing only small short-term corrections. The only correction needed is for land subsidence, but that can be done accurately with a site-specific linear adjustment. That adjustment doesn't affect acceleration estimates, so we can confidently assess acceleration (a small numerical sketch of this point follows below). Sea-level rise showed fast acceleration after the end of the Little Ice Age, starting after 1850 (before CO2 started to rise), and slow acceleration through the early 1900s. There is very little to no acceleration now, indicating that there is likely no acceleration in tropospheric temperatures. That, and the satellite-based temperature record, indicates that you're badly mistaken in your (unsupported) claim of an accelerating temperature rise.
See the accompanying sea level rise graph from Climate Etc.:
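A minimal numerical sketch of the subsidence point, using invented numbers rather than real tide-gauge data: because least-squares fitting is linear in the data, a purely linear land-motion term can only shift the fitted rate; the quadratic (acceleration) coefficient is unchanged.

```python
import numpy as np

# Hypothetical numbers for illustration only (not actual tide-gauge data).
years = np.arange(1900, 2021, dtype=float)
t = years - years[0]
rate = 1.8              # assumed rate, mm/yr
accel = 0.01            # assumed acceleration, mm/yr^2
sea_level = rate * t + 0.5 * accel * t**2   # idealized gauge record
subsidence = -0.5 * t                       # purely linear land motion, mm

# Quadratic least-squares fits with and without the linear subsidence term.
raw = np.polyfit(t, sea_level, 2)
adj = np.polyfit(t, sea_level + subsidence, 2)

# The quadratic coefficient (half the acceleration) is identical in both fits;
# only the linear (rate) term absorbs the subsidence.
print(2 * raw[0], 2 * adj[0])   # both ~0.01 mm/yr^2
```

The same holds for any linear correction applied before fitting, which is why a site-specific subsidence adjustment cannot create or hide acceleration.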
nyolci, OK you complain I didn’t respond to your other comments, most of them are not worth responding to, but here we go.
The reconstructions show that the MWP is as warm or warmer than the mid-twentieth century, true.
You say we are one degree warmer than the mid-twentieth century now. I say so what? What happens in the next 50 years? Guy Callendar was brilliant in 1938 and an idiot in 1962. Climate is long-term.
Yes, volcanoes reduce temperatures. What volcanic eruption was there in 542 AD? Why is a two-degree drop in 542 and a 1.6-degree rise between 976 and 990 AD insignificant, while a one-degree rise between 1950 and 2021 is a big deal? Where is your perspective?
Current warming is faster than low frequency? What about 542 AD and 976 AD?
If you believe the paleo-temperature record and think that it can be compared to today's temperatures, be consistent. The data for your position has the same credibility as the data against it. We are not allowed to pick and choose the data that suits us.
If 1950-2021 is important, so is 976-990. Or maybe the data for 976-990 is not very accurate? If that is the case, then how can we compare 1950-2021 to anything in the paleo record?
Sorry nyolci, you are losing me.
At last!
At last you accept facts!
Yep. And we have a long-term instrumental record and very good reconstructions. That's why we know with high confidence (these are quantifiable things, quantified by people knowledgeable in these matters) that the cause is the anthropogenic build-up of greenhouse gases.
Fcuk knows. I wasn't there. It was in Iceland. They found it out using Swiss ice cores. It was a bit before 542. FYI, the "year without a summer" phenomenon has long been connected with volcanic eruptions.
What 1.6 degree rise between 976 and 990? 🙂 You mix up a trend and the variability on it (the "grey line"). That was a 0.16 C/decade warming, less than what we have today, and much less than what we have had in the last few decades, which is accelerating. And the latter is scientifically proven not to be variability.
In the science.
Yes. It's evident on time scales of less than 30-50 years (the rule-of-thumb low-frequency limit) and proven not to be the result of variability.
Trying hard 🙂 The good thing in science is that you have the evidence. So for comparisons, this is why we should use the smoothed record, not the grey lines you love so much. For today, we have the appropriate smoothing too. Scientists have worked these out, and as a result we can see the actual trend as opposed to variability.
As above. Now we have quite a clear picture of what constitutes variability and what constitutes trend, both in the reconstructions and nowadays.
"At last you accept facts!"
The far north (at the very least) was WARMER than T-O-D-A-Y. Lots of EMPIRICAL evidence demonstrates this.
The far south too……”Reporting in the peer-reviewed journal Geology, scientists encountered what appeared to be the fresh remains of Adelie penguins in a region where penguins are not known to live. Carbon dating showed the penguin remains were approximately 800 years old, implying the remains had very recently been exposed by thawing ice”
If you disagree, please provide proof instead of simple hand waving or a proxy reconstruction. Your bullshit is getting boring.
When you rely on smoothed time-series data, you only know what the average climate was like, not what the short-term variations were. That is, if you are relying on anomalies and the temperature anomaly one year was +2 and the next year was -2, the average (0) makes it appear that there was no change. However, it was one Hell of a ride for the farmers over those two years.
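A minimal sketch of that damping effect, with an invented annual series (the 50-year window simply mirrors the moving averages used for the reconstructions in Figure 1):

```python
import numpy as np

rng = np.random.default_rng(0)
# Invented annual anomalies: large year-to-year swings around a flat baseline.
annual = rng.choice([-2.0, 2.0], size=500) + rng.normal(0, 0.3, 500)

def moving_average(x, window=50):
    """Simple moving average via convolution ('valid' trims the ends)."""
    return np.convolve(x, np.ones(window) / window, mode="valid")

smoothed = moving_average(annual, 50)
print(f"annual std:   {annual.std():.2f}")    # roughly 2
print(f"smoothed std: {smoothed.std():.2f}")  # far smaller: the swings are hidden
```

The smoothed series keeps the long-term average but erases exactly the +2/-2 whiplash the farmers would have felt.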
Where did this come from? You’ve presented no evidence that anthropogenic GHGs had anything to do with warming. neither has anyone else.
So you admit that we cannot rely on the paleo-temperature record. We have no idea what the natural variability was before the instrumental record, thus no idea how recent warming compares to past warming. Nice to see you are accepting the facts.
But, if you don’t accept the accuracy of the paleo-temperature record, how is it evident? You are contradicting yourself.
For your other silly comments, see above.
We are all STILL WAITING for you to produce any scientific evidence, d'nyholist.
It seems that YOU don’t actually have any evidence.
RUBBISH!
STILL WAITING for your evidence, d’nyholist. !
Evidence-free norwegian-blue “blah, blah”, is NOT science..
….. but seems to be all you have, is that right, d’nyholist !!!
Would you like to run away, yet again?? Everyone is watching and LAUGHING. 😆
1… Do you have any empirical scientific evidence for warming by atmospheric CO2?
2… In what ways has the global climate changed in the last 50 years , that can be scientifically proven to be of human released CO2 causation?
990 - 976 = 14 years; 1.6 C over 1.4 decades ≈ 1.1 degrees/decade.
Work on your math skills.
And you are PROVABLY WRONG, d’nyholist
Maybe it is somewhere, in the middle of an urban cluster
But globally.. NOPE
And the warming since the LIA has absolutely zero human cause, except urban warming smeared all over places where it doesn't belong.
There is NO EVIDENCE of warming by human-released atmospheric CO2.
If you think there is, then have the guts to at least attempt to answer these two questions... WITH EVIDENCE.
Oh wait.. you don't "believe" in scientific evidence, do you, d'nyholist?!
1… Do you have any empirical scientific evidence for warming by atmospheric CO2?
2… In what ways has the global climate changed in the last 50 years , that can be scientifically proven to be of human released CO2 causation?
Oops,
third graph was meant to be this one
Let’s add a couple from around the world .
And a couple more for good measure
Andes, South America
Central Asia
Central Siberia.
“You say we are one degree warmer than the mid-twentieth century now.”
I’m wondering where nyolci got this one-degree C figure?
My understanding is that when we reached the high-point temperature of 2016, the so-called "hottest year evah!" (tied with 1998), we were at that time one degree C above the average for the period from 1850 to the present (figured using a bastardized Hockey Stick chart).
Since 2016, temperatures have dropped by about 0.7 C, so that would put us at about 0.3 C above the 1850-to-present average as of today. Not 1.0 C but 0.3 C.
It’s getting cooler, nyolci. Have you noticed?
Here’s the UAH satellite chart:
http://www.drroyspencer.com/wp-content/uploads/UAH_LT_1979_thru_December_2020_v6.jpg
“One is justifiably suspicious about these sudden 2C drops ‘cos this usually signifies strong volcanic eruptions, confirmed indeed for the 540s. This is above variability, and these events are well identifiable in proxies, nothing special for C&L2012.
More Andyisms:”
Thanks for your comment, I was also struck by this point. The one-year paleo records seem to show year-over-year variability that is significantly greater than that seen in the instrumental records. I do not know how you can look at those jumps and think, "we can trust that this annual variability holds for the pre-instrumental period."
Any post discussing proxies should start with
McShane and Wyner
https://projecteuclid.org/euclid.aoas/1300715170
“In this paper, we assess the reliability of such reconstructions and their statistical significance against various null models. We find that the proxies do not predict temperature significantly better than random series generated independently of temperature.”
After they posted this in 2010, many groups, including Mann and Schmidt, commented.
I think it is very valuable to read through this story and, most importantly, the "rejoinder" where McShane and Wyner defend their findings against all criticism and particularly "destroy" the comment by Mann as his work seems to have math errors, coding errors and his numbers do not support the critique he is raising. All said and done, McShane and Wyner's work stands, and this unfortunately means all the papers you cite here are quite meaningless!
One of the biggest issues they raise is quite easy to see, really:
The process of proxy selection and screening is not adequately captured in the statistical modeling, and thus the calculated numbers and uncertainties are quite meaningless.
Like exploring the sugar content of apples by harvesting in April..
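As a toy sketch of the null-model comparison McShane and Wyner describe, the snippet below scores noisy pseudo-proxies that do contain a temperature signal against AR(1) series generated independently of temperature, using out-of-sample RMSE. Every series, parameter, and the calibration/holdout split here is invented for illustration; this is not their benchmark or their data.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_train = 150, 100                      # invented "instrumental" overlap and calibration length
temp = np.cumsum(rng.normal(0, 0.1, n))    # stand-in temperature series (random walk)

def ar1(length, phi=0.9, sigma=0.1):
    """Generate an AR(1) series that is independent of temperature."""
    x = np.zeros(length)
    for i in range(1, length):
        x[i] = phi * x[i - 1] + rng.normal(0, sigma)
    return x

def holdout_rmse(predictors, target):
    """Calibrate OLS on the first n_train points, score RMSE on the rest."""
    X = np.column_stack([np.ones(len(target)), predictors])
    beta, *_ = np.linalg.lstsq(X[:n_train], target[:n_train], rcond=None)
    residual = X[n_train:] @ beta - target[n_train:]
    return np.sqrt(np.mean(residual ** 2))

proxies = np.column_stack([temp + rng.normal(0, 0.3, n) for _ in range(5)])  # signal + noise
nulls = np.column_stack([ar1(n) for _ in range(5)])                          # no signal at all

print("pseudo-proxy holdout RMSE:", round(holdout_rmse(proxies, temp), 3))
print("null (AR1)   holdout RMSE:", round(holdout_rmse(nulls, temp), 3))
```

If the proxies carry a real signal, their holdout error comes out well below the null's; the quoted finding is that, for the actual proxy networks, the gap was not statistically significant.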
Brilliant! I wish I’d written that!
Well thank you, Andy!
I almost did not write it, because it seems to have been well known for more than a decade. "Proxy artists" keep publishing their data while seemingly ignoring that paper and its basic and very problematic findings. I cannot understand why every proxy paper after McShane and Wyner is not called out right away. This field of science needs to deal with that paper before it can move on!
“McShane and Wyner defend their findings against all criticism and particularly “destroy” the comment by Mann as his work seems to have math errors, coding errors and his numbers do not support the critique he is raising.”
Mann’s work contains no errors. He did exactly what he wanted to do.
“The process of proxy selection and screening is not adequately captured in the statistical modeling, and thus the calculated numbers and uncertainties are quite meaningless.
Like exploring the sugar content of apples by harvesting in April..”
Really, McIntyre has covered this over and over and over and over… But activist scientists completely ignore valid criticisms.
”We find that the proxies do not predict temperature significantly better than random series generated independently of temperature.”
Absolutely, but when you have proxies telling you 1000 years ago was warmer than today, together with melting penguins, alpine paths, stone tools and tree stumps now showing up and carbon-dated to around the same period, that there is science, brother!
What say you nyolci?
Mike, good points. The historical, archaeological, and glacial advance and retreat data, plus the borehole temperature data, is what I find convincing. All the proxy crap, not so much. Soon, Baliunas, et al. (see bibliography) do a good job of laying all this out.
So I went back to the C&L 2012 paper to look at the proxies used. They give a table of 91 proxies considered. What I find weird is just how many of them start at 1500 and hence are not used. In fact, all but one or two start either at year 1 or at 1500. Also, when you look at figure 3, you have to scratch your head and wonder what the purpose is of trying to combine such a diverse set of proxies into a single proxy. If there is a story to tell, it's in the differences among proxies through time. I view the paper as a complete waste of time.
Nelson, I found the paper illuminating and enjoyed reading it. Bo Christiansen and Fredrik Ljungqvist cut through the monumental BS of the whole paleo-temperature facade and revealed the true problem.
The statistical techniques were flawed. They used a straightforward and logical method to look at the proxies and found that variability increased! This had been predicted with theoretical studies, but never tried before.
Many of us had questioned modern statistical techniques for many decades, but C&L showed it was true. I think they made a valuable contribution to the community.
You say:
This is very true, but C&L showed that was the case. And for that, I commend them.
Stick to the data, the observations don’t lie. Statistics lie.
“Lies, damned lies and statistics.” Mark Twain.
Andy, what exactly are they trying to do? They start with a cross-sectional time-series database that preserves the unique properties of each time series and contains unique site-specific data, and then they mash them together in an ad hoc way to obtain a single time series that has no meaning. Most of the individual proxies look nothing like the final mess. Variability of what increased? That is my point. Variability of something that has no meaning. Who cares?
Nelson, my read? They are trying to show that the over-the-top statistical methods used in making paleo-temperature reconstructions from proxies are BS. Soon and Baliunas showed the same thing in 2003. But the BS won.
Take the data, examine it carefully, average it, and try to get a reasonable trend. That is the best you can do. Mann and the others using regression are polishing a turd, but it's still a turd.
Soon and Baliunas have been gaslighted by all and sundry of the hit squad ever since and deemed beyond the pale of corporate “climate science”.
I've noticed that if they are not trying to "polish" them, they're devising new methods to pick them up by the "clean end".
The vicious character assassination meted out to Soon and Baliunas demonstrates conclusively that this business has absolutely nothing to do with actual science.
Graemethecat and Nelson, the Soon, Baliunas and colleagues paper is still the definitive work on temperature changes over the past 1,000 years, IMHO. Christiansen and Ljungqvist is the best reconstruction using proxies; it fits the historical record fairly well. But, as Nelson says, it is not quantitative; the proxies don't allow that. The only reason I believe it is in the ballpark, is the match to historical records and the borehole temperature data.
“The only reason I believe it is in the ballpark, is the match to historical records and the borehole temperature data.”
Hockey Stick temperature data?
Stationarity in time series is a well-known issue, and there are substantial methods to identify it and sometimes to make series compatible. But not always.
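For what it's worth, a minimal sketch of one standard stationarity check, the augmented Dickey-Fuller test, run on two invented series (this assumes the statsmodels package is available and is not something anyone in this thread used):

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller  # assumes statsmodels is installed

rng = np.random.default_rng(1)
white_noise = rng.normal(size=500)              # stationary by construction
random_walk = np.cumsum(rng.normal(size=500))   # non-stationary (unit root)

for name, series in [("white noise", white_noise), ("random walk", random_walk)]:
    stat, pvalue, *_ = adfuller(series)
    print(f"{name}: ADF statistic = {stat:.2f}, p-value = {pvalue:.3f}")
# A small p-value rejects the unit-root (non-stationarity) null hypothesis;
# the random walk typically fails to reject it, flagging non-stationarity.
```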
Not being a scientist, let alone a statistician, if I were to attempt a global temperature reconstruction I would first decide what was a reliable global temperature proxy, then collect sample measurements at random from the entire class. I suspect nothing meaningful would result.
Can anybody show me the graph of a single weather station thermometer that has been around since, say 1850, which would clearly depict a dire situation?
Short answer: No.
Even for comparison purposes, Mann’s abomination is not a viable reconstruction. Those that imitate or simulate Mann fall into the same/similar traps.
From: https://arxiv.org/ftp/arxiv/papers/1204/1204.5871.pdf
“For this study, it is also of interest that one recent reconstruction (Christiansen and Ljungqvist 2012, CL12) includes a high percentage of east Asian proxies.
Contrasting to the possible orbital effects in high latitudes, there is no clear indication for a biasing effect of east Asian proxies.
However, in interpreting east Asian climate proxies some peculiarities have to be considered as for example the importance of the Tibetan Plateau as a source of elevated atmospheric heating and the relation of the (east) Asian summer monsoon to Pacific decadal variability (e.g. Chang 2000) and tropical Pacific SST-variability (e.g. Wang 2000)”
Oddly, East Asian proxies show strong cooling.
One can only assume the proxies get tortured to comply. The Mannian/Marxist way.
The ccean to the east of Asia (Western Pacific Warm Pool) have also cooled significantly during the Holocene
I must say, you are on the right track. This is what a friend and I have discovered also. If you try to find an offsetting temperature rise in other areas, you simply can't. This is what the GAT folks simply won't address, although I have asked a couple of them.
Their usual dismissal is that temperatures are correlated out to 1200 km. When you point out that they are correlated only by season, they reference a study.
The temperatures in St Paul and Kansas City correlate pretty well, especially if you adjust for seasonal variation. But when you do that, you totally lose the difference in climate between the two locations. That's what happens when you try to depend on anomalies from a local historic base. You lose what the actual climate is!
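A minimal sketch of that point with made-up station numbers (the means, seasonal cycle, and noise levels are invented, not real St Paul or Kansas City data): the anomaly series correlate strongly, yet the several-degree difference in actual climate is invisible in the anomalies.

```python
import numpy as np

rng = np.random.default_rng(2)
months = np.arange(240)                            # 20 years of monthly data
season = 12 * np.sin(2 * np.pi * months / 12)      # shared seasonal cycle, deg C
regional = np.cumsum(rng.normal(0, 0.2, 240))      # shared regional variability

st_paul     = 7.0  + season + regional + rng.normal(0, 0.5, 240)   # cooler station
kansas_city = 13.0 + season + regional + rng.normal(0, 0.5, 240)   # warmer station

def anomalies(x):
    """Subtract each calendar month's own mean (a simple climatological baseline)."""
    out = x.astype(float).copy()
    for m in range(12):
        out[m::12] -= x[m::12].mean()
    return out

r = np.corrcoef(anomalies(st_paul), anomalies(kansas_city))[0, 1]
print(f"anomaly correlation: {r:.2f}")                                             # high
print(f"difference in mean climate: {kansas_city.mean() - st_paul.mean():.1f} C")  # gone from anomalies
```

The high correlation is real, but it only says the two stations wiggle together; it says nothing about whether either place has a warm or a cold climate, which is the information an anomaly baseline throws away.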
“He doesn’t have one result, but many, then he compares them to one another.”
“When you have many ‘standards,’ you don’t have a standard.” Spencer (1990)
I've seen several of these proxy arguments lately, including one on Roy Spencer's blog.
I'm always amazed at the lengths the hockey team will go to to defend their stick, like counting angels on a pinhead, when forests emerging from under retreating glaciers, and tree lines far higher up mountains or farther north into the tundra, are clear, unambiguous evidence that it has been much warmer during the Holocene than it is today.
One called Entropy man stated that the proxies conclude that it is 0.4 C warmer now than during the Holocene optimum.
Pretty specific.
How do you argue with such people?
”How do you argue with such people?”
Show them this…
Andy wrote, “These reconstructions cannot be used to compare current warming to the pre-industrial era.”
These reconstructions are based on treemometers, and tree rings as proxies for temperatures going back hundreds of years are a F#$&ing joke and a fraud on science.
So many things affect tree growth
Temperature is just one small thing
CO2, water, surrounding trees, wildlife, other local conditions etc etc
To even "pretend" that treemometers have any use at all as a temperature guide is pretty much like believing in a Mills and Boon romance novel.
Which d’nyholist would believe if it was a Mann and gloom novel.
Your point is so obvious even a child of 10 would understand it, but not Michael Mann nor A-Holist, it seems.
“So many things affect tree growth”
That's why I've never understood the idea of tree rings as proxies for much of anything. WAY too many variables to say it's this one specific thing.
Good video:
THE SHIFTING GOALPOSTS OF CANADA’S PUBLIC OFFICIALS
https://youtu.be/NFm3FpEmdUo
By Anthony Furey, Columnist/Oped editor for Sun papers/Postmedia
Trends in bristlecone pine tree ring widths compared to tree ring widths from ten other sites in the US:
http://www.climatedata.info/proxies/tree-rings/files/stacks_image_9787.png
The website menu also shows tree ring proxies from NH and SH showing no apparent trends 1600 – 2000.
Interesting
https://climateaudit.org/2005/08/28/bristlecone-dc13-the-nail-in-the-coffin/
Seems that tree rings in bristlecone pines (one of mickey mann's faves)
are a facet of water use efficiency due to rising CO2 🙂
Nothing to do with temperature.!
OOPS . mickey mann goofed big time. !!
This is apples and oranges. It is totally illegitimate science to compare incomparable data or to graft instrumental records onto the end of proxies. This has been the basis of Mike's Nature Trick and Phil Jones' even more dishonest version of it, which was distributed worldwide on the WMO year 2000 report.
I really learned many new things from your content.
Thanks
A few points to add:
“the most recent (post-1960s) tree ring responses were noted to be negatively correlated with temperature”
Which suggests to me that they’re completely useless for that sort of measurement.
***With regard to the area covered, Moberg only has one proxy south of 30°N. Mann uses more proxies, but very few of his Northern Hemisphere proxies are south of 30°N.***
Mann may have used more proxies, but his "chart" is heavily weighted to one bristlecone pine series in the western USA. Steve McIntyre destroyed Mann's "science" at Climateaudit.
When I see anything by Mann I stop reading.
Mann's 2008 reconstruction is junk. It's based on the inverted Tiljander proxies and bristlecone pines, and uses a technique that mines even random red-noise data for hockey sticks. See here for details.
w.
And here’s a correlation analysis of the study … bad scientist, no cookies.
w.