Michael Mann’s 2008 Reconstruction

By Andy May

In my last post, a commenter who calls himself “nyolci” suggested that Michael Mann’s 2008 reconstruction (Mann, et al., 2008) was similar to Moberg’s 2005 (Moberg, Sonechkin, Holmgren, Datsenko, & Karlen, 2005) and Christiansen’s 2011/2012 reconstructions. He presents a quote, in this comment, from Christiansen’s co-author, Fredrik Charpentier Ljungqvist:

“Our temperature reconstruction agrees well with the reconstructions by Moberg et al. (2005) and Mann et al. (2008) with regard to the amplitude of the variability as well as the timing of warm and cold periods, except for the period c. AD 300–800, despite significant differences in both data coverage and methodology.” (Ljungqvist, 2010).

A quick Google search uncovers this quote in a 2010 paper by Ljungqvist (Ljungqvist, 2010), one year before the critical reconstruction by Christiansen and Ljungqvist in 2011 (Christiansen & Ljungqvist, 2011) and two years before their 2012 paper (Christiansen & Ljungqvist, 2012). It turns out that Ljungqvist’s 2010 reconstruction is quite different from those he did with Christiansen over the next two years. All the reconstructions are of the Northern Hemisphere. Ljungqvist’s and Christiansen’s cover the extra-tropical (>30°N) Northern Hemisphere, while Moberg’s and Mann’s are supposed to cover the whole Northern Hemisphere, but the big differences lie in the methods used.

With regard to the area covered, Moberg only has one proxy south of 30°N. Mann uses more proxies, but very few of his Northern Hemisphere proxies are south of 30°N. Figure 1 shows all the reconstructions as anomalies from the 1902-1973 average.

Figure 1. A comparison of all four reconstructions. All are smoothed with 50-year moving averages except for the Ljungqvist (2010) reconstruction, which is a decadal record. All have been shifted to a common baseline (1902-1973) to make them easier to compare.
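For readers who want to reproduce this kind of comparison, the baseline shift and smoothing are straightforward. The sketch below is my own Python, not code from any of the papers; the function names and window handling are illustrative assumptions.

```python
import numpy as np

def to_anomaly(years, temps, base_start=1902, base_end=1973):
    """Shift a temperature series to anomalies from a common baseline."""
    years = np.asarray(years)
    temps = np.asarray(temps, dtype=float)
    mask = (years >= base_start) & (years <= base_end)
    return temps - temps[mask].mean()

def moving_average(temps, window=50):
    """Centered moving average; the endpoints are trimmed."""
    kernel = np.ones(window) / window
    return np.convolve(temps, kernel, mode="valid")

# Toy example: a constant 1 C series becomes a 0 C anomaly series.
years = np.arange(1000, 2001)
temps = np.full(years.size, 1.0)
anom = to_anomaly(years, temps)
smoothed = moving_average(temps)
```

Note that a 50-year moving average necessarily trims about 25 years from each end of the record, one reason smoothed reconstructions are usually plotted ending well before the present.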

As Figure 1 shows, the original Ljungqvist (2010) record is similar to Mann (2008) and Moberg (2005). A couple of years after publishing Ljungqvist (2010), Ljungqvist collaborated with Bo Christiansen to produce the record labeled Christiansen (2012). It starts with the same proxies as Ljungqvist (2010), but uses a different method, which they call “LOC,” to combine the proxies into a temperature record.

In 2008, Michael Mann created several different proxy records; the one plotted in Figure 1 is the Northern Hemisphere EIV Land and Ocean record. EIV stands for “errors-in-variables” and is a total least squares regression methodology. Mann states at the beginning of his paper that he would address the criticisms (“suggestions”) in the 2006 National Research Council report (National Research Council, 2006). The result is a complex and hard-to-follow discussion of various statistical techniques applied to various combinations of proxies. He doesn’t have one result, but many, which he then compares to one another.
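To see what “errors-in-variables” means in its simplest form, here is a minimal total least squares line fit. This is my own illustrative sketch; Mann’s actual EIV implementation (built on RegEM with regularization) is far more elaborate.

```python
import numpy as np

def tls_fit(x, y):
    """Total least squares (errors-in-variables) line fit.

    Unlike ordinary least squares, which minimizes vertical distances
    and assumes x is error-free, TLS minimizes perpendicular distances,
    allowing for error in both variables.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xm, ym = x.mean(), y.mean()
    # The smallest right singular vector of the centered data matrix
    # is the direction normal to the best-fit line.
    _, _, vt = np.linalg.svd(np.column_stack([x - xm, y - ym]))
    nx, ny = vt[-1]
    slope = -nx / ny
    intercept = ym - slope * xm
    return slope, intercept

# On noise-free collinear data, TLS recovers the exact line.
x = np.linspace(0.0, 10.0, 50)
y = 2.0 * x + 1.0
slope, intercept = tls_fit(x, y)
```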

Moberg (2005) also uses regression to combine his proxies, but characterizes them by resolution to preserve more short-term variability. The statistical technique used by Ljungqvist in his 2010 paper is similar; it is called “composite-plus-scale,” or CPS. This technique is also discussed by Mann in his 2008 paper, and he found that it produced results similar to his EIV technique. Since these three records were created using similar methods, they all agree quite well.
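The CPS idea is simple enough to sketch in a few lines of Python. This is my own illustration of the general technique, not code from any of the papers; the function and variable names are assumptions.

```python
import numpy as np

def cps(proxies, inst_temp, cal_slice):
    """Composite-plus-scale: standardize the proxies, average them into
    a composite, then rescale the composite so its mean and variance
    match the instrumental record over a calibration interval."""
    proxies = np.asarray(proxies, dtype=float)   # shape (n_proxies, n_years)
    # 1. Standardize each proxy (zero mean, unit variance) and composite.
    z = (proxies - proxies.mean(axis=1, keepdims=True)) / proxies.std(axis=1, keepdims=True)
    composite = z.mean(axis=0)
    # 2. Scale to the instrumental mean and standard deviation.
    c_cal = composite[cal_slice]
    return (composite - c_cal.mean()) / c_cal.std() * inst_temp.std() + inst_temp.mean()

# Toy check: three linearly related copies of one signal.
t = np.linspace(0.0, 20.0, 200)
signal = np.sin(t)
proxies = np.stack([2.0 * signal, 0.5 * signal + 1.0, signal])
cal = slice(150, 200)
reconstruction = cps(proxies, signal[cal], cal)
```

If the proxies are exact linear functions of temperature, CPS reproduces the instrumental record over the calibration interval; real proxies are noisy, so the composite is a damped estimate.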

Christiansen and Ljungqvist (2011 and 2012)

Everyone admits that using regression-type methods to combine multiple proxies into one temperature reconstruction dampens the temporal resolution of the resulting record. Instrumental (thermometer) measurements are normally accurately dated, at least down to a day or two. Proxy dates are much less accurate; many are not even known to the year. Those that are accurate to a year often only reflect the temperature during the growing season, during winter, or during the flood season. Ljungqvist’s 2010 record is only decadal due to these problems.
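The smearing caused by dating errors is easy to demonstrate. In this toy Python experiment, entirely my own construction and not from any of the papers, copies of the same high-frequency signal are offset by random dating errors of a few years and then averaged:

```python
import numpy as np

rng = np.random.default_rng(1)

n = 1000
t = np.arange(n)
# A common "true" signal with strong year-to-year (high-frequency) variability.
signal = np.sin(2 * np.pi * t / 7.0)

# 50 proxy copies of the signal, each shifted by a random dating error of
# up to +/- 5 years, then averaged into a composite.
composites = []
for _ in range(50):
    shift = int(rng.integers(-5, 6))
    composites.append(np.roll(signal, shift))   # roll is fine for a periodic signal
stack = np.mean(composites, axis=0)

# The misdated average retains far less high-frequency variance.
ratio = stack.std() / signal.std()
```

Even modest, unbiased dating errors cancel most of the short-period variability in the composite; nothing in the individual proxies was smoothed, yet the average is.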

Inaccurate dates, no matter how carefully they are handled, lead to mismatches when combining proxy records and result in unintentional smoothing and dampening of high-frequency variability. The regression process itself dampens low-frequency variability. Christiansen and Ljungqvist write:

“[Their] reconstruction is performed with a novel method designed to avoid the underestimation of low-frequency variability that has been a general problem for regression-based reconstruction methods.”

Christiansen and Ljungqvist devote a lot of their paper to explaining how regression-based proxy reconstructions, like the three shown in Figure 1, underestimate low-frequency variability by 20% to 50%, and they list many papers that discuss the problem. These reconstructions cannot be used to compare current warming to the pre-industrial era; the century-scale detail prior to 1850 simply isn’t there after regression is used. Regression reduces statistical error, but at the expense of blurring critical details. Therefore, Mann splicing instrumental temperatures onto his record in Figure 1 makes no sense. You might as well splice a satellite photo onto a six-year-old child’s hand-drawn map of a town.
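The variance loss from regression itself can also be demonstrated with synthetic data. In this sketch, my own toy experiment rather than anything from Christiansen and Ljungqvist, temperature is regressed on a noisy proxy, and the reconstruction’s standard deviation shrinks by roughly the proxy-temperature correlation:

```python
import numpy as np

rng = np.random.default_rng(0)

# A "true" temperature history with large low-frequency swings.
n = 2000
t = np.arange(n)
true_temp = 0.8 * np.sin(2 * np.pi * t / 1000)   # ~1.6 C peak-to-peak

# A proxy = temperature plus substantial noise.
proxy = true_temp + rng.normal(0.0, 0.8, n)

# Ordinary ("direct") regression of temperature on the proxy.
slope, intercept = np.polyfit(proxy, true_temp, 1)
reconstruction = slope * proxy + intercept

# ratio < 1: the reconstruction's amplitude is attenuated, roughly by
# the proxy-temperature correlation coefficient.
ratio = reconstruction.std() / true_temp.std()
```

With a proxy this noisy, the regression recovers only a little over half of the true amplitude, which is the order of attenuation the paper describes.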

Christiansen and Ljungqvist make sure all their proxies have a good correlation to the local instrumental temperatures. About half their proxies have annual samples and half decadal. The proxies that correlate well with local (to the proxy) temperatures are then regressed against the local instrumental temperature record. That is, the local temperature is the independent variable, or the “measurements.” The next step is to simply average the local reconstructed temperatures to get the extratropical Northern Hemisphere mean. Thus, only minimal and necessary regression is used, so as not to blur the resulting reconstruction.
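In outline, the procedure just described can be sketched as follows. This is my own simplified Python rendering of the steps in the text, not the authors’ code, and the names are assumptions:

```python
import numpy as np

def loc_reconstruction(proxies, local_temps, cal_slice):
    """Sketch of the LOC idea: regress each proxy on its *local*
    instrumental temperature (temperature as the independent variable),
    invert that relation to reconstruct local temperature, then take a
    plain average of the local reconstructions."""
    local_recs = []
    for p, temp in zip(proxies, local_temps):
        # Indirect regression over the calibration interval:
        # proxy = a + b * local_temp
        b, a = np.polyfit(temp[cal_slice], p[cal_slice], 1)
        local_recs.append((p - a) / b)   # invert to temperature units
    # The hemispheric mean is a simple average; no further regression.
    return np.mean(local_recs, axis=0)
```

Because each proxy is regressed on temperature and then inverted, the local reconstructions keep their full variance; the only averaging is the final hemispheric mean.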

Discussion

Regression does reduce the statistical error in the predicted variable, but it also reduces variability significantly, by up to 50%. So, using regression to build a proxy temperature record to “prove” recent instrumentally measured warming is anomalous is disingenuous. The smoothed regression-based records in Figure 1 show Medieval Warm Period (MWP) to Little Ice Age (LIA) cooling of about 0.8°C; after correcting for the smoothing due to regression, this is more likely 1°C to 1.6°C, or more. There is additional high-frequency smoothing, or dampening, of the reconstruction due to poorly dated proxies.
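The 1°C to 1.6°C figures follow directly from the 20% to 50% underestimation quoted earlier:

```python
# If regression underestimates low-frequency variability by 20% to 50%,
# the true amplitude is the apparent amplitude divided by what remains.
apparent_cooling = 0.8                            # degrees C, MWP to LIA
low_estimate = apparent_cooling / (1.0 - 0.2)     # 20% lost -> 1.0 C
high_estimate = apparent_cooling / (1.0 - 0.5)    # 50% lost -> 1.6 C
```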

The more cleverly constructed Christiansen and Ljungqvist record (smoothed) shows a 1.7°C change, which is more in line with historical records, borehole temperature data, and glacial advance and retreat data. See the paper by Soon and colleagues for a discussion of the evidence (Soon, Baliunas, Idso, Idso, & Legates, 2003b). Christiansen and Ljungqvist stay much closer to the data in their analysis to avoid distorting it; this makes it easier to interpret. Figure 2 shows the same Christiansen and Ljungqvist 2012 curve shown in black in Figure 1, along with the yearly Northern Hemisphere averages.

Figure 2. Christiansen and Ljungqvist 2012 50-year smoothed reconstruction and the one-year reconstruction. The black line is the same as in Figure 1, but the scale is different.

The one-year reconstruction is the fine gray line in Figure 2. It is a simple average of Northern Hemisphere values and is unaffected by regression, so it is as close to the data as possible. Maximum variability is retained. Notice how fast temperatures vary from year to year, sometimes by over two degrees in just one year; 542 AD is an example. From 976 AD to 990 AD temperatures rose 1.6°C. These are proxies and not precisely dated, so the values are not exact and should be taken with a grain of salt, but they do show us what the data say, because they are minimally processed averages.

The full range of yearly average temperatures over the 2,000 years shown is 4.5°C. The full range of values with the 50-year smoothing is 1.7°C. Given that nearly half of the proxies used are decadal, and linearly interpolated to one year, I trust the 50-year smoothed record more than the yearly record over the long term. But seeing the variability in the one-year record is illuminating; it reinforces the foolishness of comparing modern yearly data to ancient proxies. Modern statistical methods and computers are useful, but sometimes they take us too far away from the data and lead to misinterpretations. I think that often happens with paleo-temperature reconstructions, and perhaps with modern temperature records as well.

It is quite possible that we will never know whether past climatic warming events were faster than the current warming rate; the high-quality data needed doesn’t exist. What we do know for sure is that regression methods, all regression methods, significantly reduce low-frequency variability. Mixing proxies with varying resolutions and imprecise dates, using regression, destroys high-frequency variability. Comparing a proxy record to the modern instrumental record tells us nothing. Figure 1 shows how important the statistical methods are; they are the key difference between those records, since all the authors had access to the same data.

Download the bibliography here.

Vuk
January 11, 2021 10:11 am

there is also Loehle’s reconstruction which well correlates to the Arctic’s magnetic field change
http://www.vukcevic.co.uk/LLa.htm

GoatGuy
Reply to  Vuk
January 11, 2021 11:40 am

That truly is a remarkable thing!  One bit tho’ that seems odd: at least as the graphs present, it seems that temperature change leads magnetic field change.  I would have thought it would be the other way around.  Temperature Δ in response to stratosphere and upper troposphere galactic high energy cosmic ray nuclei, as moderated by the magnetic field intensity.  

See what I mean?  It is kind of a big-hard-stretch to expect that Earth’s magnetic field would modulate in correlation to the mean global temperature.  Like ‘why?’

Yours,
GoatGuy

Vuk
Reply to  GoatGuy
January 11, 2021 12:32 pm

Hi GG
It is unlikely that the temperature variability would affect the Earth’s MF, while there is a possibility, or even probability, of the other way around.
I don’t think that either the global temperature or the magnetic field reconstructions before the mid-1800s are particularly accurate. Introducing a 10-year delay in the temperature’s ‘response’ and changing delta-t to 22 years (because the Earth’s magnetic field, for some unknown reason, has a strong Hale cycle spectral component, see inset in the link below), the inconsistency disappears, but a much longer ‘delay’ is apparent in coming out of the LIA.
Using more accurate global data from the 1870s to the present results in a slightly lower R^2
 http://www.vukcevic.co.uk/CT4-GMF.htm
(Link to the post 1870 magnetic data is included)

Jeff Labute
Reply to  Vuk
January 11, 2021 1:59 pm

Would it be possible that the sun’s MF modulates Earth’s magnetic field as well as temperatures, and that may be why Earth’s MF and temperatures may be in sync? It also seems somewhat coincidental to me that the magnetic north pole is moving so quickly during the time of a grand solar minimum.

UzUrBrain
Reply to  Vuk
January 11, 2021 4:02 pm

Could it be somewhat like the induction or RF heating of a large steel ball bearing? The air around the steel ball bearing will start warming up as soon as the induction coil is energized and it could take a few more minutes to heat up the mass of steel.
Could also be the Sun is influencing the movement of the magnet pole and thus changing the location of the void around the pole. The void could move much faster than the mass of magnetic field in the earth.

ATheoK
Reply to  Vuk
January 11, 2021 4:06 pm

I think similar thoughts every time I watch induction coils heat metals.

Greg
Reply to  Vuk
January 12, 2021 1:05 am

Thanks for the graphs.
Why do you use HadCruft4 ? Land+sea averages are physically meaningless. I suggest using an SST record.

As I pointed out years ago, HadSST3 bucket fiddling removed the majority of the variability from the majority of the record. In particular the early 20th c. rise.
https://judithcurry.com/2012/03/15/on-the-adjustments-to-the-hadsst3-data-set-2/

The principal effect of these adjustments is to selectively remove the majority of the long term variation from the earlier 2/3 of the data record and to disrupt circa 10-11y patterns clearly visible in the data. These changes are fundamentally altering the character of the original data.

Vuk
Reply to  Greg
January 12, 2021 2:42 am

Hi Greg
Thanks for the comment, I am aware of some of the points you made. Most of the temperature data available follow a similar up-down trajectory. In this particular case it is important to note the degree of correlation. If and when it can be shown that a sustainable causation hypothesis can be formulated, then it may be worth looking into the data source preferences.

January 11, 2021 10:25 am

Thank you Andy – I’ve enjoyed your many articles and posts.

I’ve studied climate science since ~1985 and published since 2002. Climate is a fascinating subject and I’m pleased with my conclusions to date and those of others who publish here on wattsup and on several other sites.

However, I may have written my last article, although I’ll probably continue to comment on blogs. My reasons are:
1   I think I’ve sorted most of the major climate-and-energy technical issues in my papers published from 2002 to 2020.
2   It has been obvious for decades that there is no real climate crisis.
3   It has been obvious for more than a decade that the alleged climate crisis is not just false – it’s a scam.
4   The technical issues are no longer the main event – the greatest need today is to fight the scam – the phony linking of the Climate-and-Covid false crises, and the fraudsters’ full-Marxist solution – the “Great Reset” – aka “Live like a Chinese serf, under the heel of a dictator”.

Best personal regards, Allan

Ron Long
January 11, 2021 10:27 am

Thanks, Andy. A good presentation of data in the act of being tortured by some and analyzed by others. Your fig. 1 is the key for me, it shows 2,000 years of declining temperature interrupted by two warming events (at 2,000 years and modern). This is in keeping with sea level variance as I understand it, and is likely the actual situation, i.e., the earth is sliding downward (in temperature) towards another glacial cycle in this Ice Age we live in. Let’s get as much CO2 into the atmosphere as we can, it is a buffer for humanity.

ATheoK
Reply to  Ron Long
January 11, 2021 4:12 pm

A modern warming that seems to be well within ‘adjustments’, without error bars.

January 11, 2021 10:34 am

PETER FOSTER: SUSTAINABLE NEWSPEAK BY 2050
Like the word ‘social,’ ‘sustainable’ tends to vitiate or reverse the meaning of words to which it is attached. Thus ‘sustainable’ development is development retarded by top-down control.
Peter Foster, Jan 05, 2021 
https://financialpost.com/opinion/peter-foster-sustainable-newspeak-by-2050?mc_cid=24866edf09&mc_eid=da89067c4f

Mr.
January 11, 2021 10:53 am

And for readers like nyolci who seem to get inspiration from Mannian climate performances, the $2 Stores are having a special this month on “100 Home Magic Tricks To Amaze All Your Friends And Family”

The first trick in the package is that they market it through the $2 Stores, but get you to pay $3 for it.

Editor
January 11, 2021 10:59 am

Is it just me, or are the images missing?

Vuk
Reply to  David Middleton
January 11, 2021 11:15 am

I can see 3 images.

Reply to  Andy May
January 11, 2021 12:39 pm

They’re visible now. Since the format change, it seems images are displaying differently.

Nick Schroeder
January 11, 2021 11:16 am

Once again, if the y-axis were scaled honestly these foreboding lines would vanish.
See attached example.

Lie w statistics.jpg
Rud Istvan
January 11, 2021 11:16 am

I long ago concluded that most of the paleoclimate stuff is not fit for purpose. This includes Mann 2008 and Marcott 2013. The reason is basic. We know the GAST rise from ~1920-1945 is virtually indistinguishable from ~1975-2000. Yet even the IPCC AR4 said the former period was mostly natural; there simply was not enough rise in CO2. This raises the attribution problem for the latter period.

In order for paleoclimate to shed light on such matters, it needs something of an equivalent resolution. Such resolution simply is not there, either in the underlying proxies or in their statistical synthesis into a paleoclimate guesstimate. Essay “Cause and Effect” in ebook Blowing Smoke deconstructs an example using Shakun’s 2012 paper. In the end, he produced an absurd statistical hash, provable from the SI.

The best solution for an exercise in futility is to simply stop. But that would mean no more grants.

David A
Reply to  Rud Istvan
January 11, 2021 7:25 pm

Rud says; “I long ago concluded that most of the paleoclimate stuff is not fit for purpose.”

I believe there was a climate gate email that essentially said the same thing in far more colorful language.

MJB
January 11, 2021 11:25 am

Thanks for another excellent article Andy – very clear presentation. If I understand correctly the choice of baseline is of no consequence to the relative comparison of the reconstructions, but I’m still left curious why you picked 1902-1973?

Jeff Alberts
January 11, 2021 11:28 am

All you need to know about Mann08, and all the others mentioned in this article, can be found here at Climate Audit – Mann 2008

Gerald Machnee
Reply to  Jeff Alberts
January 11, 2021 2:41 pm

Yes!!

Phil
Reply to  Jeff Alberts
January 11, 2021 8:18 pm

Some links at Climate Audit appear to be broken.

CORRECTIONS TO THE MANN et. al. (1998) PROXY DATA BASE AND NORTHERN HEMISPHERIC AVERAGE TEMPERATURE SERIES (McIntyre and McKitrick 2003) can be found at https://climateaudit.files.wordpress.com/2005/09/mcintyre.mckitrick.2003.pdf.

The abstract (emphasis added):

The data set of proxies of past climate used in Mann, Bradley and Hughes (1998, “MBH98” hereafter) for the estimation of temperatures from 1400 to 1980 contains collation errors, unjustifiable truncation or extrapolation of source data, obsolete data, geographical location errors, incorrect calculation of principal components and other quality control defects. We detail these errors and defects. We then apply MBH98 methodology to the construction of a Northern Hemisphere average temperature index for the 1400-1980 period, using corrected and updated source data. The major finding is that the values in the early 15th century exceed any values in the 20th century. The particular “hockey stick” shape derived in the MBH98 proxy construction – a temperature index that decreases slightly between the early 15th century and early 20th century and then increases dramatically up to 1980 — is primarily an artefact of poor data handling, obsolete data and incorrect calculation of principal components.

HOCKEY STICKS, PRINCIPAL COMPONENTS, AND SPURIOUS SIGNIFICANCE (McIntyre and McKitrick GRL 2005) can be found at http://www.climateaudit.info/pdf/mcintyre.mckitrick.2005.grl.pdf.

From the conclusions (emphasis added):

PC analyses are sensitive to linear transformations of data, even if such transformations only appear to be ‘‘standardizations’’. Here we have shown, in the case of MBH98, that a ‘‘standardization’’ step (that the authors did not even consider sufficiently important to disclose at the time of their study) significantly affected the resulting PC series. Indeed, the effect of the transformation is so strong that a hockey-stick shaped PC1 is nearly always generated from (trendless) red noise with the persistence properties of the North American tree ring network. This result is disquieting, given that the NOAMER PC1 has been reported to be essential to the shape of the MBH98 Northern Hemisphere temperature reconstruction.

THE M&M CRITIQUE OF THE MBH98 NORTHERN HEMISPHERE CLIMATE INDEX: UPDATE AND IMPLICATIONS (McIntyre and McKitrick EE 2005) can be found at https://climateaudit.files.wordpress.com/2009/12/mcintyre-ee-2005.pdf

The abstract (emphasis added):

The differences between the results of McIntyre and McKitrick [2003] and Mann et al. [1998] can be reconciled by only two series: the Gaspé cedar ring width series and the first principal component (PC1) from the North American tree ring network. We show that in each case MBH98 methodology differed from what was stated in print and the differences resulted in lower early 15th century index values. In the case of the North American PC1, MBH98 modified the PC algorithm so that the calculation was no longer centered, but claimed that the calculation was “conventional”. The modification caused the PC1 to be dominated by a subset of bristlecone pine ring width series which are widely doubted to be reliable temperature proxies. In the case of the Gaspé cedars, MBH98 did not use archived data, but made an extrapolation, unique within the corpus of over 350 series, and misrepresented the start date of the series. The recent Corrigendum by Mann et al. denied that these differences between the stated methods and actual methods have any effect, a claim we show is false. We also refute the various arguments by Mann et al. purporting to salvage their reconstruction, including their claims of robustness and statistical skill. Finally, we comment on several policy issues arising from this controversy: the lack of consistent requirements for disclosure of data and methods in paleoclimate journals, and the need to recognize the limitations of journal peer review as a quality control standard when scientific studies are used for public policy.

I would like to add that proxy reconstructions suffer from a significant flaw that makes calculating accurate confidence limits very difficult if not impossible. Typically a proxy is “calibrated” by comparison to the instrumental record. The problem with this is that the instrumental record only covers typically 5 to 10% of the range of the proxy. Thus the calibration of the “instrument” (the treemometer, in the case of tree rings), which determines its accuracy and precision, is only valid over a very small part of the range over which the instrument is “measuring” whatever it is measuring. There is no scientific basis for the assumption that this calibration is valid over the rest of the range nor is there any scientific basis for the assumption that the uncertainty measured as part of the calibration is an accurate estimate of the uncertainty over the rest of the range. Normally, the use of instrumentation outside its calibration range is considered unscientific. There is no cure for this shortcoming.

Joel O’Bryan
Reply to  Phil
January 11, 2021 9:42 pm

very nice review Phil of Mike Mann’s lies and distortions. mann is an outright fraud on science, and Science not only doesn’t care, it cheer leads on the deception and calls it “follow the science.”

Rory Forbes
Reply to  Joel O’Bryan
January 12, 2021 12:20 am

Mann is holding the entire coterie of “climate change” cheer leaders for ransom. If they throw him under the bus for his lack of sound science, and outright fraud, they’re all lost. It’s that word “unprecedented” that will do them in. It’s like their use of “consensus”. The first is patently untrue, based on massive amounts of evidence and thorough falsification of the “hockey stick”. Consensus is just a logical fallacy, two actually: appeal to population and appeal to authority.

Tom Abbott
Reply to  Rory Forbes
January 13, 2021 5:25 am

“Mann is holding the entire coterie of “climate change” cheer leaders for ransom. If they throw him under the bus for his lack of sound science, and outright fraud, they’re all lost.”

Good point. This is another impetus to keep the scam going.

Tom Abbott
Reply to  Phil
January 13, 2021 5:02 am

Excellent post, Phil. Thanks for the details and links.

Reply to  Jeff Alberts
January 12, 2021 8:37 am

The original MBH graph compared to a corrected version produced by MacIntyre and McKitrick after undoing Mann’s errors.

Historical review:

https://rclutz.wordpress.com/2018/03/11/rise-and-fall-of-the-modern-warming-spike/

Mann et al corrected.png
Jim Gorman
January 11, 2021 11:30 am

Andy, a good article. Regressions and averages both hide and smooth over critical knowledge of the data. Variability is often reduced to nothing by averaging and then smoothing.

I must disagree somewhat with one of your statements: “Regression does reduce the statistical error in the predicted variable, but it reduces variability significantly, up to 50%.” Regression can cause errors of its own that are statistical in nature. These are time series, and one must be conscious of the assumptions when combining different series. Stationarity is one assumption that is often overlooked. Read the following on regression errors for a little introduction.
https://www.stat.berkeley.edu/~stark/SticiGui/Text/regressionErrors.htm

Paul C
Reply to  Andy May
January 11, 2021 2:11 pm

I would rather say that if the relationship between the proxies and temperature does not hold true today, as with tree rings, it falsifies the proxy as a true record of the temperature. The instrumental record is so short that a brief correlation does not establish it as a proxy. Only a full correlation with all available data would indicate that it may be a true proxy.

Jim Gorman
Reply to  Andy May
January 11, 2021 2:59 pm

It’s also the fact that individual proxies must have the same statistical parameters, i.e., variance, means, etc. There are ways to handle these if they are different, but I’ll bet a dime to a donut that this was not done. Time series simply can’t be averaged, or a regression done, if the series don’t match.

ATheoK
Reply to  Jim Gorman
January 11, 2021 4:38 pm

Here is a copy if you wish to check: https://cp.copernicus.org/articles/8/765/2012/cp-8-765-2012.pdf

e.g.; “We use a reconstruction method, LOCal (LOC), that recently has been shown to confidently reproduce low-frequency variability. Confidence intervals are obtained by an ensemble pseudo-proxy method that both estimates the variance and the bias of the reconstructions.”

I would be more confident if they didn’t use pseudo-proxy methods.

You will note that B. Christiansen, F. C. Ljungqvist list multiple tree ring proxies in their Table 1.

Meaning they treat dendrochronology as accurate for temperature reconstructions.

Jim Gorman
Reply to  ATheoK
January 12, 2021 4:42 am

“ensemble pseudo-proxy method that both estimates” immediately tells me that they used multiple wrong trends and hoped to get a correct answer. There are mathematical methods available for combining trends without stationarity. Why didn’t they use those? Stock analysts use them all the time to investigate trends of different stocks to get a degree of accuracy in what the combination may do.

Tim Gorman
January 11, 2021 11:42 am

I really wish more attention would be paid to the uncertainty of the temperature record. Take the interval between 1600 and 1900 in Figure 2. Most of the temperature averages are between -0.5C and -1.5C. That is within the “nominal” uncertainty range of +/- 0.5C. Thus, while the 50-year trace shows an uptick from 1600 to 1900, it could actually have been a downtick. You simply don’t know because of the uncertainty in the record.

Even worse, if those are annual averages then the uncertainty is guaranteed to be more than +/- 0.5C.

All those data lines should be made with a magic marker the width of 1C. I’ve attached a graphic showing the use of a 1C wide pencil to follow the temperatures in Fig 2. If you understand that your “temperature” line can be anywhere in the black area you can darn near draw a horizontal line from year 0 to year 2000 with only small dips at 1300 and 1600. If you increase the width of the uncertainty to +/- .8C (absolutely not unreasonable) then you *can* draw that horizontal line at 0C.

You simply can *not* assume that a stated or calculated quantity is 100% accurate. That should violate all rules of physical science – but apparently not for climate science.

proxy_temp_graph.png
Mr.
Reply to  Tim Gorman
January 11, 2021 12:05 pm

Whoosh!

I’ve asked this layman’s question a number of times in various forums, but have never received a rational answer, viz –

if the proxy temps after 1960-something were found to show a decline that needed hiding by being supplanted by actual thermometer readings, then what’s to say that the previous centuries’ proxy temps were not also equally inaccurate?

David A
Reply to  Mr.
January 11, 2021 7:32 pm

Or the decline was real and called the Ice Age scare. And, like the real LIA and the MWP, it made fearsome, even fatal wounds to their CO2 done it theory.

Graemethecat
Reply to  David A
January 12, 2021 12:39 am

Warmunists are doing their best to expunge the Great Ice Age Scare of the 1960’s-1970’s from the historical record. They must have their noses rubbed in it at every opportunity.

Pat Frank
Reply to  Tim Gorman
January 11, 2021 12:36 pm

Tim, none of those proxies have any distinct physical relationship to temperature. Labeling the y-axis as ΔT (⁰C) is a lie.

It’s not Andy May’s lie. It’s Michael Mann’s lie and the rest of those folks. Consciously honorable (with an exception), lying by reflexive acceptance of prior pseudo-art.

The same conclusion of physical meaninglessness is obvious, too, from the fact that the so-called proxy ( 🤮 ) trends vary with statistical method. Physical results are not variable with subjective choices of statistical methods.

nyolci
Reply to  Tim Gorman
January 11, 2021 12:53 pm

That is the within the “nominal” uncertainty range of +/- 0.5C.

This “nominal uncertainty range of +/- .5C” is an artifact of your imagination. By the way, it was Jim who was bullshitting about this, I think you accidentally switched roles.

You simply can *not* assume that a stated or calculated quantity is 100% accurate.

No one assumed this. They calculated a 95% confidence interval. It’s explained in the article.

Even worse, if those are annual averages then the uncertainty is guaranteed to be more than +/- 0.5C.

Is it your favorite stick horse of “uncertainty is increased by averaging”? If so, lemme quote Jim: “Variability is often reduced to nothing by averaging and then smoothing.” Now it is very hard to argue that uncertainty and variability are very-very different things. NB. C&L2012 used annual or decadal values.

nyolci
Reply to  Andy May
January 11, 2021 1:42 pm

The true error is larger. People use Monte Carlo to estimate error and assume that is all the error.

Proclamations, as usual. In science you prove things. This guy just asserts an arbitrary value. You say it’s even more.

Sorry, there is still systemic error.

This bloke (and I) was specifically talking about uncertainty, not systemic errors. This debate between us has a history at other posts. Anyway, you again proclaim something without any proof. How on earth do you know there’s still systemic error? You have that strange feeling in your guts?

Most errors are systemic and unknown, this is especially true with temperature reconstructions.

Now I’d pull my usual methodology and appeal to authority by pointing out that you are NOT an authority in this field. Neither is Jim (who is confusingly very much like Tim).

Tim Gorman
Reply to  nyolci
January 11, 2021 3:15 pm

nyolci,

“Proclamations, as usual. In science you prove things. This guy just asserts an arbitrary value. You say it’s even more.”

It is virtually guaranteed to be even more. If you can’t quantify all the various factors that determine tree ring width (and other characteristics) then it is impossible to even make an estimate of the uncertainty in your stated value!

“This bloke (and I) was specifically talking about uncertainty, not systemic errors. This debate between us has a history at other posts. Anyway, you again proclaim something without any proof. How on earth do you know there’s still systemic error? You have that strange feeling in your guts?”

Once again you demonstrate your lack of understanding of physical science. The term is actually systematic uncertainty. While random uncertainty (i.e. variations in the measurement of the same thing using the same measurement device) can be estimated statistically, systematic error (independent measurement of different things) can be quantified only through research and analysis.

from this site: https://www.slac.stanford.edu/econf/C030908/papers/TUAT004.pdf

“Most measurements of physical quantities in high energy physics and astrophysics involve both a statistical uncertainty and an additional “systematic” uncertainty. Systematic uncertainties play a key role in the measurement of physical quantities, as they are often of comparable scale to the statistical uncertainties”

“Statistical uncertainties are the result of stochastic fluctuations arising from the fact that a measurement is based on a finite set of observations. Repeated measurements of the same phenomenon will therefore result in a set of observations that will differ, and the statistical uncertainty is a measure of the range of this variation. By definition, statistical variations between two identical measurements of the same phenomenon are uncorrelated, and we have well-developed theories of statistics that allow us to predict and take account of such uncertainties in measurement theory, in inference and in hypothesis testing”

“Systematic uncertainties, on the other hand, arise from uncertainties associated with the nature of the measurement apparatus, assumptions made by the experimenter, or the model used to make inferences based on the observed data.”
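A toy simulation makes the quoted distinction concrete: repeating a measurement of the same quantity shrinks the statistical scatter of the mean roughly as 1/sqrt(N), while a systematic offset survives any amount of averaging. This is only an illustrative sketch; the 0.3C bias and 0.5C scatter are made-up numbers, not values from any real instrument.

```python
import random

random.seed(42)

TRUE_VALUE = 20.0  # hypothetical true temperature, deg C
SIGMA = 0.5        # statistical (random) scatter of a single reading
BIAS = 0.3         # hypothetical systematic offset of the instrument

def reading():
    # each reading = truth + systematic bias + random noise
    return TRUE_VALUE + BIAS + random.gauss(0.0, SIGMA)

for n in (1, 100, 10000):
    mean = sum(reading() for _ in range(n)) / n
    # the statistical part shrinks like SIGMA / sqrt(n);
    # the 0.3 C systematic bias never averages away
    print(n, round(mean - TRUE_VALUE, 3))
```

Averaging drives the result toward 20.3 C rather than 20.0 C: statistics alone cannot reveal the bias, which is the point of the quote above.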

As I have pointed out to you before, even the Argo floats have an uncertainty associated with them, uncertainties that have nothing to do with the resolution of the actual sensor. So do field measurement devices. You simply don’t know if something has impacted the water or air flow in the SYSTEM and therefore the stated value read from the SYSTEM is uncertain. That uncertainty is not able to be reduced statistically. IT IS NOT RANDOM ERROR conducive to statistical analysis.

“Now I’d pull my usual methodology and appeal to authority by pointing out that you are NOT an authority in this field. Neither is Jim (who is confusingly very much like Tim).”

Authorities are authorities because they can back up what they say. Andy and Jim have certainly done so. I have given you reference after reference to back up what I say, references that actually do the math. All you have to offer is an Appeal to Authority – commonly known as name dropping. Name dropping proves nothing.

You can continue to proclaim your ignorance for everyone to see or you can actually do some research on the subject. Which you choose is up to you!

Jim Gorman
Reply to  nyolci
January 11, 2021 3:39 pm

Let me add that you never gave any kind of scientific explanation of what the uncertainty in a recorded measurement of 75 actually is. I’ll say again, that is a non-repeatable measurement that fades away in time never to be made again. You can never know where the mercury was other than between 74.5 and 75.5 degrees. That is uncertainty in measurement! It is not amenable to statistical treatment since it is a single measurement with no distribution surrounding it.

nyolci
Reply to  Jim Gorman
January 15, 2021 11:35 am

Let me add that you never gave any kind of scientific explanation of what the uncertainty in a recorded measurement of 75 actually is.

The usual bullshitting… Uncertainty depends on the type of the instrument and its calibration. Whether this 75 is in degrees F or degrees C matters too, so again, please be more precise. You (or Tim, I can’t distinguish you two) started to speak about systematic uncertainty but I’m pretty sure you don’t mean that (but you didn’t specify).
See, you can’t talk about the “uncertainty in a recorded measurement of 75”, this doesn’t make sense without specifying more. You gave a totally arbitrary “minimum uncertainty interval of +/- .5”. I guess you were talking about 75F, but then my simple household thermometer has a much better uncertainty of +/- .11 around 75F.

Tim Gorman
Reply to  nyolci
January 15, 2021 12:35 pm

You are blowing smoke! The calibration of field measurement stations often happens at infrequent intervals. Should be annually but many times isn’t. Argo floats get calibrated every five years if memory serves.

I’ve told you this before but the federal guidelines accept a +/- 0.6C uncertainty in their measurement stations. See the Federal Meteorological Handbook No. 1 for documentation.

The uncertainty on my high-priced Davis weather station is +/- 0.5C *when purchased* for temps above -7C and +/- 1C for temps under -7C. There is no guarantee for uncertainty after the 1 year warranty period. I sincerely doubt that a “simple household thermometer” stuck outside in the weather for a year will have a better uncertainty than my weather station. Even the inside temperature on the Davis console is only rated at +/- 0.3C.

Even most expensive indoor thermometers (e.g. the $50 Oria – check out Amazon) only have +/- 1C uncertainty – though they may list the “resolution” as 0.1C. Resolution and uncertainty are two totally different things – as you should be aware by now.

Pat Frank
Reply to  nyolci
January 11, 2021 3:56 pm

Produce the physical theory that converts a proxy metric into Celsius, nyolci.

You can’t do it. And neither can anyone else.

Absent that theory — and it is absent — the y-axis is physically meaningless.

What is the meaning of physical error in a physically meaningless number?

Systematic error in the air temperature record. Here. (869.8 kb pdf) And here. And more to come.

A demonstration that the people in the field are incompetent: here. (1 mb pdf).

You scoff at others who have strong engineering degrees, as not experts. In what are you expert? How do you know an engineer is not expert in physical error analysis?

And how can you possibly think that physical error in climatology is in any way handled differently than physical error in the rest of science and engineering? It isn’t.

You’re clueless all the while you expostulate, nyolci. Maybe that’s why you’re so confident. You have no idea that you have no idea.

nyolci
Reply to  Pat Frank
January 15, 2021 11:46 am

Produce the physical theory that converts a proxy metric into Celsius, nyolci.

I don’t have to. C&L have just done that. Mann has done that. etc. You’re asking for something that is commonplace in climate science.

You scoff at others who have strong engineering degrees, as not experts.

I have a “strong engineering degree” and I don’t think I’m an expert in climate science. I’m an expert in my field. That’s why I take climate scientists seriously. They are the experts of their field.

How do you know an engineer is not expert in physical error analysis?

Speaking about Jim? (or Tim?) They may be experts in their narrow fields but statistical analysis is beyond them. Good illustration: [JT]im thought “combining measurements” increases uncertainty (non-systematic). Now “combining”, if done properly, reduces uncertainty. This you learn during the first 1-2 years of any serious university engineering programme. They likely studied it but didn’t use it (or didn’t understand it in the first place).

And how can you possibly think that physical error in climatology is in any way handled differently than physical error in the rest of science and engineering? It isn’t.

Exactly. I don’t think it’s handled differently. I don’t even understand why you think I think it’s handled differently.

You’re clueless

Look, who is talkin… 🙂

Tim Gorman
Reply to  nyolci
January 15, 2021 1:26 pm

“I don’t have to. C&L have just done that. Mann has done that. etc. You’re asking for something that is commonplace in climate science.”

And those reconstructions have been debunked over and over. They made no attempt to account for confounding variables, and they made other mistakes in their statistical analysis as well.

“but statistical analysis is beyond them.”

Malarky. The uncertainty in single, one-time measurements cannot be analyzed using statistics. Why do you keep saying it can?

“[JT]im thought “combining measurements” increases uncertainty (non-systematic). Now “combining”, if done properly, reduces uncertainty. This you learn during the first 1-2 years of any serious university engineering programme. They likely studied it but didn’t use it (or didn’t understand it in the first place).”

It’s not obvious that you ever took any serious university engineering classes at all.

———————————————-
From “Data Reduction and Error Analysis”, 3rd Edition by Bevington and Robinson

A study of the distribution of the result of repeated measurements of the same quantity can lead to an understanding of these errors so that the quoted error is a measure of the spread of the distribution. However, for some experiments it may not be feasible to repeat the measurements and experimenters must therefore attempt to estimate the errors based on an understanding of the apparatus and their own skill in using it.
———————————————–

Temperature measurements are *NOT* repeated measurements of the same quantity.

How many times must that be repeated before it sinks in?

The uncertainty in single, independent temperature measurements must be estimated based on an understanding of the apparatus.

John Taylor in “An Introduction to Error Analysis” (Chapter 3.5) states: “When measured quantities are added or subtracted the uncertainties add”.

Think about calculating the circumference of a table top. First you measure the width and then the length. Those are independent measurements of different quantities. When you add them together, the uncertainties in each add. The overall uncertainty INCREASES, it does *not* decrease. There is simply no way to analyze each of those single measurements using statistics to reduce the uncertainty.

Now, taking a temperature reading at a station in Kansas City, MO and one in Olathe, KS are similar. They are independent measurements of different quantities (just as with the table) – therefore if you try to average them then you *must* increase the uncertainty of the sum by adding the uncertainties in quadrature – root sum square.

When you average the single, independent temperature readings at 100 stations you have to add their uncertainties in quadrature. I.e., (uncertainty) x (sqrt(100)) = 10 x uncertainty. Your +/- 0.5C uncertainty becomes +/- 5C.

That uncertainty simply overwhelms the ability to actually determine a difference of 0.1C from year to year!
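The quadrature arithmetic in the paragraphs above can be reproduced in a couple of lines. This sketch only reproduces the root-sum-square combination described for the sum of 100 readings; whether that figure, or the much smaller value obtained by dividing by the number of readings, is the right uncertainty for the average is exactly what is in dispute in this thread.

```python
import math

u_single = 0.5  # assumed uncertainty of one station reading, deg C
n = 100         # number of independent station readings

# root-sum-square (quadrature) combination for the SUM of the readings:
u_sum = math.sqrt(sum(u_single ** 2 for _ in range(n)))  # = u_single * sqrt(n)

print(u_sum)      # 5.0, i.e. 10 x the single-reading uncertainty
print(u_sum / n)  # 0.05, what dividing the combined uncertainty by n gives
```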

BTW, you never answered Jim’s question. What is the uncertainty of a reading of 75F at the Olathe, KS airport measurement station? Don’t run away. Don’t blow smoke up our butts. Just give a straightforward, simple answer. If you have no idea then just admit it!

Geoff Sherrington
Reply to  nyolci
January 11, 2021 6:46 pm

nyolci,
If you disagree with estimates by others of error or uncertainty, then it is incumbent upon you to quote your own preferred values.
I have been asking this question of Australia’s BOM for 6 years now, with no useful answer. Q: “If a person seeks to know the separation of two daily temperatures in degrees C that allows a confident claim that the two temperatures are different statistically, by how much would the two values be separated?”

nyolci, are you able to provide an answer? Geoff S

nyolci
Reply to  Geoff Sherrington
January 15, 2021 12:07 pm

If you disagree with estimates by others of error or uncertainty, then it is incumbent upon you to quote your own preferred values.

Hm, I don’t have preferred values. I pointed out that all the scientific papers mentioned here gave the errors usually as 95% confidence intervals.

If a person seeks to know the separation of two daily temperatures in degrees C that allows a confident claim that the two temperatures are different statistically, by how much would the two values be separated?

Another bullshit question. We have to know the thermometer’s type and calibration for this. Actually, in a sense we can never say this is bigger than that, ‘cos we are here dealing with probability distributions that null out only asymptotically. But we can tell with a certain probability that is very high.
But if we take J/Tim’s bullshit seriously then we can use a continuous uniform distribution with an interval length of 1(C?). Because he/they think the actual value can be anywhere in this interval (see the thick line example above) uniformly. Actually, uncertainty is the standard deviation of the (possibly empirical) probability distribution, so the actual interval length is ~3.46C but doesn’t matter. Then we need a separation of 1C (or ~3.46C).
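The ~3.46C figure above follows from the standard deviation of a continuous uniform distribution, which is width/sqrt(12): setting the standard deviation to 1C implies a width of sqrt(12) ≈ 3.46C. A quick numerical check (the width-1 interval is just the thread’s +/- 0.5 example; no position is taken here on whether a uniform distribution is the right model):

```python
import math
import random

# the standard deviation of a uniform distribution of width w is w / sqrt(12)
def uniform_sd(width):
    return width / math.sqrt(12)

print(round(uniform_sd(1.0), 3))  # 0.289: a +/-0.5 interval has sd ~0.29
print(round(math.sqrt(12), 3))    # 3.464: the width needed for sd = 1

# Monte Carlo sanity check on a width-1 interval
random.seed(1)
samples = [random.uniform(-0.5, 0.5) for _ in range(200_000)]
mean = sum(samples) / len(samples)
sd = math.sqrt(sum((x - mean) ** 2 for x in samples) / len(samples))
print(round(sd, 3))               # close to 0.289
```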

Tim Gorman
Reply to  nyolci
January 15, 2021 2:02 pm

“Hm, I don’t have preferred values. I pointed out that all the scientific papers mentioned here gave the errors usually as 95% confidence intervals.”

What errors? What scientific papers? I *never* see any uncertainty interval quoted in any of the climate science papers.

You need to stop digging the hole you are standing in.

Uncertainty is usually defined as the 95% confidence interval for the true value. But if you never quantify the uncertainty then the term “confidence interval” is meaningless!

See: http://www.physics.pomona.edu/sixideas/old/labs/LRM/LR03.pdf

———————————–
“How can one quantify uncertainty? For our purposes in this course, we will define a value’s uncertainty in terms of the range centered on our measured value within which we are 95% confident that the “true value” would be found if we could measure it perfectly. This means that we expect that there is only one chance in 20 that the true value does not lie within the specified range. This range is called the 95% confidence range or 95% confidence interval.

The conventional way of specifying this range is to state the measurement value plus or minus a certain number. For example, we might say that the length of an object is 25.2 cm ± 0.2 cm: the measured value in this case is 25.2 cm, and the uncertainty U in this value is defined to be ±0.2 cm. The uncertainty thus has a magnitude equal to the difference between the measured value and either extreme edge of the uncertainty range. This statement means that we are 95% confident that the measurement’s true value lies within the range 25.0 cm to 25.4 cm.”
—————————————————-
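The quoted convention can also be checked by simulation. The snippet assumes normally distributed measurement error (a common assumption, though the quote itself does not require it), so that U corresponds to about 1.96 standard deviations; the “true” length of 25.18 cm is made up purely for illustration.

```python
import random

U = 0.2                 # the quote's example: +/- 0.2 cm
sigma = U / 1.96        # normal-error assumption: U ~ 1.96 sigma
true_value = 25.18      # hypothetical true length, cm

random.seed(3)
trials = 20_000
hits = 0
for _ in range(trials):
    measured = true_value + random.gauss(0.0, sigma)
    # does the stated range (measured +/- U) contain the true value?
    if measured - U <= true_value <= measured + U:
        hits += 1

print(round(hits / trials, 2))  # close to 0.95, the quoted confidence level
```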

“Another bullshit question. We have to know the thermometer’s type and calibration for this. Actually, in a sense we can never say this is bigger than that, ‘cos we are here dealing with probability distributions that null out only asymptotically. But we can tell with a certain probability that is very high.”

Individual, single, independent temperature readings do *NOT* have a probability distribution! You keep falling back on the idiocy that you can take multiple readings of the temperature and use the central limit theorem to calculate an ever more accurate mean.

You simply can’t do that. When you take a temperature measurement that quantity is gone, fini, disappeared into the 4th dimension – never to return. The mean of that reading is the reading itself. The standard deviation of that reading is zero. There is no variance since there is only one data point in the data set so there is no standard deviation either.

I’ve given you the Federal Meteorology Handbook No. 1 that specifies the type and calibration standards for federal measurement stations. So why are you still quibbling about thermometer type and calibration?

“But if we take J/Tim’s bullshit seriously then we can use a continuous uniform distribution with an interval length of 1(C?). Because he/they think the actual value can be anywhere in this interval (see the thick line example above) uniformly.”

The definition of a continuous uniform distribution is where each point in the interval has an equal chance to happen. When did anyone say this? An uncertainty interval has *NO* probability distribution at all. Not even a uniform one. Who knows if all points in the interval have an equal probability of being the true value? I don’t. You might think you do but you don’t. As the quoted document above states, the uncertainty interval is just that interval we are 95% confident contains the true value – nothing more. It specifies *nothing* about a probability distribution for the values in that interval.

That black line you speak of is the uncertainty interval. The true value could be *anywhere* in the interval. That uncertainty interval does *not* define a probability distribution – period, exclamation point.

“Actually, uncertainty is the standard deviation of the (possibly empirical) probability distribution, so the actual interval length is ~3.46C but doesn’t matter. Then we need a separation of 1C (or ~3.46C).”

How do you have a variance, standard deviation, and mean in one, single, independent measurement? It’s a data set of size one. If you have none of those then you have no probability distribution. And an uncertainty interval is *NOT*, let me repeat, IS NOT a probability distribution.

Jim Gorman
Reply to  nyolci
January 12, 2021 10:05 am

“Now I’d pull my usual methodology and appeal to authority by pointing out that you are NOT an authority in this field. Neither is Jim (who is confusingly very much like Tim).”

Am I an authority? You bet your butt I am not. That is why I make sure I can provide technical references that provide the math and explanations for the claims I make about uncertainty. I should note that I have yet to see one technical reference from you that backs up what your claimed climate scientist authorities have done.

The other part of the response is that while I may not be an expert, I sure am knowledgeable. Have you ever designed wideband RF amplifiers with a given noise figure and gain? Dealt with measuring devices that were not precise enough or exactly accurate? How about matching output and input impedance so you don’t spoil the noise figure and gain?

I recently installed all new real wood trim, base and crown, and doors in my house. I can assure you frame carpenters and drywall folks have little knowledge about measurement uncertainty, or certainty for that matter. Try installing a door where the top frame is not level and one wall leans one way and the other wall leans the other way. Try cutting base trim where the walls both lean AND don’t meet at 90 degrees. Add to that a mitre saw’s inability to precisely repeat angled cuts and you’ll soon learn about uncertainty. You’ll also learn to appreciate silicone caulk too! Oh, and I forgot operator error also!

You don’t appear to have dealt with any of these issues in a professional job. I suspect that is why you only refer to expert opinion.

nyolci
Reply to  Jim Gorman
January 15, 2021 12:11 pm

I should note that I have yet to see one technical reference from you that backs up what your claimed climate scientist authorities have done.

See the articles referenced in Andy’s writing above. Like C&L2012 or Mann2008.

How about matching output and input impedance so you don’t spoil the noise figure and gain?

I’m pretty sure you’re good in this. What you wrote about statistical analysis revealed you were not good in that.

You don’t appear to have dealt with any of these issues in a professional job.

I have to confess my sins, yes… 🙂

Tim Gorman
Reply to  nyolci
January 11, 2021 2:50 pm

nyolci,

“This “nominal uncertainty range of +/- .5C” is an artifact of your imagination. By the way, it was Jim who was bullshitting about this; I think you accidentally switched roles.”

No figment here! If our best measuring stations have a +/- 0.5C uncertainty then tree ring proxies *CERTAINLY* have a wider uncertainty range! I was being unbelievably trusting in applying only +/- 0.5C to the tree ring proxy.

“No one assumed this. They calculated a 95% confidence interval. It’s explained in the article.”

Then why is there nothing in the report figures about the uncertainty? I don’t think you actually know what a confidence interval is. A confidence interval can’t tell you what the actual true value is. It just tells you that it is probably somewhere in an interval. And proxies don’t provide enough data points to actually know if the probability distribution around each individual value is normal or not! There are too many confounding factors to actually do even an accurate estimate! Was it temperature (e.g. a late or early frost) or precipitation or insects or surrounding tree density (i.e. shade) that actually determined the width of each tree ring? Do *YOU* know from 2000 years ago? Does *anyone* know from 2000 years ago? That is why it is so important to spend significant effort in determining what uncertainty should be applied!

“Is it your favorite stick horse of “uncertainty is increased by averaging”? If so, lemme quote Jim: “Variability is often reduced to nothing by averaging and then smoothing.” Now it is very hard to argue that uncertainty and variability are very-very different things. NB. C&L2012 used annual or decadal values.”

It’s not a “stick horse”. It is actual physical science. Uncertainty and variability ARE very-very different things!

How do you get annual or decadal values? Are they calculated or measured? If they are calculated then the uncertainty of values used in the calculation must be propagated along with the calculation itself.

Variability is how much things change in the short term. Uncertainty lays out how well you can measure that variability. If you think your stated value of variability from natural causes is 0.2C but your uncertainty in each measurement used to calculate that variability is 0.5C then you don’t actually know *what* the true value of the variability is.

What you want us to believe is that when you say a 2″x4″ stud is 96″ long +/- 0.25″ then it really means that it is exactly 96″ long. You want to throw away the uncertainty in your measurement. *THAT* is what the climate scientists are doing!

Rory Forbes
Reply to  Tim Gorman
January 12, 2021 12:38 am

Note … a pre-cut stud for an 8′ wall is 92 5/8 inches … exactly. There are three 1.5 inch plates to account for and deduct.

Tim Gorman
Reply to  Rory Forbes
January 12, 2021 4:16 am

Rory,

That is to allow for the double 2″x4″ top plate and the single 2″x4″ bottom plate in a stud wall. So substitute 92 5/8″ for 96. My statement still holds.

Jim Gorman
Reply to  nyolci
January 11, 2021 3:29 pm

You really do need to learn the difference between uncertainty and error. THEY ARE NOT THE SAME THING. Please obtain this book and learn the information in it. Until you do, you are going to appear ignorant to the folks here who deal with these two different things: An Introduction to Error Analysis: The Study of Uncertainties in Physical Measurements by Dr. John R. Taylor.

The real physical variability is not changed, but it is hidden by the averaging and smoothing process.

Here are some references you can give us an argument about. Please don’t use a simple appeal to authority that says these people know what they are doing! Don’t be stupid, read and study these references to learn something about which you are trying to appear knowledgeable.

====================================
“Variances must increase when two variables are combined: there can be no cancellation because variabilities accumulate.”
https://intellipaat.com/blog/tutorial/statistics-and-probability-tutorial/sampling-and-combination-of-variables/

====================================
“We can form new distributions by combining random variables. If we know the mean and standard deviation of the original distributions, we can use that information to find the mean and standard deviation of the resulting distribution.
We can combine means directly, but we can’t do this with standard deviations. We can combine variances as long as it’s reasonable to assume that the variables are independent.

  • Make sure that the variables are independent or that it’s reasonable to assume independence, before combining variances.
  • Even when we subtract two random variables, we still add their variances; subtracting two variables increases the overall variability in the outcomes.
  • We can find the standard deviation of the combined distributions by taking the square root of the combined variances.”

https://www.khanacademy.org/math/ap-statistics/random-variables-ap/combining-random-variables/a/combining-random-variables-article
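The rules quoted above can be verified with a small simulation: even when two independent variables are subtracted, their variances add, and the combined standard deviation is the square root of the combined variance. A sketch (the means and spreads are arbitrary illustration values):

```python
import math
import random

random.seed(7)
N = 100_000

# two independent variables with different variances
x = [random.gauss(10.0, 2.0) for _ in range(N)]  # Var(X) = 4
y = [random.gauss(5.0, 3.0) for _ in range(N)]   # Var(Y) = 9

def var(values):
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values) / len(values)

diff = [a - b for a, b in zip(x, y)]

# even for a DIFFERENCE the variances add: Var(X - Y) = 4 + 9 = 13
print(round(var(diff), 1))             # ~13
print(round(math.sqrt(var(diff)), 2))  # ~3.61 = sqrt(4 + 9), the combined sd
```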
==================================

The root sum of squares is the way that combines the standard uncertainties of more than one contributor to provide our overall combined uncertainty. This is not influenced by the number of measurements we take to determine our standard uncertainty and there is no division by the number of measurements involved.
https://pathologyuncertainty.com/2018/02/21/root-mean-square-rms-v-root-sum-of-squares-rss-in-uncertainty-analysis/#:~:text=The%20root%20sum%20of%20squares%20is%20the%20way,no%20division%20by%20the%20number%20of%20measurements%20involved.

==================================

And here is a reference showing how to combine random variables with different variances.

Combined Variance | eMathZone

Pat Frank
Reply to  nyolci
January 12, 2021 6:00 pm

Let’s notice that nyolci is silent on the basic question of physical meaning.

His other arguments are mere distractions.

nyolci
Reply to  Pat Frank
January 15, 2021 12:45 pm

You really do need to learn the difference between uncertainty and error.

I didn’t speak about error. It was Andy with “systemic error”. The rest of your rant is irrelevant but I think I know where you got the feeling you have:

The root sum of squares is the way that combines the standard uncertainties of more than one contributor to provide our overall combined uncertainty.

Yes. And variance halves when averaging two independent variables with the same distribution. This does decrease standard deviation.

Tim Gorman
Reply to  nyolci
January 15, 2021 2:14 pm

If you mean two independent variables with the same mean and standard deviation then when you add them (in order to average them) the variance doubles and the standard deviation goes up by sqrt(2).

———————————–
The Pythagorean Theorem of Statistics

Quick. What’s the most important theorem in statistics? That’s easy. It’s the central limit theorem (CLT), hands down. Okay, how about the second most important theorem? I say it’s the fact that for the sum or difference of independent random variables, variances add:

For independent random variables X and Y,
Var(X +/- Y) = Var(X) + Var(Y)
———————————–

If the distributions are the same then Var(X) = Var(Y) and Var(X +/- Y) = 2Var(X) = 2Var(Y).

If you then divide by 2 (in order to do an average) you wind up with Var(X) = Var(Y). The variance does *not* halve and it does not decrease standard deviation.

And none of this applies to uncertainty because uncertainty has no variance, standard deviation, or mean!

fred250
January 11, 2021 12:01 pm

Large uncertainties in temperature

Large uncertainties in time

Large uncertainty about whether tree rings are useful for temperature at all.

Leave out those tree rings and you can get something very different.

Sea proxies also show a distinctly warmer period.

Especially in the Arctic

tonyb
Editor
January 11, 2021 12:02 pm

Andy

Nice article

This graphic is taken from my article on the Intermittent Little Ice Age and uses CET (including my extension to 1539).

The graphic looks at the temperatures experienced in England over a 70-year lifetime. It bears a close resemblance to your figure 2. CET is often said to be a reasonable, but not perfect, proxy for at least the Northern Hemisphere, and some argue for a wider area.

slide4.png (720×540) (wordpress.com)

nyolci
January 11, 2021 12:19 pm

I made headlines!!!
We can feel Andy’s awkwardness from his writing ‘cos this paper does not confirm his assertions, however hard he’s trying to push it. Two quotes below are from the conclusions section. The paper states many more times that its results are in agreement with previous work. IMHO it’s a refinement for the extra-tropical NH.

The level of warmth during the peak of the MWP in the second half of the 10th century, equaling or slightly exceeding the mid-20th century warming, is in agreement with the results from other more recent large-scale multi-proxy temperature reconstructions by Moberg et al. (2005), Mann et al. (2008, 2009), Ljungqvist (2010), and Ljungqvist et al. (2012).

[Discusses LIA temporal variation] This temporal variation of the temperature throughout the LIA is in line with most previous work. Most regional to global multi-proxy temperature reconstruction studies agree that the 17th century was the coldest century during the LIA (Ljungqvist, 2010; Ljungqvist et al., 2012; Hegerl et al., 2007; Mann et al., 2008, 2009; Moberg et al., 2005; National Research Council, 2006) […]

Please note that we are 1C above the mid-20th century temperature; furthermore, this paper is not a good choice against Mann w/r/t the MWP (even though it reconstructs a slightly higher peak), for other reasons too:

Our two-millennia long reconstruction has a well-defined peak in the period 950–1050 AD with a maximum temperature anomaly of 0.6◦C. […] The reconstructions of Mann et al. (2008, 2009) show a longer peak warming covering the whole period 950–1100 AD

Andy, being The Andy, can’t help himself. He is coming up with some tiring bs:

Notice how fast temperatures vary from year-to-year, sometimes by over two degrees in just one year, 542AD is an example.

One is justifiably suspicious about these sudden 2C drops ‘cos they usually signify strong volcanic eruptions, confirmed indeed for the 540s. This is beyond normal variability, and these events are well identifiable in proxies; nothing special for C&L2012.
More Andyisms:

But, seeing the variability in the one-year [paleo]record is illuminating, it reinforces the foolishness of comparing modern yearly data to ancient proxies.

Thank you, Andy. We didn’t know that. Now we do, thank you. Seriously, do you really believe scientists think a reconstructed temperature series is usable the same way as the instrumental record?

It is quite possible that we will never know if past climatic warming events were faster than the current warming rate or not. […] What we do know for sure, is that regression methods, all regression methods, significantly reduce low-frequency variability.

Current warming is way faster than what can be characterized as “low frequency”. And it’s getting even faster.

fred250
Reply to  nyolci
January 11, 2021 12:31 pm

Another EVIDENCE FREE rant…

YAWN !

Warming is now COOLING!

Many places haven’t warmed at all this century.

Trends have not increased… unless data gets “adjusted”.

You FAILED again, nye !!

Gerald Machnee
Reply to  fred250
January 11, 2021 2:45 pm

Yes, 70 year cycles. Should be cooling now.

Graemethecat
Reply to  Gerald Machnee
January 12, 2021 12:46 am

Judging from the weather right now we are already witnessing some cooling.

fred250
Reply to  nyolci
January 11, 2021 12:36 pm

MWP existed globally, and was warmer than now

GET OVER IT !

Stop your ignorant and childish, evidence-free Climate Change DENIAL.


nyolci
Reply to  Andy May
January 11, 2021 1:21 pm

You do realize that Michael Mann, et al. did exactly that when he spliced the instrumental record onto his reconstruction.

🙂 I was talking exactly about this 😉 Scientists know very well how to use these reconstructions with the instrumental record; they don’t mix up what should not be mixed up, they don’t get confused. Unlike you. Mann’s method has been explained numberless times. Well, you should be more careful; I didn’t say you could never compare them or anything like that. I only said scientists didn’t think they were usable in the same way.

BTW, there’s another small error in your text:

[Mann’s] infamous conclusion, which was rejected by the National Research Council and others, was as follows.

It wasn’t rejected.

Back to business, I think you start to realize by now that bringing up C&L was an error, they reinforce science. No wonder you only addressed a single (misunderstood) sentence from my post.

Last edited 2 months ago by nyolci
nyolci
Reply to  Andy May
January 11, 2021 2:15 pm

It was rejected

No, it wasn’t. It noted some minor uncertainties; that’s what you lifted out of context. But by and large:
“The basic conclusion of Mann et al. (1998, 1999) was that the late 20th century warmth in the Northern Hemisphere was unprecedented during at least the last 1,000 years. This conclusion has subsequently been supported by an array of evidence that includes both additional large-scale surface temperature reconstructions and pronounced changes in a variety of local proxy indicators.”
and
“Based on the analyses presented in the original papers by Mann et al. and this newer supporting evidence, the committee finds it plausible that the Northern Hemisphere was warmer during the last few decades of the 20th century than during any comparable period over the preceding millennium.”
But you again try to use science to deny science. This report, as expected, confirmed our (now almost religious 🙂 ) faith in climate science, including such things as models, the observational confirmation of warming, and the worth of reconstructions.

I didn’t misunderstand your statement.

Well, either you did or you pretend you did; the latter because it is the simplest way to duck the question of whether you are embarrassed to discover that C&L 2011+12 can’t be used for denial.

Weekly_rise
Reply to  Andy May
January 12, 2021 7:52 am

What is the difference? Can you quantify it?

fred250
Reply to  nyolci
January 12, 2021 12:01 am

“simplest way to duck answering the question”

…..Is the COWARDLY d’nyholist way

Just avoid answering at all.

Let’s watch the d’nyholist avoid answering completely, yet again 😀

1… Do you have any empirical scientific evidence for warming by atmospheric CO2?

2… In what ways has the global climate changed in the last 50 years that can be scientifically proven to be of human-released CO2 causation?

Tom Abbott
Reply to  nyolci
January 13, 2021 5:39 pm

“The basic conclusion of Mann et al. (1998, 1999) was that the late 20th century warmth in the Northern Hemisphere was unprecedented during at least the last 1,000 years. This conclusion has subsequently been supported by an array of evidence that includes both additional large-scale surface temperature reconstructions and pronounced changes in a variety of local proxy indicators.”

Regional surface temperature charts refute Mann’s claim that 1998 was the hottest year in a 1,000 years.

Regional surface temperature charts show the Early Twentieth Century was just as warm or warmer than 1998.

James Hansen said the decade of the 1930’s was the hottest decade and 1934 was hotter than 1998.

Michael Mann should have talked to Hansen before making this ridiculous “hockey stick” claim.

Gerald Machnee
Reply to  nyolci
January 13, 2021 8:14 pm

**This conclusion has subsequently been supported by an array of evidence that includes both additional large-scale surface temperature reconstructions and pronounced changes in a variety of local proxy indicators.**

It has not been supported by any legitimate science. His friends changed a couple of proxies but retained the main problem and claimed SUCCESS!! Then the nonsense was refuted by Steve McIntyre.

Phil
Reply to  nyolci
January 11, 2021 8:43 pm

Scientists know very well how to use these reconstructions with the instrumental record, they don’t mix up what should not be mixed up, they don’t get confused.

“Scientists”: Which scientists? When? In what publication? Be specific. Improper references are unscientific. So is poor grammar.

Mann’s method has been ~~explained~~ refuted numberless times.

There, I fixed it for you.

Well, you should be a more careful, I didn’t say you could never compare or whatever these. I only said scientists didn’t think they were usable in the same way.

“Scientists”: Again, please be specific as to which scientists, when and in what publication.

DHR
Reply to  nyolci
January 11, 2021 12:52 pm

“Current warming is way faster than what can be characterized as “low frequency”. And it’s getting even faster.”

There is no acceleration in the satellite lower troposphere record, only in the homogenized surface record. One wonders why.

Graemethecat
Reply to  DHR
January 12, 2021 12:51 am

Notice how Warmunists like A-holist have moved the goalposts once more. The problem is no longer the temperature (which refuses to cooperate), but the rate of increase.

Meab
Reply to  nyolci
January 11, 2021 2:11 pm

Nyolci, one of the best integrating proxies for global temperature is sea-level rise averaged over periods longer than El Nino/La Nina. Warming causes both seawater expansion and the melt of land-grounded glacial ice, which also adds to sea-level rise. Water impoundment and aquifer withdrawal are small, typically needing only small short-term corrections. The only other correction needed is for land subsidence, but that can be done accurately with a site-specific linear adjustment. That adjustment doesn’t affect acceleration estimates, so we can confidently assess acceleration. Sea-level rise showed fast acceleration after the end of the Little Ice Age, starting after 1850 (before CO2 started to rise), and slow acceleration through the early 1900s. There is very, very little to no acceleration now, indicating that there is likely no acceleration in tropospheric temperatures. That and the satellite-based temperature record indicate that you’re badly mistaken in your (unsupported) claim of an accelerating temperature rise.

Clyde Spencer
Reply to  Meab
January 11, 2021 6:10 pm

See the accompanying sea level rise graph from Climate Etc.:

post-glacial_sea_level[1].png
Last edited 2 months ago by Clyde Spencer
nyolci
Reply to  Andy May
January 11, 2021 3:01 pm

The reconstructions show that the MWP is as warm or warmer than the mid-twentieth century, true.

At last!

You say we are one degree warmer than the mid-twentieth century now. I say so what?

At last you accept facts!

What happens in the next 50 years? […] Climate is long-term.

Yep. And we have a long-term instrumental record and very good reconstructions. That’s why we know with high confidence (these are quantifiable things, quantified by people knowledgeable in these matters) that the cause is the anthropogenic build-up of greenhouse gasses.

What volcanic eruption was in 542AD?

Fcuk knows. I wasn’t there. It was in Iceland. They found it out using Swiss ice cores. It was a bit before 542. FYI, the “year without a summer” phenomenon has long been connected with volcanic eruptions.

Why is […] a 1.6 degree rise between 976 and 990AD insignificant and a one degree rise between 1950 and 2021 a big deal?

What 1.6 degree rise between 976 and 990? 🙂 You mix up a trend and the variability on it (the “grey line”). That was a 0.16 C/decade warming, less than what we have today, and much less than what we have had in the last few decades, which is accelerating. And the latter is scientifically proven not to be variability.

Where is your perspective?

In the science.

Current warming is faster than low frequency?

Yes. It’s evident on time scales of less than 30-50 years (the rule-of-thumb low-frequency limit) and proven not to be the result of variability.

If you believe the paleo temperature record and think that it can be compared to todays temperatures, be consistent.

Tryin hard 🙂 The good thing in science is that you have the evidence. That is why, for comparisons, we should use the smoothed record, not the grey lines you love so much. For today, we have the appropriate smoothing too. Scientists have worked these out, and as a result we can see the actual trend as opposed to variability.

Or maybe, the data for 976-990 is not very accurate? If that is the case, then how can we compare 1950-2021 to anything in the paleo record?

As above. Now we have quite a clear picture of what constitutes variability and what constitutes trend, both in the reconstructions and nowadays.

Last edited 2 months ago by nyolci
Mike
Reply to  nyolci
January 11, 2021 4:30 pm

“You say we are one degree warmer than the mid-twentieth century now. I say so what?

At last you accept facts!”

The far north (at the very least) was WARMER than T-O-D-A-Y. Lots of EMPIRICAL evidence demonstrates this.

The far south too……”Reporting in the peer-reviewed journal Geology, scientists encountered what appeared to be the fresh remains of Adelie penguins in a region where penguins are not known to live. Carbon dating showed the penguin remains were approximately 800 years old, implying the remains had very recently been exposed by thawing ice”

If you disagree, please provide proof instead of simple hand waving or a proxy reconstruction. Your bullshit is getting boring.

Last edited 2 months ago by Mike
Clyde Spencer
Reply to  nyolci
January 11, 2021 6:19 pm

When you rely on smoothed time-series data, you only know what the average climate was like, and not what the short term variations were. That is, if you are relying on anomalies and the temperature anomaly one year was +2 and then the next year was -2, the average (0) makes it appear that there was no change. However, it was one Hell of a ride for the farmers over those two years.
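Clyde’s point can be shown with a toy Python sketch (the series, the noise level, and the 50-year window are all invented for illustration): smoothing preserves the average but erases the ride.

```python
import random
import statistics

random.seed(42)

# 200 years of synthetic annual anomalies: no trend, just year-to-year noise
annual = [random.gauss(0.0, 1.0) for _ in range(200)]

def moving_average(xs, window):
    """Simple trailing moving average, like a 50-yr smoothed proxy curve."""
    return [sum(xs[i:i + window]) / window for i in range(len(xs) - window + 1)]

smoothed = moving_average(annual, 50)

print(f"raw annual std dev:     {statistics.pstdev(annual):.2f}")
print(f"50-yr smoothed std dev: {statistics.pstdev(smoothed):.2f}")
# A +2 year followed by a -2 year averages to roughly 0 in the smoothed
# record, even though it was "one Hell of a ride" at annual resolution.
```

The smoothed curve retains only a small fraction of the year-to-year swing, which is exactly the information the farmers would have cared about.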

fred250
Reply to  nyolci
January 11, 2021 7:48 pm

“The good thing in science that you have the evidence.”

We are all STILL WAITING for you to produce any scientific evidence, d’nyholist.

It seems that YOU don’t actually have any evidence.

“And the latter is scientifically proven not to be variability.”

RUBBISH!

“that the cause is anthropogenic build up of greenhouse gasses.”

STILL WAITING for your evidence, d’nyholist. !

Evidence-free norwegian-blue “blah, blah”, is NOT science..

….. but seems to be all you have, is that right, d’nyholist !!!

Would you like to run away yet again?? Everyone is watching and LAUGHING. 😆

1… Do you have any empirical scientific evidence for warming by atmospheric CO2?

2… In what ways has the global climate changed in the last 50 years that can be scientifically proven to be of human-released CO2 causation?

fred250
Reply to  Andy May
January 11, 2021 7:22 pm

“You say we are one degree warmer than the mid-twentieth century now. I say so what?”

And you are PROVABLY WRONG, d’nyholist

comment image

comment image

comment image

Maybe it is somewhere, in the middle of an urban cluster

But globally.. NOPE

And the warming since the LIA has absolutely zero human cause except Urban warming smeared all over the place where it doesn’t belong.

There is NO EVIDENCE of warming by human-released atmospheric CO2

If you think there is, then have the guts to at least attempt to answer these two questions.. WITH EVIDENCE..

Oh wait.. you don’t “believe” in scientific evidence, do you., d’nyholist. !

1… Do you have any empirical scientific evidence for warming by atmospheric CO2?

2… In what ways has the global climate changed in the last 50 years that can be scientifically proven to be of human-released CO2 causation?

fred250
Reply to  fred250
January 11, 2021 8:08 pm

Oops,

third graph was meant to be this one

comment image

Let’s add a couple from around the world .

comment image

comment image

Last edited 2 months ago by fred250
fred250
Reply to  fred250
January 11, 2021 8:14 pm

And a couple more for good measure

Andes, South America

comment image

Central Asia

comment image

Central Siberia.

comment image

Tom Abbott
Reply to  Andy May
January 13, 2021 5:51 pm

“You say we are one degree warmer than the mid-twentieth century now.”

I’m wondering where nyolci got this one-degree C figure?

My understanding is that when we reached the high-point temperature of 2016, the so-called “hottest year evah!” (tied with 1998), we were at that time one degree C above the average for the period from 1850 to the present (figured using a bastardized Hockey Stick Chart).

Since 2016, the temperatures have dropped by about 0.7C, so that would put us at about 0.3C above the 1850 to present average as of today. Not 1.0C but 0.3C.

It’s getting cooler, nyolci. Have you noticed?

Tom Abbott
Reply to  Tom Abbott
January 13, 2021 5:53 pm
Weekly_rise
Reply to  nyolci
January 12, 2021 7:27 am

“One is justifiably suspicious about these sudden 2C drops ‘cos this usually signifies strong volcanic eruptions, confirmed indeed for the 540s. This is above variability, and these events are well identifiable in proxies, nothing special for C&L2012.
More Andyisms:”

Thanks for your comment, I was also struck by this point. The one-year paleorecords seem to be showing year over year variability that is significantly greater than that seen in the instrumental records. I do not know how you can look at those jumps and think, “we can trust that this annual variability holds for the pre-instrumental period.”

Laws of Nature
January 11, 2021 12:45 pm

Any post discussing proxies should start with
McShane and Wyner
https://projecteuclid.org/euclid.aoas/1300715170
“In this paper, we assess the reliability of such reconstructions and their statistical significance against various null models. We find that the proxies do not predict temperature significantly better than random series generated independently of temperature.”

After they posted this in 2010, many groups including Mann and Schmidt commented.
I think it is very valuable to read through this story and most importantly the “rejoinder”, where McShane and Wyner defend their findings against all criticism and particularly “destroy” the comment by Mann, as his work seems to have math errors and coding errors, and his numbers do not support the critique he is raising. All said and done, McShane and Wyner’s work stands, and this unfortunately means all the papers you cite here are quite meaningless!
One of the biggest issues they raise is quite easy to see, really:
The process of proxy selection and screening is not adequately captured in the statistical modeling, and thus the calculated numbers and uncertainties are quite meaningless.
Like exploring the sugar content of apples by harvesting in April.
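The screening problem is easy to reproduce in a few lines. Here is a minimal Python sketch (a caricature with invented numbers, not McShane and Wyner’s actual code): generate a trending “temperature”, then see how many pure-noise pseudo-proxies pass a correlation screen anyway.

```python
import random

random.seed(0)

def corr(xs, ys):
    """Plain Pearson correlation, no libraries needed."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# A stand-in "instrumental temperature": 100 years of trend plus a little noise.
temp = [0.01 * t + random.gauss(0.0, 0.1) for t in range(100)]

def random_walk(n):
    """A pseudo-proxy containing no temperature signal whatsoever."""
    x, out = 0.0, []
    for _ in range(n):
        x += random.gauss(0.0, 1.0)
        out.append(x)
    return out

# Screen 1000 pure-noise proxies against the calibration "temperature".
correlations = [abs(corr(random_walk(100), temp)) for _ in range(1000)]
best = max(correlations)
passing = sum(c > 0.5 for c in correlations)
print(f"best |r| among pure-noise proxies: {best:.2f}")
print(f"noise proxies passing an |r| > 0.5 screen: {passing} of 1000")
```

Many series that are random by construction sail through the screen, so a reconstruction built from the survivors inherits a spurious match to the calibration period.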

Laws of Nature
Reply to  Andy May
January 11, 2021 7:50 pm

Well thank you, Andy!
I almost did not write it, because it seems to have been well known for more than a decade. “Proxy artists” keep publishing their data while seemingly ignoring that paper and its basic and very problematic findings. I cannot understand why every proxy paper after McShane and Wyner is not called out right away. This field of science needs to deal with that paper before it can move on!

Jeff Alberts
Reply to  Laws of Nature
January 11, 2021 5:14 pm

“McShane and Wyner defend their findings against all criticism and particularly “destroy” the comment by Mann as his work seems to have math errors, coding errors and his numbers do not support the critique he is raising.”

Mann’s work contains no errors. He did exactly what he wanted to do.

“The process of proxy selection and screening is not adequately captured in the statistical modeling and thus the calculated number and uncertainties are quite meaningless..
Like exploring the sugar content of apples by harvesting in April..”

Really, McIntyre has covered this over and over and over and over… But activist scientists completely ignore valid criticisms.

Mike
Reply to  Laws of Nature
January 11, 2021 7:24 pm

“We find that the proxies do not predict temperature significantly better than random series generated independently of temperature.”
Absolutely, but when you have proxies telling you 1000 years ago was warmer than today, together with melted-out penguins, alpine paths, stone tools and tree stumps now showing up and carbon-dated to around the same period, that there is science, brother!
What say you nyolci?

Nelson
January 11, 2021 1:12 pm

So I went back to the C&L 2012 paper to look at the proxies used. They give a table of 91 proxies considered. What I find weird is just how many of them start at 1500 and hence are not used. In fact, all but one or two start either at year 1 or at 1500. Also, when you look at figure 3, you have to scratch your head and wonder what the purpose is of trying to combine such a diverse set of proxies into a single proxy. If there is a story to tell, it’s in the differences among proxies through time. I view the paper as a complete waste of time.

Dave Fair
Reply to  Andy May
January 11, 2021 2:39 pm

“Lies, damned lies and statistics.” Mark Twain.

Nelson
Reply to  Andy May
January 11, 2021 3:00 pm

Andy, what exactly are they trying to do? They start with a cross-sectional time-series database that preserves the unique properties of each time series and contains unique site-specific data, and then they mash them together in an ad hoc way to obtain a single time series that has no meaning. Most of the individual proxies look nothing like the final mess. “Variability of what increased?” is my point. Variability of something that has no meaning. Who cares.

Rory Forbes
Reply to  Andy May
January 11, 2021 5:36 pm

Soon and Baliunas have been gaslighted by all and sundry of the hit squad ever since, and deemed beyond the pale of corporate “climate science”.

Mann and the others using regression are polishing a turd, but it’s still a turd.

I’ve noticed that if they are not trying to “polish” them, they’re devising new methods to pick them up by the “clean end”.

Graemethecat
Reply to  Rory Forbes
January 12, 2021 12:59 am

The vicious character assassination meted out to Soon and Baliunas demonstrates conclusively that this business has absolutely nothing to do with actual science.

Tom Abbott
Reply to  Andy May
January 13, 2021 6:07 pm

“The only reason I believe it is in the ballpark, is the match to historical records and the borehole temperature data.”

Hockey Stick temperature data?

Jim Gorman
Reply to  Nelson
January 11, 2021 3:51 pm

Stationarity in time series is a well-known issue, and there are substantial methods to identify non-stationarity and sometimes to make series compatible. But not always.
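As a concrete, hypothetical illustration of the stationarity issue (a toy example, not any particular published test): a random walk is non-stationary, while its first differences are stationary, which is one of the standard ways to make such series tractable.

```python
import random
import statistics

random.seed(1)

# Build a random walk: the classic non-stationary series.
walk = []
total = 0.0
for _ in range(2000):
    total += random.gauss(0.0, 1.0)
    walk.append(total)

# First differences recover the underlying stationary step process.
diffs = [b - a for a, b in zip(walk, walk[1:])]

print(f"variance of the walk itself:   {statistics.pvariance(walk):.1f}")
print(f"variance of first differences: {statistics.pvariance(diffs):.2f}")
# The walk's variance depends on how long you watch it (non-stationary);
# the differenced series hovers near the step variance of 1 (stationary).
```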

Chris Hanley
January 11, 2021 1:18 pm

Not being a scientist, let alone a statistician: if I were to attempt a global temperature reconstruction, I would first decide what was a reliable global temperature proxy, then collect sample measurements at random from the entire class. I suspect nothing meaningful would result.

Mr. Lee
January 11, 2021 1:53 pm

Can anybody show me the graph of a single weather station thermometer that has been around since, say 1850, which would clearly depict a dire situation?

Graemethecat
Reply to  Mr. Lee
January 12, 2021 1:00 am

Short answer: No.

ATheoK
January 11, 2021 4:03 pm

Even for comparison purposes, Mann’s abomination is not a viable reconstruction. Those that imitate or simulate Mann fall into the same/similar traps.

From: https://arxiv.org/ftp/arxiv/papers/1204/1204.5871.pdf
“For this study, it is also of interest that one recent reconstruction (Christiansen and Ljungqvist 2012, CL12) includes a high percentage of east Asian proxies.

Contrasting to the possible orbital effects in high latitudes, there is no clear indication for a biasing effect of east Asian proxies.

However, in interpreting east Asian climate proxies some peculiarities have to be considered as for example the importance of the Tibetan Plateau as a source of elevated atmospheric heating and the relation of the (east) Asian summer monsoon to Pacific decadal variability (e.g. Chang 2000) and tropical Pacific SST-variability (e.g. Wang 2000)”

fred250
Reply to  ATheoK
January 11, 2021 7:39 pm

Oddly, East Asian proxies show strong cooling.

One can only assume the proxies get tortured to comply. The Mannian/marxist way.

comment image

comment image

The ocean to the east of Asia (the Western Pacific Warm Pool) has also cooled significantly during the Holocene

comment image

Last edited 2 months ago by fred250
Jim Gorman
Reply to  fred250
January 12, 2021 10:29 am

I must say, you are on the right track. This is what a friend and I have discovered also. If you try to find offsetting temperature rise in other areas, you simply can’t. This is what the GAT folks simply won’t address although I have asked a couple of them.

Their usual dismissal is that temps are correlated out to 1200 km. When you point out that they are correlated only by season, they reference a study.

Tim Gorman
Reply to  Jim Gorman
January 12, 2021 3:13 pm

The temperatures in St Paul and Kansas City correlate pretty well, especially if you adjust for seasonal variation. But when you do that, you totally lose the difference in climate between the two locations. That’s what happens when you try to depend on anomalies from a local historic base. You lose what the actual climate is!
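Tim’s point can be sketched with two invented station climatologies (the numbers are hypothetical, chosen only so one site runs a constant 4 C warmer than the other): the anomaly series match perfectly while the climate difference disappears.

```python
import math
import statistics

# Hypothetical monthly mean temperatures: identical seasonal cycle,
# but "kansas_city" runs a constant 4 C warmer than "st_paul".
months = range(120)  # ten years of monthly values
st_paul = [7.0 + 15.0 * math.sin(2 * math.pi * m / 12) for m in months]
kansas_city = [11.0 + 15.0 * math.sin(2 * math.pi * m / 12) for m in months]

def anomalies(series):
    """Subtract the station's own long-term mean (a local historic base)."""
    base = statistics.mean(series)
    return [x - base for x in series]

a_sp = anomalies(st_paul)
a_kc = anomalies(kansas_city)

# The anomaly series are indistinguishable (perfectly correlated) ...
print(max(abs(a - b) for a, b in zip(a_sp, a_kc)))
# ... but the 4 C difference in actual climate is gone from both of them.
print(statistics.mean(kansas_city) - statistics.mean(st_paul))
```

The anomalies carry the shared variability and nothing else; any question about which place is actually warmer can no longer be answered from them.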

Clyde Spencer
January 11, 2021 5:16 pm

“He doesn’t have one result, but many, then he compares them to one another.”

“When you have many ‘standards,’ you don’t have a standard.” Spencer (1990)

Pat from kerbob
January 11, 2021 5:18 pm

I’ve seen several of these proxy arguments lately including one on Roy Spencer’s blog

I’m always amazed at the lengths the hockey team will go to defend their stick, like counting angels on a pinhead, when there is evidence of forests under retreating glaciers, and of tree lines far higher up mountains or farther north into the tundra: clear, unambiguous facts that it has been much warmer during the Holocene than it is today.

One, called Entropy man, stated that the proxies conclude that it is 0.4 C warmer now than the Holocene optimum.
Pretty specific.

How do you argue with such people?

Last edited 2 months ago by Pat from kerbob
Mike
Reply to  Pat from kerbob
January 11, 2021 8:10 pm

“How do you argue with such people?”

Show them this…


anicent tree.JPG
Joel O'Bryan
January 11, 2021 5:35 pm

Andy wrote, “These reconstructions cannot be used to compare current warming to the pre-industrial era.”

These reconstructions are based on treemometers, and using tree rings as proxies for temperatures going back hundreds of years is a F#$&ing joke and a fraud on science.

fred250
Reply to  Joel O'Bryan
January 11, 2021 7:27 pm

So many things affect tree growth

Temperature is just one small thing

CO2, water, surrounding trees, wildlife, other local conditions etc etc

To even “pretend” that treeometers have any use at all as a temperature guide is pretty much like believing in a Mills and Boon romance novel.

Which d’nyholist would believe, if it were a Mann and gloom novel.

Graemethecat
Reply to  fred250
January 12, 2021 1:03 am

Your point is so obvious even a child of 10 would understand it, but not Michael Mann nor A-Holist, it seems.

TonyG
Reply to  fred250
January 12, 2021 9:59 am

“So many things affect tree growth”

That’s why I’ve never understood the idea of tree rings as proxies for much of anything. WAY too many variables to say it’s this one specific thing.

January 11, 2021 9:44 pm

Good video:

THE SHIFTING GOALPOSTS OF CANADA’S PUBLIC OFFICIALS
https://youtu.be/NFm3FpEmdUo
By Anthony Furey, Columnist/Oped editor for Sun papers/Postmedia

Chris Hanley
January 11, 2021 11:07 pm

Trends in bristlecone pine tree-ring widths compared to tree-ring widths from ten other sites in the US:
http://www.climatedata.info/proxies/tree-rings/files/stacks_image_9787.png
The website menu also shows tree-ring proxies from the NH and SH showing no apparent trends from 1600 to 2000.

fred250
January 12, 2021 12:18 am

Interesting

https://climateaudit.org/2005/08/28/bristlecone-dc13-the-nail-in-the-coffin/

Seems that tree rings in bristle cone pines (one of mickey mann’s faves)

are a facet of water use efficiency due to rising CO2 🙂

Nothing to do with temperature!

Any attempt to argue that bristlecones are a temperature proxy on scientific grounds (something that has been conspicuously absent from any response by realclimate or their associates) would need to adjust for non-climatic changes in dC13 ratios and water use efficiency.

OOPS. mickey mann goofed big time!!

Last edited 2 months ago by fred250
Greg
January 12, 2021 12:46 am

So, using regression to build a proxy temperature record to “prove” recent instrumental measured warming is anomalous is disingenuous.

This is apples and oranges. It is totally illegitimate science to compare incomparable data or to graft instrumental records onto the end of proxies. This has been the basis of Mike’s Nature Trick and Phil Jones’ even more dishonest version of it, which was distributed worldwide in the WMO year-2000 report.

Last edited 2 months ago by Greg

ThinkingScientist
January 12, 2021 4:53 am

A few points to add:

  1. The “hide the decline” issue was the fact that the most recent (post-1960s) tree ring responses were noted to be negatively correlated with temperature and went against the narrative. This is why they were deleted from the graphs (e.g. the front cover of the WMO report) and modern temps overlaid. It raises the significant problem that the tree ring vs temperature response curve is almost certainly complex/multivalued, and therefore past temperature cannot be reconstructed from tree ring data.
  2. The (updated) reconstruction of Loehle & McCulloch (2008) is interesting because, as Craig Loehle pointed out at ClimateAudit, it (a) only included proxies from peer-reviewed publications and (b) does not include any tree ring proxies.
  3. Proxies based on chemical/physical responses such as isotope ratios seem to me inherently more reliable than biological responses, which have multiple complex inter-related factors; e.g. trees don’t just respond to temperature, they also respond to water stress and rainfall, nutrients, wind, CO2, etc.
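Point 1’s multivalued response curve can be made concrete with a toy model (the inverted-parabola response and all its numbers are invented for illustration, not taken from any calibration study):

```python
# Toy growth-response curve: ring width peaks at some optimum temperature
# and falls off on both sides (an idealized inverted parabola).
def ring_width(temp_c, optimum=12.0):
    return max(0.0, 1.0 - 0.02 * (temp_c - optimum) ** 2)

cool, warm = 8.0, 16.0  # equally far below and above the optimum
print(ring_width(cool), ring_width(warm))

# Both temperatures yield the same ring width, so inverting
# width -> temperature is ambiguous: a calibration fitted on the rising
# limb will read post-optimum (diverging) trees as spurious cooling.
```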
TonyG
Reply to  ThinkingScientist
January 12, 2021 10:00 am

“the most recent (post-1960s) tree ring responses were noted to be negatively correlated with temperature”

Which suggests to me that they’re completely useless for that sort of measurement.

Gerald Machnee
January 12, 2021 6:48 am

***With regard to the area covered, Moberg only has one proxy south of 30°N. Mann uses more proxies, but very few of his Northern Hemisphere proxies are south of 30°N.***

Mann may have used more proxies, but his “chart” is heavily weighted to one Bristlecone Pine series in western USA. Steve McIntyre destroyed Mann’s “science” at ClimateAudit.
When I see anything by Mann I stop reading.

Editor
January 12, 2021 4:30 pm

Mann’s 2008 reconstruction is junk. It’s based on the inverted Tiljander proxies and bristlecone pines, and uses a technique that mines even random red-noise data for hockey sticks. See here for details.

w.

Reply to  Willis Eschenbach
January 12, 2021 4:37 pm

And here’s a correlation analysis of the study … bad scientist, no cookies.

w.
