The IPCC’s attribution methodology is fundamentally flawed

Reposted from Dr. Judith Curry’s Climate Etc.

by Ross McKitrick

One day after the IPCC released AR6, I published a paper in Climate Dynamics showing that their “Optimal Fingerprinting” methodology, on which they have long relied for attributing climate change to greenhouse gases, is seriously flawed and its results are unreliable and largely meaningless. Some of the errors would be obvious to anyone trained in regression analysis, and the fact that they went unnoticed for 20 years despite the method being so heavily used does not reflect well on climatology as an empirical discipline.

My paper is a critique of “Checking for model consistency in optimal fingerprinting” by Myles Allen and Simon Tett, which was published in Climate Dynamics in 1999 and to which I refer as AT99. Their attribution methodology was instantly embraced and promoted by the IPCC in the 2001 Third Assessment Report (coincident with their embrace and promotion of the Mann hockey stick). The IPCC promotion continues today: see AR6 Section 3.2.1. It has been used in dozens and possibly hundreds of studies over the years. Wherever you begin in the Optimal Fingerprinting literature (example), all paths lead back to AT99, often via Allen and Stott (2003). So its errors and deficiencies matter acutely.

The abstract of my paper reads as follows:

“Allen and Tett (1999, herein AT99) introduced a Generalized Least Squares (GLS) regression methodology for decomposing patterns of climate change for attribution purposes and proposed the “Residual Consistency Test” (RCT) to check the GLS specification. Their methodology has been widely used and highly influential ever since, in part because subsequent authors have relied upon their claim that their GLS model satisfies the conditions of the Gauss-Markov (GM) Theorem, thereby yielding unbiased and efficient estimators. But AT99 stated the GM Theorem incorrectly, omitting a critical condition altogether, their GLS method cannot satisfy the GM conditions, and their variance estimator is inconsistent by construction. Additionally, they did not formally state the null hypothesis of the RCT nor identify which of the GM conditions it tests, nor did they prove its distribution and critical values, rendering it uninformative as a specification test. The continuing influence of AT99 two decades later means these issues should be corrected.  I identify 6 conditions needing to be shown for the AT99 method to be valid.”

The Allen and Tett paper had merit as an attempt to make operational some ideas emerging from an engineering (signal processing) paradigm for the purpose of analyzing climate data. The errors they made come from being experts in one thing but not another, and the review process in both climate journals and IPCC reports is notorious for not involving people with relevant statistical expertise (despite the reliance on statistical methods). If someone trained in econometrics had refereed their paper 20 years ago the problems would have immediately been spotted, the methodology would have been heavily modified or abandoned and a lot of papers since then would probably never have been published (or would have, but with different conclusions—I suspect most would have failed to report “attribution”).

Optimal Fingerprinting

AT99 made a number of contributions. They took note of previous proposals for estimating the greenhouse “signal” in observed climate data and showed that they were equivalent to a statistical technique called Generalized Least Squares (GLS). They then argued that, by construction, their GLS model satisfies the Gauss-Markov (GM) conditions, which according to an important theorem in statistics means it yields unbiased and efficient parameter estimates. (“Unbiased” means the expected value of an estimator equals the true value. “Efficient” means all the available sample information is used, so the estimator has the minimum variance possible.) If an estimator satisfies the GM conditions, it is said to be “BLUE”—the Best (minimum variance) Linear Unbiased Estimator; or the best option out of the entire class of estimators that can be expressed as a linear function of the dependent variable. AT99 claimed that their estimator satisfies the GM conditions and therefore is BLUE, a claim repeated and relied upon subsequently by other authors in the field. They also introduced a “Residual Consistency” (RC) test which they said could be used to assess the validity of the fingerprinting regression model.
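For readers who want the mechanics, here is a minimal numerical sketch (entirely my own, on synthetic data; nothing here is AT99's code) of what GLS does and why it is BLUE when the error covariance is known: premultiplying the model by the inverse square root of the error covariance makes the transformed errors homoskedastic, and OLS on the transformed data is then the minimum-variance linear unbiased estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Synthetic regressor; the "signal" coefficient is set to 1.0 for illustration
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([0.0, 1.0])

# Heteroskedastic errors: the variance grows across the sample
sd = np.linspace(0.5, 3.0, n)
y = X @ beta_true + rng.normal(size=n) * sd

# GLS: premultiply by V^(-1/2); here V is diagonal so this is simple reweighting
V_inv_sqrt = np.diag(1.0 / sd)
Xw, yw = V_inv_sqrt @ X, V_inv_sqrt @ y

beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
beta_gls = np.linalg.lstsq(Xw, yw, rcond=None)[0]
print(beta_ols, beta_gls)  # both unbiased in this well-specified case; GLS is more efficient
```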

Unfortunately these claims are untrue. Their method is not a conventional GLS model. It does not, and cannot, satisfy the GM conditions and in particular it violates an important condition for unbiasedness. And rejection or non-rejection of the RC test tells us nothing about whether the results of an optimal fingerprinting regression are valid.

AT99 and the IPCC

AT99 was heavily promoted in the 2001 IPCC Third Assessment Report (TAR Chapter 12, Box 12.1, Section 12.4.3 and Appendix 12.1) and has been referenced in every IPCC Assessment Report since. TAR Appendix 12.1 was headlined “Optimal Detection is Regression” and began

The detection technique that has been used in most “optimal detection” studies performed to date has several equivalent representations (Hegerl and North, 1997; Zwiers, 1999). It has recently been recognised that it can be cast as a multiple regression problem with respect to generalised least squares (Allen and Tett, 1999; see also Hasselmann, 1993, 1997)

The growing level of confidence regarding attribution of climate change to GHGs expressed by the IPCC and others over the past two decades rests principally on the many studies that employ the AT99 method, including the RC test. The methodology is still in wide use, albeit with a couple of minor changes that don’t address the flaws identified in my critique. (Total Least Squares or TLS, for instance, introduces new biases and problems which I analyze elsewhere; and regularization methods to obtain a matrix inverse do not fix the underlying theoretical flaws). There have been a small number of attribution papers using other methods, including ones which the TAR mentioned. “Temporal” or time series analyses have their own flaws which I will address separately (put briefly, regressing I(0) temperatures on I(1) forcings creates obvious problems of interpretation).

The Gauss-Markov (GM) Theorem

As with regression methods generally, everything in this discussion centres on the GM Theorem. There are two GM conditions that a regression model needs to satisfy to be BLUE. The first, called homoskedasticity, is that the error variances must be constant across the sample. The second, called conditional independence, is that the expected values of the error terms must be independent of the explanatory variables. If homoskedasticity fails, least squares coefficients will still be unbiased but their variance estimates will be biased. If conditional independence fails, least squares coefficients and their variances will be biased and inconsistent, and the regression model output is unreliable. (“Inconsistent” means the coefficient distribution does not converge on the right answer even as the sample size goes to infinity.)
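To see what a conditional independence failure does, here is a toy simulation (mine, purely illustrative): an omitted variable correlates the error with the regressor, and the slope estimate settles on the wrong value no matter how large the sample, which is the textbook meaning of "inconsistent."

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000  # large sample on purpose: the bias below does not shrink with n

# An omitted variable z drives both the regressor and the error term,
# so the conditional independence condition E[e|x] = 0 fails
z = rng.normal(size=n)
x = z + rng.normal(size=n)
e = z + rng.normal(size=n)
y = 2.0 * x + e               # true slope is 2.0

X = np.column_stack([np.ones(n), x])
beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
print(beta_hat[1])  # converges to about 2.5, not 2.0: biased and inconsistent
```

Note that no homoskedasticity correction touches this kind of failure; only testing for it, and modeling the omitted influence, does.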

I teach the GM theorem every year in introductory econometrics. (As an aside, that means I am aware of the ways I have oversimplified the presentation, but you can refer to the paper and its sources for the formal version). It comes up near the beginning of an introductory course in regression analysis. It is not an obscure or advanced concept, it is the foundation of regression modeling techniques. Much of econometrics consists of testing for and remedying violations of the GM conditions.

The AT99 Method

(It is not essential to understand this paragraph, but it helps for what follows.) Optimal Fingerprinting works by regressing observed climate data onto simulated analogues from climate models which are constructed to include or omit specific forcings. The regression coefficients thus provide the basis for causal inference regarding the forcing, and estimation of the magnitude of each factor’s influence. Authors prior to AT99 argued that failure of the homoskedasticity condition might thwart signal detection, so they proposed transforming the observations by premultiplying them by a matrix P which is constructed as the matrix root of the inverse of a “climate noise” matrix C, itself computed using the covariances from preindustrial control runs of climate models. But because C is not of full rank its inverse does not exist, so P can instead be computed using a Moore-Penrose pseudo inverse, selecting a rank which in practice is far smaller than the number of observations in the regression model itself.
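Schematically, the weighting step looks like this (a sketch in my own notation, with a toy rank-deficient matrix standing in for a control-run covariance; nothing here is AT99's actual code or data):

```python
import numpy as np

rng = np.random.default_rng(2)
n, k_rank = 50, 12  # n observations; rank retained in the pseudo inverse (<< n, as in practice)

# Toy stand-in for the "climate noise" covariance C, built from a few
# control-run segments so it is rank-deficient by construction
segments = rng.normal(size=(k_rank, n))
C = segments.T @ segments / k_rank       # rank <= k_rank < n, so C^(-1) does not exist

# Keep the leading modes and form the weighting matrix P = C^(+1/2)
vals, vecs = np.linalg.eigh(C)
idx = np.argsort(vals)[::-1][:k_rank]
P = np.diag(vals[idx] ** -0.5) @ vecs[:, idx].T   # shape (k_rank, n)

# The fingerprinting regression is then least squares on the premultiplied data:
# y = observations, x = model-simulated signal pattern (both synthetic here)
x = rng.normal(size=n)
y = 0.8 * x + rng.normal(size=n)
beta = np.linalg.lstsq((P @ x).reshape(-1, 1), P @ y, rcond=None)[0]
print(beta)
```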

The Main Error in AT99

AT99 asserted that the signal detection regression model applying the P matrix weights is homoskedastic by construction, therefore it satisfies the GM conditions, therefore its estimates are unbiased and efficient (BLUE). Even if their model yields homoskedastic errors (which is not guaranteed) their statement is obviously incorrect: they left out the conditional independence assumption. Neither AT99 nor—as far as I have seen—anyone in the climate detection field has ever mentioned the conditional independence assumption nor discussed how to test it nor the consequences should it fail.

And fail it does—routinely in regression modeling; and when it fails the results can be spectacularly wrong, including wrong signs and meaningless magnitudes. But you won’t know that unless you test for specific violations. In the first version of my paper (written in summer 2019) I criticized the AT99 derivation and then ran a suite of AT99-style optimal fingerprinting regressions using 9 different climate models and showed they routinely fail standard conditional independence tests. And when I implemented some standard remedies, the greenhouse gas signal was no longer detectable. I sent that draft to Allen and Tett in late summer 2019 and asked for their comments, which they undertook to provide. But hearing none after several months I submitted it to the Journal of Climate, requesting Allen and Tett be asked to review it. Tett provided a constructive (signed) review, as did two other anonymous reviewers, one of whom was clearly an econometrician (another might have been Allen but it was anonymous so I don’t know). After several rounds the paper was rejected. Although Tett and the econometrician supported publication the other reviewer and the editor did not like my proposed alternative methodology. But none of the reviewers disputed my critique of AT99’s handling of the GM theorem. So I carved that part out and sent it in winter 2021 to Climate Dynamics, which accepted it after 3 rounds of review.
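The post does not spell out which conditional independence tests were used in that draft, but for a flavor of the genre, here is a hand-rolled Ramsey RESET test, one standard specification test from the econometrics toolkit (synthetic data; my illustration, not the paper's code):

```python
import numpy as np
from scipy import stats

def reset_test(y, X):
    """Ramsey RESET: add powers of the fitted values and F-test their joint significance.
    A low p-value signals misspecification, i.e. E[e|X] != 0 for the fitted model."""
    n, k = X.shape
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    yhat = X @ beta
    rss_r = np.sum((y - yhat) ** 2)                  # restricted (original) model

    Xa = np.column_stack([X, yhat ** 2, yhat ** 3])  # augmented model
    beta_a = np.linalg.lstsq(Xa, y, rcond=None)[0]
    rss_u = np.sum((y - Xa @ beta_a) ** 2)

    q = 2                                            # number of added regressors
    F = ((rss_r - rss_u) / q) / (rss_u / (n - k - q))
    return F, stats.f.sf(F, q, n - k - q)

# Deliberately misspecified example: the true relation is quadratic, the fit is linear
rng = np.random.default_rng(3)
x = rng.normal(size=500)
y = x + 0.5 * x ** 2 + rng.normal(size=500)
X = np.column_stack([np.ones(500), x])
print(reset_test(y, X))  # small p-value: the linear specification is rejected
```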

Other Problems

In my paper I list five assumptions which are necessary for the AT99 model to yield BLUE coefficients, not all of which AT99 stated. All five fail by construction. I also list six conditions that need to be proven for the AT99 method to be valid. In the absence of such proofs there is no basis for claiming the results of the AT99 method are unbiased or consistent, and the results of the AT99 method (including use of the RC test) should not be considered reliable as regards the effect of GHGs on the climate.

One point I make is that the assumption that an estimator of C provides a valid estimate of the error covariances means the AT99 method cannot be used to test a null hypothesis that greenhouse gases have no effect on the climate. Why not? Because an elementary principle of hypothesis testing is that the distribution of a test statistic under the assumption that the null hypothesis is true cannot be conditional on the null hypothesis being false. The use of a climate model to generate the homoskedasticity weights requires the researcher to assume the weights are a true representation of climate processes and dynamics. The climate model embeds the assumption that greenhouse gases have a significant climate impact. Or, equivalently, that natural processes alone cannot generate a large class of observed events in the climate, whereas greenhouse gases can. It is therefore not possible to use the climate model-generated weights to construct a test of the assumption that natural processes alone could generate the class of observed events in the climate.

Another less-obvious problem is the assumption that use of the Moore-Penrose pseudo inverse has no implications for claiming the result satisfies the GM conditions. But the reduction of rank of the resulting covariance matrix estimator means it is biased and inconsistent and the GM conditions automatically fail. As I explain in the paper, there is a simple and well-known alternative to using P matrix weights—use of White’s (1980) heteroskedasticity-consistent covariance matrix estimator, which has long been known to yield consistent variance estimates. It was already 20 years old and in use everywhere (other than climatology apparently) by the time of AT99, yet they opted instead for a method that is much harder to use and yields biased and inconsistent results.
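For comparison, White's estimator is a one-line option in standard statistical software. A sketch using statsmodels (my example on synthetic data, not code from the paper):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 500
x = rng.normal(size=n)
y = 1.5 * x + rng.normal(size=n) * (1.0 + np.abs(x))  # heteroskedastic errors

X = sm.add_constant(x)
classical = sm.OLS(y, X).fit()                 # classical standard errors: biased here
white = sm.OLS(y, X).fit(cov_type="HC0")       # White (1980) heteroskedasticity-consistent

print(classical.bse, white.bse)  # same coefficients; only the variance estimates differ
```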

The RC Test

AT99 claimed that a test statistic formed using the signal detection regression residuals and the C matrix from an independent climate model follows a centered chi-squared distribution, and if such a test score is small relative to the 95% chi-squared critical value, the model is validated. More specifically, the null hypothesis is not rejected.
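In outline, the RC statistic is a quadratic form in the regression residuals compared against a chi-squared critical value. A schematic only (my notation and function names; AT99's exact degrees of freedom and normalization are not reproduced here):

```python
import numpy as np
from scipy import stats

def rc_test(resid, C2_inv, df, alpha=0.05):
    """Schematic RC statistic: a quadratic form in the regression residuals using
    the (pseudo) inverse of an independent control-run covariance C2, compared
    to a chi-squared critical value. AT99's exact df and scaling not reproduced."""
    score = float(resid @ C2_inv @ resid)
    crit = stats.chi2.ppf(1.0 - alpha, df)
    return score, crit, score < crit  # True is read as "do not reject"
```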

But what is the null hypothesis? Astonishingly it was never written out mathematically in the paper. All AT99 provided was a vague group of statements about noise patterns, ending with a far-reaching claim that if the test doesn’t reject, “then we have no explicit reason to distrust uncertainty estimates based on our analysis.” As a result, researchers have treated the RC test as encompassing every possible specification error, including ones that have no rational connection to it, erroneously treating non-rejection as comprehensive validation of the signal detection regression model specification.

This is incomprehensible to me. If in 1999 someone had submitted a paper to even a low-rank economics journal proposing a specification test in the way that AT99 did, it would have been annihilated at review. They didn’t state the null hypothesis mathematically or list the assumptions necessary to prove its distribution (even asymptotically, let alone exactly), they provided no analysis of its power against alternatives nor did they state any alternative hypotheses in any form so readers have no idea what rejection or non-rejection implies. Specifically, they established no link between the RC test and the GM conditions. I provide in the paper a simple description of a case in which the AT99 model might be biased and inconsistent by construction, yet the RC test would never reject. And supposing that the RC test does reject, which GM condition therefore fails? Nothing in their paper explains that. It’s the only specification test used in the fingerprinting literature and it is utterly meaningless.

The Review Process

When I submitted my paper to CD I asked that Allen and Tett be given a chance to provide a reply which would be reviewed along with it. As far as I know this did not happen, instead my paper was reviewed in isolation. When I was notified of its acceptance in late July I sent them a copy with an offer to delay publication until they had a chance to prepare a response, if they wished to do so. I did not hear back from either of them so I proceeded to edit and approve the proofs. I then wrote them again, offering to delay further if they wanted to produce a reply. This time Tett wrote back with some supportive comments about my earlier paper and he encouraged me just to go ahead and publish my comment. I hope they will provide a response at some point, but in the meantime my critique has passed peer review and is unchallenged.

Guessing at Potential Objections

1. Yes but look at all the papers over the years that have successfully applied the AT99 method and detected a role for GHGs. Answer: the fact that a flawed methodology is used hundreds of times does not make the methodology reliable; it just means a lot of flawed results have been published. And the failure to spot the problems means that the people working in the signal detection/Optimal Fingerprinting literature aren’t well-trained in GLS methods. People have assumed, falsely, that the AT99 method yields “BLUE” – i.e. unbiased and efficient – estimates. Maybe some of the past results were correct. The problem is that the basis on which people said so is invalid, so no one knows.

2. Yes but people have used other methods that also detect a causal role for greenhouse gases. Answer: I know. But in past IPCC reports they have acknowledged those methods are weaker as regards proving causality, and they rely even more explicitly on the assumption that climate models are perfect. And the methods based on time series analysis have not adequately grappled with the problem of mismatched integration orders between forcings and observed temperatures. I have some new coauthored work on this in process.

3. Yes but this is just theoretical nitpicking, and I haven’t proven the previously-published results are false. Answer: What I have proven is that the basis for confidence in them is non-existent. AT99 correctly highlighted the importance of the GM theorem but messed up its application. In other work (which will appear in due course) I have found that common signal detection results, even in recent data sets, don’t survive remedying the failures of the GM conditions. If anyone thinks my arguments are mere nitpicking and believes the AT99 method is fundamentally sound, I have listed the six conditions needing to be proven to support such a claim. Good luck.

I am aware that AT99 was followed by Allen and Stott (2003) which proposed TLS for handling errors-in-variables. This doesn’t alleviate any of the problems I have raised herein. And in a separate paper I argue that TLS over-corrects, imparting an upward bias as well as causing severe inefficiency. I am presenting a paper at this year’s climate econometrics conference discussing these results.

Implications

The AR6 Summary paragraph A.1 upgrades IPCC confidence in attribution to “Unequivocal” and the press release boasts of “major advances in the science of attribution.” In reality, for the past 20 years, the climatology profession has been oblivious to the errors in AT99, and untroubled by the complete absence of specification testing in the subsequent fingerprinting literature. These problems mean there is no basis for treating past attribution results based on the AT99 method as robust or valid. The conclusions might by chance have been correct, or totally inaccurate; but without correcting the methodology and applying standard tests for failures of the GM conditions it is mere conjecture to say more than that.

Steve Case
August 19, 2021 6:21 am

The AR6 Summary paragraph A.1 upgrades IPCC confidence in attribution to “Unequivocal”

Paragraph A.1.7 from the above link, boiled down, tells us:

The rate of sea level rise was:
1.3 mm/yr between 1901 and 1971 (70 years)
1.9 mm/yr between 1971 and 2006 (35 years)
3.7 mm/yr between 2006 and 2018 (12 years)
Human influence was the main driver of these increases since 1971.

The first two rates for 1901-1971 and 1971-2006 agree with the tide gauge record. The 12-year rate of 3.7 mm/yr is total bullshit.

Besides that, who puts any stock in comparing 70-year, 35-year and 12-year time series with one another? Can you say apples and oranges?

Clyde Spencer
Reply to  Steve Case
August 19, 2021 7:25 am

Waldorf salad with orange juice.

Steve Case
Reply to  Clyde Spencer
August 19, 2021 7:38 am

I suppose I should have posted the source: Permanent Service for Mean Sea Level

ThinkingScientist
Reply to  Steve Case
August 19, 2021 7:37 am

As Willis has demonstrated at WUWT, the sea level acceleration is manufactured. The flaws in the claim can be conveniently summarised as:

  1. Tide gauge and satellite rates in the same period are very different and do not have overlapping error bars. Therefore they cannot be measuring the same thing and it is invalid to merge them. Doing so is a splice and smooth which attributes a data measurement problem to be an acceleration.
  2. The satellite data are from discrete time period low orbit satellites with time overlap. No individual satellite record shows acceleration and the later satellites show different linear rates of sea level rise to the earlier satellites. Offending data in the overlap intervals have been deleted and the different rates smoothed together to create an impression of acceleration from what appears to be most likely a data calibration issue.

I will be writing to my MP with a simple description and some pictures on the above lines.

Steve Case
Reply to  ThinkingScientist
August 19, 2021 8:41 am

“No individual satellite record shows acceleration…”

Yes they do, it just isn’t the obscene 0.097 mm/yr² claimed by C-SLRG. The PSMSL tide gauges for those ~40 stations that have reasonable data back to 1901 show about 0.01 mm/yr².

Clyde Spencer
Reply to  Steve Case
August 19, 2021 5:37 pm

0.01mm/yr²?

What is the uncertainty for the individual stations? Do all of them show the same acceleration? Do they show acceleration for the entire 120 years, or only for recent years as claimed by satellite data analysts?

Steve Case
Reply to  Clyde Spencer
August 19, 2021 8:02 pm

For recent years since 1993 the values for acceleration are all over the map and as far as I’m concerned, it’s a fool’s errand to try to sift out the noise and make sense of it. Longer periods for 64 stations with decent data show:

Acceleration (mm/yr²):
Median: 0.0098
Average: 0.0092
MIN: −0.0228
MAX: 0.0566

Acceleration aside, it’s the rate of sea level rise since 2006 (why did the IPCC choose 2006?) that’s the issue. Simple rate determination via Excel’s slope function is straightforward; for the 1901-1971 and the 1971-2006 time periods, the tide gauges agree with the IPCC claim. However, the IPCC’s claim of 3.7 mm/yr for 2006-2018 is double what the tide gauges say for that period (1.78 mm/yr).

Steve Case
Reply to  Steve Case
August 20, 2021 7:26 am

Should have been rounded off:

Acceleration (mm/yr²):
Median: 0.010
Average: 0.009
MIN: −0.02
MAX: 0.06

Steve Case
Reply to  Steve Case
August 20, 2021 1:39 pm

CORRECTION: you just have to step in it every now and then. A bigger sample of tide gauges (~350) with nearly complete data since 1993 shows a rate of 3.3 mm/yr, not 1.8. Acceleration is still all over the map.

AndyHce
Reply to  Clyde Spencer
August 19, 2021 10:50 pm

Isn’t the accuracy of the most modern tide gauges ±2.5 cm?

Clyde Spencer
Reply to  AndyHce
August 21, 2021 7:27 pm

That sounds about right. However, Kip Hansen is one of the ‘locals’ who has probably spent the most time and effort on this issue.

Van Doren
Reply to  AndyHce
August 22, 2021 8:32 am

In which case, the correct sea level rise value is 0±30mm/y. (Errors are always rounded up to a single digit, if it isn’t 1).

Jim Breeding
Reply to  ThinkingScientist
August 19, 2021 9:01 am

Why write to your MP? They will neither understand nor care what you are telling them.

ThinkingScientist
Reply to  Jim Breeding
August 20, 2021 5:57 am

You cannot get the Climate Change Act 2008 reversed unless you engage with your MP. They need to be constantly told that they have been misled and that science is not on their side.

It takes time and patience.

How do you think AGW activists got politicians to embark on such stupid policies? By saying nothing?

Pat Frank
Reply to  ThinkingScientist
August 19, 2021 9:08 am

Incompetence or dishonesty, which?

Fraizer
Reply to  Pat Frank
August 19, 2021 9:45 am

Embrace the word BOTH

Steve Case
Reply to  Pat Frank
August 19, 2021 9:57 am

They need to get loads of media coverage, and making up dramatic scary scenarios, in this case about sea level, does the trick. Being effective in scaring the bejesus out of the public is the goal, honesty is not.

Thomas P Gannett
Reply to  Steve Case
August 22, 2021 7:01 pm

As H. L. Mencken noted (my paraphrase), it is the sole object of practical politics to keep the public alarmed. Look it up.

TonyG
Reply to  Thomas P Gannett
August 23, 2021 6:48 am

I don’t think even Mencken could have envisioned the level to which they would take it.

Zig Zag Wanderer
Reply to  ThinkingScientist
August 19, 2021 1:46 pm

Doing so is a splice and smooth which attributes a data measurement problem to be an acceleration.

Can you say “hide the decline”? Or in this case “hide the lack of acceleration”.

garboard
Reply to  Steve Case
August 19, 2021 11:38 am

the usual NASA switcheroo from empirical tide gauge measurements to computer-adjusted satellite measurements from 800 miles away, which NASA says are only accurate to THREE CENTIMETERS (one inch according to Josh Willis), and which cannot measure sea level height along shorelines. If you spent billions of dollars on obtaining worthless data, wouldn’t you try to obscure the facts?

Steve Case
Reply to  garboard
August 19, 2021 1:44 pm

(one inch according to Josh Willis)

Here he is on YouTube’s Ask Climate Scientist

hiskorr
Reply to  garboard
August 19, 2021 1:52 pm

But, everyone knows that if you take a whole bunch of measurements accurate to the nearest inch (degree) and average them together, you get numbers accurate to the nearest 0.001 inch (degree). /s

Glen
Reply to  hiskorr
August 20, 2021 12:07 pm

Nick, is that you? /s

Vincent Causey
Reply to  Steve Case
August 20, 2021 12:28 am

I believe that the high figures include isostatic rebound, in which lowering of the ocean basins is taken into account. What they are saying, if I understand correctly, is that although the mean sea level is only rising 2 mm per year, if the ocean floor weren’t sinking as well, the true rate of sea level rise would be 3.9 mm per year. Unfortunately this is never made clear to the public, or even to science reporters in the MSM. Or if they know, they are keeping quiet about it.

Coach Springer
August 19, 2021 6:32 am

So much for “unequivocal”.

Krishna Gans
August 19, 2021 6:45 am

They forgot auroras:

Jupiter’s Hot Temperatures are caused by Auroras (i.e. Solar Activity) — this discovery has MAJOR implications for Earth’s Climate Models

The IPCC didn’t account for this recent bombshell of a discovery — a big rethink of their climate models is in order…
At an average distance of 778 million kilometres (484 million miles) from the Sun, Jupiter should be cold.
Based solely on the amount of sunlight reaching the planet, the upper atmosphere, in theory, should be no warmer than a frigid -73 Celsius (-100F); however, Jupiter, in reality, averages out at a scorching 426C (800F).
This has prompted head scratching for the past five decades.

Global upper-atmospheric heating on Jupiter by the polar aurorae
Jupiter’s upper atmosphere is considerably hotter than expected from the amount of sunlight that it receives. Processes that couple the magnetosphere to the atmosphere give rise to intense auroral emissions and enormous deposition of energy in the magnetic polar regions, so it has been presumed that redistribution of this energy could heat the rest of the planet. Instead, most thermospheric global circulation models demonstrate that auroral energy is trapped at high latitudes by the strong winds on this rapidly rotating planet. Consequently, other possible heat sources have continued to be studied, such as heating by gravity waves and acoustic waves emanating from the lower atmosphere. Each mechanism would imprint a unique signature on the global Jovian temperature gradients, thus revealing the dominant heat source, but a lack of planet-wide, high-resolution data has meant that these gradients have not been determined. Here we report infrared spectroscopy of Jupiter with a spatial resolution of 2 degrees in longitude and latitude, extending from pole to equator. We find that temperatures decrease steadily from the auroral polar regions to the equator. Furthermore, during a period of enhanced activity possibly driven by a solar wind compression, a high-temperature planetary-scale structure was observed that may be propagating from the aurora. These observations indicate that Jupiter’s upper atmosphere is predominantly heated by the redistribution of auroral energy.

Reply to  Krishna Gans
August 19, 2021 7:27 am

Did they ever send weather balloons into auroras?

Tom Abbott
Reply to  Krishna Gans
August 19, 2021 10:09 am

One would think it was doing something similar here at Earth.

Reply to  Tom Abbott
August 19, 2021 10:17 am

Imaginable…..
That’s why I asked for weather balloons in auroras, measuring not only magnetic fields.

Gary Pearse
Reply to  Krishna Gans
August 19, 2021 11:50 am

And there is no heating coming from the planet?

Pamela Matlack-Klein
August 19, 2021 6:59 am

I hope it is not my imagination but there does seem to be a lot more push-back against the IPCC garbage since the release of their latest work of fiction.

Joao Martins
Reply to  Pamela Matlack-Klein
August 19, 2021 7:33 am

There is a limit after which fiction no longer is an art and becomes a nuisance that is even refractory to satire.

Mark Pawelek
Reply to  Pamela Matlack-Klein
August 19, 2021 8:27 am

More people are speaking up for climate realism and pushing back against extremism. This could be due to:
(1) More extremism; such as Extinction Rebellion, BLM, Antifa, US Dems
(2) More media propaganda on climate ‘crisis’ and ’emergency’
(3) Extension of fake science in politics, e.g. to COVID
(4) Zero-Carbon proposals by Tories, US Dems

I think it’s (4)

M Courtney
Reply to  Pamela Matlack-Klein
August 19, 2021 1:45 pm

Nope. You are just in your own social media bubble. The other side feel they are excelling too.
The only difference is that the intensity is mounting up prior to the coming COP.

On the bright side, this is exactly the same as all the other COPs and they always end up with no-one taking AGW seriously.

fretslider
August 19, 2021 7:03 am

Attribution is a legal instrument designed for the courts.

“Today, scientists can say with great accuracy that specific events were caused or made much more likely by the climate crisis, and can attribute specific damages to the human actions involved in changing the climate. Scientists can also estimate how much certain companies, which are very large emitters, have contributed to make such events more likely.”

New climate science could cause wave of litigation against businesses – study | Environment | The Guardian

It isn’t science at all; attribution is the process of giving credit for something, like crediting all the people who were involved in making a film or a record…

Reply to  fretslider
August 19, 2021 7:05 am

In postmodern times it is science – unfortunately 🙁

dennisambler
Reply to  fretslider
August 20, 2021 5:22 am

Myles Allen has been pushing “climate litigation” for some time. In 2003 he told the BBC that
“The vast numbers affected by the effects of climate change, such as flooding, drought and forest fires, mean that potentially people, organisations and even countries could be seeking compensation for the damage caused. It’s not a question we could stand up and survive in a court of law at the moment, but it’s the sort of question we should be working towards scientifically.”

“Some of it might be down to things you’d have trouble suing – like the Sun – so you obviously need to work out how particularly human influence has contributed to the overall change in risk,” the scientist, who has worked with the UN’s Intergovernmental Panel on Climate Change (IPCC), said. “But once you’ve done that, then we as scientists can essentially hand the problem over to the lawyers, for them to assess whether the change in risk is enough for the courts to decide that a settlement could be made.”
 
http://news.bbc.co.uk/1/hi/sci/tech/2910017.stm

He was present at the 2012 meeting at La Jolla, when the Union of Concerned Scientists, led by Peter Frumhoff, constructed a strategy to bring prosecutions against fossil fuel companies in the manner of the tobacco class action. A co-strategist was Naomi Oreskes.

 They produced a “Climate Accountability” report, http://www.climateaccountability.org/pdf/Climate%20Accountability%20Rpt%20Oct12.pdf
 
“Myles Allen, a climate scientist at Oxford University, suggested that while it is laudable to single out the 400 Kivalina villagers, all 7 billion inhabitants of the planet are victims of climate change.”

MarkW
August 19, 2021 7:08 am

This is starting to become a pattern in climate “science”.
Invent novel statistical technique. Exclude anyone who actually knows anything about statistics from the reviewers.

Reminds me of Mann and the hockey stick.

2hotel9
August 19, 2021 7:20 am

Their methodology is fundamentally doing precisely what they want it to do, pushing their leftist political agenda.

Clyde Spencer
August 19, 2021 7:23 am

The conclusions might by chance have been correct, …

The greatest sin in science is to be right for the wrong reason because then it is only luck, and not a demonstration that the phenomenon is understood.

Bruce Cobb
August 19, 2021 7:28 am

What we need is climate fraud fingerprinting.

Joel O’Bryan
Reply to  Bruce Cobb
August 19, 2021 8:54 am

and FBI perp walks into Federal courthouses.

Kevin A
Reply to  Joel O’Bryan
August 19, 2021 10:43 am

First we need the IBI or Independent Bureau of Investigation since it appears the FBI is not ‘fixable’ due to bias.
I’ve always wondered how CO2 became evil; now Ross McKitrick has shown the method used to make it evil, and he has pointed out that it should have been impossible to ignore this, yet it was, and is.

Carlo, Monte
Reply to  Kevin A
August 19, 2021 11:16 am

FBI and US Dept of Justice are both thoroughly corrupt and cannot be trusted.

Charlie
Reply to  Bruce Cobb
August 19, 2021 10:13 am

A strictly for the money from gullible wealthy people fraud.

An environmental scientist who was jailed for his role in Britain’s biggest ever tax fraud has had 10 years added onto his prison sentence after failing to pay back millions of pounds.

Oxbridge graduate fraudster gets another 10 years in prison for not paying back £11m stolen funds | Daily Mail Online

H. D. Hoese
August 19, 2021 7:29 am

“It has recently been recognised that it can be cast as a multiple regression problem with respect to generalised least squares (Allen and Tett, 1999; see also Hasselmann, 1993, 1997)”

From Pianka, EVOLUTIONARY ECOLOGY. 1988. 4th edition, pp. 179-80.
“For scientific understanding to progress rapidly and efficiently, a logical framework of refutable hypotheses, complete with alternatives, is most useful. However, while such a single factor approach may work quite satisfactorily for systems exhibiting simple causality, it has proven to be distressingly ineffective in dealing with ecological problems where multiple causality is at work. Once again, one of the major dilemmas in ecology seems to be finding effective ways to deal with multiple causality.”

I used this as a text and we used to send graduate students to the statistics department. They knew about such things better than we did; ecosystems are even more complex, but they also depend on climate.

Steve Case
Reply to  H. D. Hoese
August 19, 2021 7:43 am

 “…ecosystems are even more complex, but also depend on climate.”

Ecosystems (forests for example) can be managed, climate not so much.

Climate believer
Reply to  Steve Case
August 19, 2021 9:35 am

It blows my mind that people, (especially the young), still believe that a government could possibly “fix” the climate.

Steve Case
Reply to  Climate believer
August 19, 2021 10:01 am

Baseball games, show trials, elections, etc, can be fixed, climate not so much.

Duane
August 19, 2021 7:29 am

This post went far beyond my college education in statistical science … but I am very glad that actual science is being done now. As opposed to assertions based upon baseless claims.

AGW is Not Science
August 19, 2021 7:32 am

The climate model embeds the assumption that greenhouse gases have a significant climate impact. Or, equivalently, that natural processes alone cannot generate a large class of observed events in the climate, whereas greenhouse gases can. It is therefore not possible to use the climate model-generated weights to construct a test of the assumption that natural processes alone could generate the class of observed events in the climate.

And there we have it in a nutshell.

Their assumptions are NOT facts, evidence, data, observations, or science.

Their so-called “science” is nothing but these incorrect assumptions.

AGW is Not Science
Reply to  AGW is Not Science
August 19, 2021 7:36 am

P.S. A better illustration for this post than the bucket full of holes would be a bucket with no bottom with water just pouring straight through.

Tom Abbott
Reply to  AGW is Not Science
August 19, 2021 10:15 am

Unsubstantiated assumptions and assertions are what alarmist climate science is made of. They try to disguise this by using “CONfidence” levels. We are now supposed to take their opinions as facts.

Jordan
Reply to  AGW is Not Science
August 19, 2021 12:08 pm

In other words, it is not possible to use climate models to test whether their own assumptions are valid.
Great work by DrM. We need people like him to cut away at the nonsense being spread around by the amateurs who love to call themselves “scientists”.

ThinkingScientist
August 19, 2021 7:37 am

Outstanding work Ross!

I shall be summarising/simplifying this for my MP.

Ferdberple
August 19, 2021 8:03 am

I had no idea that climate models were the source data for attribution. It truly is worse than we thought.

AT99 cannot be correct, regardless of methodology, because it is physically impossible for the error term to converge. Regardless of the time horizon.

Otherwise, the model runs would be predictions not projections. It is the failure of the error term to converge that is the mathematical basis for this distinction.

Using regression analysis to try to avoid this problem is mathematical nonsense. In effect AT99 is using a mathematical technicality much the same way a lawyer uses a legal loophole to excuse a murderer found holding a smoking gun over a dead body.

The error term doesn’t converge thus your regression analysis cannot converge on the correct answer. Period. Regardless of method.

You can apply all the tests you want that show it will converge, but all you have done is fool yourself. Almost certainly due to naive application of the test.

Jordan
Reply to  Ferdberple
August 19, 2021 12:12 pm

“all you have done is fool yourself” Correct. How often will it be necessary to expose the mistakes of people who will insist on trying to prove a positive.

Jim Gorman
Reply to  Ferdberple
August 19, 2021 4:15 pm

Somehow the academics teaching statistics have totally ignored physical science, measurement uncertainty, and uncertainty in model design. The new standard is how accurately you can calculate the mean value, regardless of the uncertainty of the data being used.

TheLastDemocrat
Reply to  Jim Gorman
August 19, 2021 5:23 pm

Look – this is a serious problem.
A student attempting to complete her dissertation sought me out for review and advice.
She had modeled the cost effectiveness of a new treatment if that treatment was globally adopted over the status quo.

The cost savings she calculated were close to the USA GDP. I said, “I don’t know how your analysis is wrong, but I know it is wrong because it cannot be true that switching to a less costly treatment can save that much money.”

It took her mind a while to accept this.

She did eventually find her modeling problem and fixed it.

DMacKenzie
August 19, 2021 8:15 am

McKitrick spanks the CC fraudsters yet again…

Peta of Newark
August 19, 2021 8:17 am

And if the assumption that CO2 is The Root Cause is wrong?
Where does that put all this lovely number crunching?
What if the supposed Cause & Effect are in fact Effect & Effect from an entirely different cause?

Maybe ‘something else‘ or ‘something different‘ or ‘something overlooked‘ or ‘something wilfully ignored‘ was simultaneously causing all the observed Climate Effects?

You know me – you know what that ‘something‘ is.

Right under your feet.
Just for starters, it’s where we (should) get our Vitamin B12 from – and because we don’t, we go mad (Alzheimer’s) and find ourselves expiring in droves from the likes of Covid. Because the ‘things’ that made the Vitamin are all now Dead & Gone – floating around the sky in CO2 Heaven.

The story of War Of The Worlds springs to mind – ‘somebody’ overlooked ‘something’ there too didn’t they?
and Not A Happy Ending for the overlookers neither…

Meab
Reply to  Peta of Newark
August 19, 2021 9:39 am

Another ignorant post, Peta. Human dietary sources of B-12 are from animal products. That’s why Vegans need to take B-12 supplements. Diets that include meat, shellfish, and salmon are not deficient in B-12. Covid mortality, Alzheimer’s, and the climate all have nothing to do with your INSANE theory about nutrient free dirt. GIVE IT UP.

Ferdberple
August 19, 2021 8:19 am

In effect, the attribution studies are no different than taking all the model runs and drawing an average down the middle. That average is your regression.

But that regression has no special power. It is an average of multiple projections, each with their own error, none of which are truly independent. RMS has no magical power when your error is diverging to infinity.

No matter what you do with infinity, you still have infinity.

Joel O’Bryan
Reply to  Ferdberple
August 19, 2021 9:04 am

ferd,
Attribution studies look at hindcasts not future projections.
Ross wrote, “Optimal Fingerprinting works by regressing observed climate data onto simulated analogues from climate models which are constructed to include or omit specific forcings.”

The observed climate data is the past, and then they try to figure out what the temperature changes would have been in model outputs with and without rising CO2. The differences (residuals) and their statistical properties are what they make their claims on.

Ferdberple
Reply to  Joel O’Bryan
August 19, 2021 10:59 pm

See my post below where I anticipated the hindcast objection.

Jordan
Reply to  Ferdberple
August 19, 2021 12:17 pm

True. At what point did we get Reliable = average(Unreliable), where individual “realisations” are considered to be unreliable (not predictions)?
Further, as somebody recently asked, why does the IPCC need 48 models in CMIP6? In an echo of Einstein, surely it would only take 1 model!

Zig Zag Wanderer
Reply to  Ferdberple
August 19, 2021 1:58 pm

No matter what you do with infinity, you still have infinity.

Try multiplying it by zero…

Jordan
Reply to  Zig Zag Wanderer
August 19, 2021 2:20 pm

Is multiplying by zero the same as dividing by infinity?

Zig Zag Wanderer
Reply to  Jordan
August 19, 2021 4:09 pm

Is multiplying by zero the same as dividing by infinity?

No.

However, multiplying infinity by zero IS the same as dividing zero by infinity. This is because infinity is not actually a number, so both are impossible.

Ferdberple
Reply to  Zig Zag Wanderer
August 19, 2021 11:08 pm

0 * infinity
= (1/infinity) * infinity
= 1

0 * infinity
= (2/infinity) * infinity
= 2

Repeat for 3,4,…n….infinity-1

0 * infinity has infinite different answers. You still have infinity.

Zig Zag Wanderer
Reply to  Ferdberple
August 19, 2021 11:33 pm

But just imagine what you get with infinity * i

Ferdberple
August 19, 2021 8:48 am

What the IPCC calls attribution is actually called sensitivity analysis. They are studying how sensitive the model outputs are to changes in the inputs.

The IPCC and climate science have turned this on its head. The models project 1C of warming due to CO2. And since 1C of warming is observed, the IPCC says 100% of the warming is due to CO2.

But the IPCC fails to mention that the models were hindcast under the assumption that natural variability is low.

What the IPCC attribution studies are actually showing are the underlying assumptions in the climate models. Definitely not how much climate change is due to humans.

The models start by assuming humans are responsible. So no big surprise that a regression analysis of the model outputs shows humans are responsible.

Joel O’Bryan
August 19, 2021 8:50 am

It doesn’t take a PhD in statistics to realize the fundamental problem with model-based attribution studies: they assume in the model (via hand-tuning parameters to get outputs to match expectations) the very thing they are testing for, and then they find it.

Steve Case
Reply to  Joel O’Bryan
August 19, 2021 10:05 am

Same as stuffing the ballot box and on a recount getting the same result.

Rud Istvan
August 19, 2021 9:14 am

Back when I earned my econometrics degree, we were taught that heteroskedasticity can be corrected for in economic time series by normalizing. That is NOT possible in climate attribution studies based on models that are provably wrong. See guest post The Trouble with Climate Models for why.

Reply to  Rud Istvan
August 19, 2021 11:14 pm

Agreed. Correcting for error variance heteroskedasticity is improving the reliability of the fit. It is not improving the reliability of the underlying data.

In effect it is the mathematical equivalent of putting lipstick on a pig.

BCBill
August 19, 2021 9:25 am

Wow! Absolutely wow! I look forward to reading the paper.

Tom Abbott
Reply to  BCBill
August 19, 2021 10:20 am

They’ve been doing it wrong for 20 years!

No doubt, they will hurriedly correct these mistakes.

Or will they?

BCBillis
Reply to  Tom Abbott
August 19, 2021 5:32 pm

Haha. Riiiiiight!

BCBillis
Reply to  BCBill
August 19, 2021 5:31 pm

I have read the paper now and the appropriate comment is still “Wow!”. What a masterful analysis. This should change everything but we have come to expect the “ignore it response” to damning criticism of AGW methodology. It is hard to imagine how this work could be ignored but I am betting that we are about to find out.

commieBob
August 19, 2021 10:20 am

We have these wonderful tools, Matlab being one. The trouble is that you can dump in a dataset and run a zillion different methods on it without particularly knowing what you’re doing. The chances are that something you try will produce results that look significant. The fly in the ointment comes when an actual statistician looks at your work. 🙂

Alan M
Reply to  commieBob
August 20, 2021 6:25 am

commieBob, this reminds me of the early days of computer-based mineral resource estimation. Junior geologists (they could understand the login) would gather a few bits of information and then push the “generate a resource number” button and believe the result. I have even seen resource estimates with high-grade resources above ground level – at least they are easy to mine.

Eric Vieira
August 19, 2021 10:38 am

They could just as well have said “a little bird told me…” and it would not have been less relevant, but really much cheaper…

Jordan
Reply to  Eric Vieira
August 19, 2021 12:24 pm

Cheaper! We see what you did there Eric. 🙂

H.R.
Reply to  Eric Vieira
August 19, 2021 2:28 pm

And cheeper, as well.

J Mac
August 19, 2021 10:48 am

WOW! Excellent work, Dr. McKitrick!
Your paper guts the attribution of climate change to greenhouse gases!

Glen
August 19, 2021 10:57 am

The math was created to produce the result.

MarkW2
August 19, 2021 11:03 am

Does it help at all to use bootstrapping or Monte Carlo simulations? I’ve certainly used bootstrapping very effectively but don’t know how it might help here, if at all.

I know from personal experience that the results from attribution studies have to be considered with a very critical eye. It astonishes me how so-called ‘scientists’ can make the claims they do at the levels of accuracy claimed.

If you are going to use them, OK, but make absolutely sure people understand the likely errors. This is something I NEVER ever see in climate science.

Carlo, Monte
Reply to  MarkW2
August 19, 2021 11:25 am

My amateur understanding is that a Monte Carlo simulation needs a mathematical model that reasonably reproduces reality. Otherwise they are just a waste of many CPU cycles.

Clyde Spencer
Reply to  Carlo, Monte
August 19, 2021 5:49 pm

Logically, there can only be one ‘best’ model output. If you average that with all the models that are wrong, you move away from the correct answer.

Now, what if that one ‘best’ model was just luck?

Gary Pearse
August 19, 2021 11:04 am

Regressing observations onto climate model outputs that have proven to run too hot, it would seem to me the sign of the attribution itself IS a supported null hypothesis. No?

Steve Z
August 19, 2021 1:53 pm

The whole argument about AT99 breaks down because they are basing the statistical analysis on computer (global climate) models, where the influences of certain parameters are selectively taken out, which assumes that the computer models are valid. Since most of the models have grossly over-estimated the real temperature rise, they cannot be considered valid, so any conclusions about “attribution” based on them are flawed.

Of course, the people pushing “attribution theory” want to “attribute” as much climate change as possible to increased CO2, because then they can punish emitters of CO2 with taxes or penalties, which (partially) go into their pockets. If they “attribute” climate change to some aspect of Mother Nature, how do you make money by punishing a natural process?

WXcycles
August 19, 2021 2:56 pm

A very timely refutation too.

tygrus
August 19, 2021 4:46 pm

Sounds like the mentalist/magician trick of: pick a random number and keep it secret, do a series of multiplications, divisions, additions, and subtractions, and the final answer can be guessed by the mentalist/magician. With enough obfuscation you can get the answer you want with a series of procedures, without most people understanding you engineered the answer to match your desired result.

August 19, 2021 11:51 pm

See guest post The Trouble with Climate Models for why
======
Rud, your “Trouble” post brings up an interesting point: loss-of-precision errors with each iteration of the model.

This is what cannot be corrected via Regression analysis.

Think of image compression. There are two types: lossless and lossy. The climate models are a form of lossy compression. They don’t contain a full description of the original climate.

Each time you time slice / iterate the model you are performing a lossy compression. The image becomes blurrier and blurrier.

No correction for the regression fit can correct this. The problem is not simply the error variance. Rather the problem is that the image itself is washing out with time. The result is all error and no data.

Tom
August 20, 2021 2:39 am

I am in no position to question the validity or even evaluate what Mr. McKitrick wrote. I suspect that if this were purely an academic and scientific matter, his inside-baseball argument would carry the day, but it’s not. This is largely a political problem now, and has been for a long time. As long as we continue on this warming trend, however modest, the alarmists are in control, sorry to say. I don’t know how long it will take for the alarmists to lose sway, but I’m pretty sure I won’t live to see it.

Hari Seldon
August 20, 2021 9:14 am

Let me offer some general remarks on the article and the topic from another point of view: before moving to industry I was involved in research in the fields of DSP (Digital Signal Processing) and control theory. I have been a regular reader of WUWT for about one year. Looking at the different charts with high variability in the WUWT articles always reminds me of charts describing control systems. The other issue: some charts (for example from one of the recent articles by Mr. Eschenbach, with the Roman and medieval warming periods and the CO2 level) suggest that there may be some periodic mechanisms controlling the climate system of the earth. And this article discusses an application of signal theory to analyze climate events. So what about trying to apply the well-known methods of control theory and DSP to analyze climate data? Some examples:

  1. Searching for signal components or periodic signals. This is a quite routine job, for example in military applications. The data (samples from the measured signal(s)) will be transformed into another space/metric, and the transformed signals will be analysed. There are a lot of transformations for different purposes, from DFT (discrete Fourier transformation) to the Wigner transformation, etc. There also exist a lot of almost standardised tools for searching for embedded signals. I am almost sure that many results from sonar and radar theory could be adapted and applied to analyse climate data as well.
  2. It would also be interesting to try to use some methods of control theory (after some adaptation) to analyse the variances in the climate data, and to model the climate as a control system.
c1ue
August 20, 2021 9:59 am

This is really interesting. Steve McIntyre routinely skewers statistical modeling errors and outright fraud at climateaudit – it appears the original authors of AT99 simply made a mistake and were never called on it.
The question, of course, is whether this matters to the climate change gravy train.

Roger Tilbury
August 22, 2021 12:55 am

I have no reason to doubt the excellent Ross McKitrick, but I would question why it has taken more than 20 years to spot this egregious error?

William Haas
August 22, 2021 8:27 pm

The AGW conjecture is based on only partial science and is full of holes. For example, the AGW conjecture depends upon the existence of a radiant greenhouse effect caused by trace gases in the Earth’s atmosphere with LWIR absorption bands. The AGW conjecture claims that CO2 in particular is a heat-trapping gas but the AGW conjecture neglects the fact that with gases all good absorbers are also good radiators. If left alone, all LWIR photons that CO2 absorbs are also radiated away for a net energy gain of zero. The so called greenhouse gases do not trap heat. A real greenhouse is not kept warm because of the action of a radiant greenhouse effect. Instead it is a convective greenhouse effect that keeps a greenhouse warm. So too on Earth where instead of glass we have the heat capacity of the atmosphere and gravity that provides a convective greenhouse effect that accounts for all of the observed insulating effects of the Earth’s atmosphere. A radiant greenhouse effect has not been observed in a real greenhouse, in the Earth’s atmosphere, or on any planet in the solar system with a thick atmosphere. The radiant greenhouse effect is nothing but science fiction; hence the AGW conjecture is nothing but science fiction as well.

If CO2 really caused warming then the increase in CO2 over the past 30 years should have caused at least a measurable increase in the dry lapse rate in the troposphere, but that has not happened. H2O is supposed to be the primary so called greenhouse gas, and molecule per molecule H2O is a stronger IR absorber than CO2. So if adding more H2O to the atmosphere caused warming, one would expect that the wet lapse rate would be greater than the dry lapse rate, but the opposite is true. Adding more of the primary greenhouse gas to the atmosphere caused cooling, not warming.
