Lovejoy's 99% 'confidence' vs. measurement uncertainty

By Christopher Monckton of Brenchley

It is time to be angry at the gruesome failure of peer review that allows publication of papers such as the recent effusion of Professor Lovejoy of McGill University, whose gushing, widely-circulated press release – the kind that seems to accompany every mephitically ectoplasmic emanation from the Forces of Darkness these days – billed it thus:

“Statistical analysis rules out natural-warming hypothesis with more than 99 percent certainty.”

One thing anyone who studies any kind of physics knows is that claiming results to 99% confidence – roughly two and a half standard deviations – requires, at minimum, that the data underlying the claim are exceptionally precise and trustworthy and, in particular, that the measurement error is minuscule.

Here is the Lovejoy paper’s proposition:

“Let us … make the hypothesis that anthropogenic forcings are indeed dominant (skeptics may be assured that this hypothesis will be tested and indeed quantified in the following analysis). If this is true, then it is plausible that they do not significantly affect the type or amplitude of the natural variability, so that a simple model may suffice:

ΔTglobe(t) = ΔTanth(t) + ΔTnat(t) + Δε(t)     (1)

ΔTglobe(t) is the measured mean global temperature anomaly, ΔTanth(t) is the deterministic anthropogenic contribution, ΔTnat(t) is the (stochastic) natural variability (including the responses to the natural forcings), and Δε(t) is the measurement error. The last can be estimated from the differences between the various observed global series and their means; it is nearly independent of time scale [Lovejoy et al., 2013a] and sufficiently small (≈ ±0.03 K) that we ignore it.”

Just how likely is it that we can measure global mean surface temperature over time either as an absolute value or as an anomaly to a precision of less than 1/30 Cº? It cannot be done. Yet it was essential to Lovejoy’s fiction that he should pretend it could be done, for otherwise his laughable attempt to claim 99% certainty for yet another me-too, can-I-have-another-grant-please result using speculative modeling would have visibly failed at the first fence.

Some of the tamperings that have depressed temperature anomalies in the 1920s and 1930s to make warming this century seem worse than it really was are a great deal larger than a thirtieth of a Celsius degree.

Fig. 1 shows a notorious instance from New Zealand, courtesy of Bryan Leyland:


Figure 1. Annual New Zealand national mean surface temperature anomalies, 1909-2008, from NIWA, showing a warming rate of 0.3 Cº/century before “adjustment” and 1 Cº/century afterward. This “adjustment” is 23 times the Lovejoy measurement error.

 


Figure 2: Tampering with the U.S. temperature record. The GISS record in its 2008 version (right panel) shows the 1934 anomaly 0.1 Cº lower and the 1998 anomaly 0.3 Cº higher than in the original 1999 version (left panel). This tampering, calculated to increase the apparent warming trend over the 20th century, is more than 13 times the tiny measurement error mentioned by Lovejoy. The startling changes to the dataset between the 1999 and 2008 versions, first noticed by Steven Goddard, are clearly seen if the two panels are repeatedly shown one after the other as a blink comparator.

Fig. 2 shows the effect of tampering with the temperature record at both ends of the 20th century to sex up the warming rate. The practice is surprisingly widespread. There are similar examples from many records in several countries.
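For reference, the multiples quoted in the two captions are simple ratios of each adjustment to Lovejoy’s assumed ±0.03 K measurement error. The short sketch below is purely illustrative arithmetic, not anything taken from the Lovejoy paper or from NIWA/GISS.

```python
# The multiples quoted in the two figure captions are simple ratios of the
# size of each adjustment to Lovejoy's assumed measurement error of 0.03 K.
nz_adjustment = 1.0 - 0.3    # C/century added by the NIWA "adjustment" (Fig. 1)
us_adjustment = 0.1 + 0.3    # C shift between the 1999 and 2008 GISS versions (Fig. 2)
lovejoy_error = 0.03         # K, the error term Lovejoy discards

print(round(nz_adjustment / lovejoy_error))  # ~23
print(round(us_adjustment / lovejoy_error))  # ~13
```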

But what is quantified, because Professor Jones’ HadCRUT4 temperature series explicitly states it, is the magnitude of the combined measurement, coverage, and bias uncertainties in the data.

Measurement uncertainty arises because measurements are taken in different places under various conditions by different methods. Anthony Watts’ exposure of the poor siting of hundreds of U.S. temperature stations showed up how severe the problem is, with thermometers on airport taxiways, in car parks, by air-conditioning vents, close to sewage works, and so on.

(corrected paragraph) His campaign was so successful that the US climate community were shamed into shutting down or repositioning several poorly-sited temperature monitoring stations. Nevertheless, a network of more than a hundred ideally-sited stations with standardized equipment and reporting procedures, the Climate Reference Network, tends to show less warming than the older U.S. Historical Climatology Network.

That record showed – not greatly to skeptics’ surprise – a rate of warming noticeably slower than the shambolic legacy record. The new record was quietly shunted into a siding, seldom to be heard of again. It pointed to an inconvenient truth: some unknown but significant fraction of 20th-century global warming arose from old-fashioned measurement uncertainty.

Coverage uncertainty arises from the fact that temperature stations are not evenly spaced either spatially or temporally. There has been a startling decline in the number of temperature stations reporting to the global network: there were 6000 a couple of decades ago, but now there are closer to 1500.

Bias uncertainty arises from the fact that, as the improved network demonstrated all too painfully, the old network tends to be closer to human habitation than is ideal.
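To see how three separate uncertainty sources become one combined figure: if the components are treated as independent, they add in quadrature, as in the rough sketch below. The component values are placeholders chosen for illustration, not HadCRUT4’s published numbers, and the real HadCRUT4 error model is considerably more elaborate.

```python
# Illustrative combination of independent uncertainty components in
# quadrature. The component values below are placeholders, not HadCRUT4's
# published figures; the real HadCRUT4 error model is far more elaborate.
import math

measurement = 0.03   # K (placeholder)
coverage    = 0.10   # K (placeholder)
bias        = 0.11   # K (placeholder)

combined = math.sqrt(measurement**2 + coverage**2 + bias**2)
print(f"combined uncertainty ~ +/-{combined:.2f} K")   # ~0.15 K with these inputs
```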


Figure 3. The monthly HadCRUT4 global temperature anomalies (dark blue) and least-squares trend (thick bright blue line), with the combined measurement, coverage, and bias uncertainties shown. Positive anomalies are green; negative are red.

Fig. 3 shows the HadCRUT4 anomalies since 1880, with the combined uncertainties also shown. At present, the combined uncertainties are ±0.15 Cº, or almost a sixth of a Celsius degree up or down, over an interval of 0.3 Cº in total. This value, too, is five times the unrealistically tiny measurement error allowed for in Lovejoy’s equation (1).

The effect of the uncertainties is that for 18 years 2 months the HadCRUT4 global-temperature trend falls entirely within the zone of uncertainty (Fig. 4). Accordingly, we cannot tell even with 95% confidence whether any global warming at all has occurred since January 1996.
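A back-of-envelope version of that comparison: fit a least-squares trend to the monthly anomalies and ask whether the total change it implies exceeds the ±0.15 Cº combined uncertainty. In the sketch below the anomaly series is a random stand-in, not the real HadCRUT4 data, and the uncertainty figure is simply taken from the text above.

```python
# Back-of-envelope version of the comparison above: is the total change
# implied by the least-squares trend smaller than the +/-0.15 C combined
# uncertainty? The anomaly series here is a random stand-in, not HadCRUT4.
import numpy as np

rng = np.random.default_rng(7)
months = 218                                  # Jan 1996 - Feb 2014
t = np.arange(months)
anomalies = 0.3 + 0.0003 * t + rng.normal(0, 0.1, months)   # fake data

slope, intercept = np.polyfit(t, anomalies, 1)
total_change = slope * (months - 1)           # warming implied by the trend
combined_uncertainty = 0.15                   # K, as quoted for HadCRUT4 above

print(f"trend change over the period: {total_change:+.3f} K")
print("entirely within the uncertainty band:",
      abs(total_change) < combined_uncertainty)
```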


Figure 4. The HadCRUT4 monthly global mean surface temperature anomalies and trend, January 1996 to February 2014, with the zone of uncertainty (pale blue). Because the trend-line falls entirely within the zone of uncertainty, we cannot be even 95% confident that any global warming occurred over the entire 218-month period.

Now, if you and I know all this, do you suppose the peer reviewers did not know it? The measurement error was crucial to the thesis of the Lovejoy paper, yet the reviewers allowed him to get away with saying it was only 0.03 Cº when the oldest of the global datasets, and the one favored by the IPCC, actually publishes, every month, combined uncertainties that are five times larger.

Let us be blunt. Not least because of those uncertainties, compounded by data tampering all over the world, it is impossible to determine climate sensitivity either to the claimed precision of 0.01 Cº or to 99% confidence from the temperature data.

For this reason alone, the headline conclusion in the fawning press release about the “99% certainty” that climate sensitivity is similar to the IPCC’s estimate is baseless. The order-of-magnitude error about the measurement uncertainties is enough on its own to doom the paper. There is a lot else wrong with it, but that is another story.

268 Comments
Smoking Frog
April 12, 2014 2:52 pm

Nancy C says:
April 12, 2014 at 4:37 am
Correct me if I’m wrong, but if there were only 100 things the world could possibly do, then no matter what it did, there would be, on average, a 99% chance against that happening. If there’s a 99% chance against something happening, then it must have been caused by humans. Since the total number of things the world could possibly do is much much larger than 100, pretty much everything is caused by humans. Right?
Sure, you’re wrong. The 99% chance is only true if all 100 things are equally probable. The rest of what you say is wrong, too. Even if all things “the world could possibly do” could be caused by humans, this wouldn’t have any implication for which ones could be caused by humans or for whether any of them that could be caused by humans actually was caused by humans.

Smoking Frog
April 12, 2014 2:57 pm

CORRECTION
Sure, you’re wrong. The 99% chance is only true if all 100 things are equally probable. The rest of what you say is wrong, too. Even if all things “the world could possibly do” could be caused by humans, this wouldn’t have any implication for which ones could be caused by humans or for whether any of them that could be caused by humans actually was caused by humans.

Delete “for which ones could be caused by humans.” The sentence should be:
Even if all things “the world could possibly do” could be caused by humans, this wouldn’t have any implication for whether any of them that could be caused by humans actually was caused by humans.

pottereaton
April 12, 2014 2:59 pm

William Briggs has a hilarious take on this paper:

Lovejoy Update To show you how low climatological discourse has sunk, in the new paper in Climate Dynamics Shaun Lovejoy (a name which we are now entitled to doubt) wrote out a trivially simple model of global temperature change and after which inserted the parenthetical words “skeptics may be assured that this hypothesis will be tested and indeed quantified in the following analysis”. In published comments he also fixated on the word “deniers.” If there is anybody left who says climate science is no different than politics, raise his hand. Anybody? Anybody?
His model, which is frankly absurd, is to say the change in global temperatures is a straight linear combination of the change in “anthropogenic contributions” to temperature plus the change in “natural variability” of temperature plus the change in “measurement error” of temperature. (Hilariously, he claims measurement error is of the order +/- 0.03 degrees Celsius; yes, three-hundredths of a degree: I despair, I despair.)
His conclusion is to “reject”, at the gosh-oh-gee level of 99.9%, that the change of “anthropogenic contributions” to temperature is 0.
Can you see it? The gross error, I mean. His model assumes the changes in “anthropogenic contributions” to temperature and then he had to supply those changes via the data he used (fossil fuel use was implanted as a proxy for actual temperature change; I weep, I weep). Was there thus any chance of rejecting the data he added as “non-significant”?
Is there any proof that his model is a useful representation of the actual atmosphere? None at all. But, hey, I may be wrong. I therefore challenge Lovejoy to use his model to predict future temperatures. If it’s any good, it will be able to skillfully do so. I’m willing to bet good money it can’t.

http://wmbriggs.com/blog/?p=8061
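A related statistical pitfall – spurious significance when two trending series are regressed on one another – can be illustrated with a toy simulation (a sketch, not taken from Briggs’s post or from the Lovejoy paper): regress one independently generated trending series on another and a naive significance test will “reject” the null far more often than its nominal level suggests.

```python
# Toy illustration of spurious "significance": two independent random walks
# with drift, regressed on one another. Neither causes the other, yet a
# naive t-test on the regression slope "rejects" far too often.
import numpy as np

rng = np.random.default_rng(42)
n, trials, rejections = 135, 1000, 0          # ~1880-2014, annual steps

for _ in range(trials):
    x = np.cumsum(0.01 + rng.normal(0, 0.1, n))
    y = np.cumsum(0.01 + rng.normal(0, 0.1, n))
    X = np.column_stack([np.ones(n), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    se = np.sqrt(resid @ resid / (n - 2) / np.sum((x - x.mean()) ** 2))
    if abs(beta[1] / se) > 2.6:               # roughly the 99% level
        rejections += 1

print(f"'99% significant' in {100 * rejections / trials:.0f}% of trials")
```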

Bob F
April 12, 2014 3:18 pm

So if the historical temperature record is ever adjusted by more than 0.03 K again, doesn’t that invalidate his basic assumption? And indirectly prove that the variation is natural?

April 12, 2014 3:47 pm

Bob F:
You ask at April 12, 2014 at 3:18 pm

So if the historical temperature record is ever adjusted by more than 0.03 K again, doesn’t that invalidate his basic assumption? And indirectly prove that the variation is natural?

The answer to your first question is ‘yes’,
and
the answer to your second question is ‘no’.
Lovejoy’s analysis is meaningless because it assesses something which has no agreed definition, so it can be – and is – often changed. Please see my above post at April 12, 2014 at 3:08 am; this link jumps to it:
http://wattsupwiththat.com/2014/04/11/lovejoys-99-confidence-vs-measurement-uncertainty/#comment-1611518
Richard

garymount
April 12, 2014 3:48 pm

How to do a proper global temperature analysis.
Use historical weather recreation software. This uses computer code similar to that used to predict future weather. Known temperatures, as recorded by instruments or proxies, help guide the progression of the weather recreation through time and space. Advantages are: known physics are used to bound the range of possible weather/climate conditions, which helps to provide an error range in further analysis, for example.
Additionally, urban heat island effect computer code is used to model temperatures surrounding climate monitoring instruments.
This is a tiny excerpt from the sub core project code named Bastardi, part of the much larger group of projects comprised within the code named Wattson Project. Sorry, no further details at this time. I would like skeptics to know though that something wonderful is being developed to help our cause, but much work is still to be done. Fortunately the financing is in place.
ggm

tz2026
April 12, 2014 3:54 pm

Climate Change we can believe in.
Climate Change for the better.
Weren’t the voters in Egypt and Iraq under Saddam 95% certain the incumbents were the right choice?

April 12, 2014 4:38 pm

Please note a correction to the original text: it now says that the effect of Anthony’s campaign of inspecting the sites of U.S. weather stations led to closure of several poorly-sited stations, but that nevertheless the ideally-sited US Climate Reference Network (which in fact predated Anthony’s campaign, and was not a consequence of it as I had incorrectly stated) continues to show warming at a lesser rate than the legacy network.

Jeff Alberts
April 12, 2014 6:01 pm

David L. says:
April 12, 2014 at 2:37 am
There’s an 8 F difference throughout the year between my back yard and a friend’s back yard only 5 miles away.

The straight-line distance between where I work (Mt Vernon, WA) and where I live (Oak Harbor, WA) is about 13 miles. I’ve personally experienced temp differences as high as 27F between the two places only 30 minutes apart. In the summer, on sunny days, the difference is greatest. The average difference in Summer is about 15F. On cloudy days and in Fall and Winter the difference is much less.

April 12, 2014 6:44 pm

John Mason’s Graphic of Greenland Ice Core
Such a cool graphic I saved it on my desktop.
My take on Dr Lovejoy’s paper is different from that of Lord Monckton. The data represent time-series, so spurious correlation must be excluded.
Autocorrelation is a problem with time-series data and must somehow be dealt with. The ARIMA model may be appropriate since it specifically considers autoregressive relationships. Stationarity of the variables is a problem that econometricians have learned to control before coming to conclusions about correlation among variables. Econometricians deal with this problem using cointegration rather than correlation.
In the case of climate, a group of Israelis found that, “global temperature and solar irradiance are stationary in 1st differences whereas greenhouse gases and aerosol forcings are stationary in 2nd differences. We show that although these anthropogenic forcings share a common stochastic trend, this trend is empirically independent of the stochastic trend in temperature and solar irradiance.”
The group asked the question: Does CO2 concentration polynomially cointegrate with global temperature during the period 1880–2007 and thus support the anthropogenic interpretation of global warming during this period?
The Israeli group concluded, “We have shown that anthropogenic forcings do not polynomially cointegrate with global temperature and solar irradiance. Therefore, data for 1880–2007 do not support the anthropogenic interpretation of global warming during this period.”
My opinion, based on the statistical methodology used by Dr Lovejoy, is the same as that which Professor Wegman gave in reference to Dr Michael Mann’s work leading to the so-called “Hockeystick”. Physical scientists should seek the assistance of professional statisticians when they wish to apply statistical models to data. Otherwise they risk the pitfalls of dubious results, not least spurious correlation.
Reference: Beenstock, Reingewertz, and Paldor Polynomial cointegration tests of anthropogenic impact on global warming, Earth Syst. Dynam. Discuss., 3, 561–596, 2012.
URL: http://www.earth-syst-dynam-discuss.net/3/561/2012/esdd-3-561-2012.html
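For readers who want to see what the checks described in this comment look like in practice, here is a minimal sketch using the adfuller (unit-root) and coint (Engle-Granger cointegration) functions from statsmodels. The two input series are random stand-ins, not the Beenstock et al. data, and the thresholds are ordinary textbook defaults.

```python
# Sketch of the unit-root and cointegration checks described above, using
# statsmodels. 'temperature' and 'co2_forcing' are random stand-in series,
# not the Beenstock et al. data.
import numpy as np
from statsmodels.tsa.stattools import adfuller, coint

rng = np.random.default_rng(0)
n = 128                                              # ~1880-2007, annual
temperature = np.cumsum(rng.normal(0.005, 0.1, n))   # behaves like an I(1) series
co2_forcing = np.cumsum(np.cumsum(rng.normal(0.001, 0.01, n)))  # I(2)-like

def order_of_integration(x, max_d=2, alpha=0.05):
    """Difference the series until the ADF test rejects a unit root."""
    for d in range(max_d + 1):
        series = np.diff(x, n=d) if d else x
        if adfuller(series)[1] < alpha:              # [1] is the p-value
            return d
    return max_d + 1                                 # still non-stationary

print("I(d) of temperature :", order_of_integration(temperature))
print("I(d) of CO2 forcing :", order_of_integration(co2_forcing))

# Engle-Granger test: a low p-value would suggest a shared stochastic trend;
# a levels regression on its own proves nothing either way.
t_stat, p_value, _ = coint(temperature, co2_forcing)
print(f"cointegration p-value: {p_value:.3f}")
```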

lee
April 13, 2014 1:02 am

Jan Lindström says:
April 12, 2014 at 7:58 am
Well, after scanning through the Lovejoy paper this came up: “In this paper we have argued that since ≈1880, anthropogenic warming has dominated the natural variability to such an extent that straightforward empirical estimates of the total warming can be made”.
empirical estimates? based on guesstimations?

Non Nomen
April 13, 2014 2:59 am

“Now, if you and I know all this, do you suppose the peer reviewers did not know it?” (Lord Monckton)
Beer review, Pal Review ? Well, Mylord, the villains have struck again.
It is time to let this badly reviewing academic pseudo-aristocracy hear the sound of
le “Ça Ira”:
“Les aristocrates à la lanterne…” (“the aristocrats to the lamp-post”). Just to come closer to some illumination, of course.
It’s time for a revolution in peer-review, I think.

Concerned Citizen
April 13, 2014 4:50 am

Hey guys, I know you’re having a lot of fun all agreeing with one another in here, but a lot of your arguments suffer from pretty fundamental flaws. I’m just going to drop you a line so you have something else to band-disagree on.
As a foreword, and with full disclosure, I am currently finishing an honors physics degree at McGill University. I took an electricity and magnetism class with Lovejoy, and while I found his lecturing style terribly dry, I have nothing but respect for him as a researcher. A colleague and friend of mine is collaborating on a paper with him (in the area of multifractal cloud formation) and I assure you, he happily offers first authorship to anybody who contributed more than he did and frequently rejects/revises figures which don’t meet his scientific standards.
To the accusation that discarding the error term is invalid:
This was discussed in the article proper, and again in one of the referenced papers. Needless to say, it is much better justified than this article/rebuttal makes out. Physicists, of all scientists, take quoting errors VERY seriously – not least non-linear physicists, whose specialty is being able to distinguish stochastic data from error.
To the accusations that Lovejoy can’t have written most of his papers:
The two most published mathematicians are Erdos (who published over 1500) and Euler (whose volume of work exceeds Erdos) – ostensibly the two best respected. You’re going to have to come up with a much better reason than “there are too many of them” to disregard his work. In particular, Erdos collaborated on almost every one of his papers, yet nobody accuses his work of being invalid because of it.
To the accusations of cherry-picked data:
Cherry-picking climate data is much more widespread in the circles of climate change denial. This article/rebuttal, for example, contains figures and conclusions which appear in a vast minority of print, many of which simply exclude errors or the underlying analysis/explanation in a cheapened attempt to make a point.
Quote:
“The plus or minus 0.15C confidence interval of the UN-IPCC accepted HadCRUT4 data set proves Lovejoy’s cherry picked, prox-tology analysis and confidence interval of 0.01C is wrong.”
The recent leveling off of temperatures and the post-war drop were both addressed in the article by Lovejoy and are consistent with his analysis. To those who didn’t understand the technical jargon in the article, his “fat-tailed” probability distributions really do represent the worst-case scenario (these distributions can include cases with infinite variance).
Quote:
““…In that simple statement is the key to science. It does not make any difference how beautiful your guess is, it does not make any difference how smart you are, who made the guess, or what his name is — if it disagrees with experiment, it is wrong.”
Magma, do you disagree with Feynman’s statement on the key to science?”
This type of argument is commonly known as a straw man. Instead of visiting the original debate, you assert without evidence that it is equivalent to an argument you expect to be able to win (in this case, whether Feynman’s statement is correct). The same can be said of the smartass with the Einstein quote.
Quote:
“This seems to be a rehash of that old standby, temperature regressed against CO2 forcing for the period 1880 onwards. Except that Lovejoy does pay attention to the residuals from the regression – key to any statistical analysis. Remarkably however, it is argued (quoting the dreaded proxies) that these residuals represent all possible ‘natural variation’.”
This was addressed in the paper by Lovejoy, where he found the residuals from his CO2 fit were compatible with those from all natural variation. He argued that this was the case because of the tight correlation between economic activity and CO2 output.
There’s more, but that should be plenty for you to chew on.

April 13, 2014 5:14 am

The anonymously anonymous “Non Nomen” will forgive me for being less than enthusiastic about the notion of “les aristocrats a la lanterne!”.

Non Nomen
Reply to  Monckton of Brenchley
April 13, 2014 6:36 am

@Lord Monckton
It was the “academic pseudo-aristocracy” I was referring to **GG**.
I’m pretty satisfied that a genuine Viscount teaches these “Peers” manners.
Your obedient servant
Non Nomen

rogerknights
April 13, 2014 7:16 am

Maybe someone should turn a searching eye on the other 500 papers for which he did the statistical work. “There must be some droll stuff there.”

April 13, 2014 7:56 am

Nice analysis, Anthony. Manipulation of the data sets is fraud, but the real question is how do they get away with it. Also, I think a 500 year measurement period is too short to verify what is happening in a 134 year target zone.

April 13, 2014 8:47 am

Between 1880 and 1945 the temperature of the earth climbed substantially – maybe half the total temperature climb we’ve seen in the whole period 1880-2014 – because of a massive run-up in temperature from 1910-1940. However, every estimate of CO2 I’ve seen shows virtually no significant increase in CO2 concentration in the atmosphere (at most 10%) compared to the 40% increase since 1945. How, then, does Lovejoy explain this rise before 1945?

April 13, 2014 9:03 am

Just two points about the data.
1) I don’t understand why Monckton divided my equation by Δt because the measurement error ε(t) has statistics that are essentially independent of Δt as shown by fig. 1 (bottom curve) of the paper referenced (Lovejoy, S., D. Schertzer, D. Varon, 2013: How scaling fluctuation analyses change our view of the climate and its models (Reply to R. Pielke sr.: Interactive comment on “Do GCM’s predict the climate… or macroweather?” by S. Lovejoy et al.), Earth Syst. Dynam. Discuss.,3, C1–C12, http://www.physics.mcgill.ca/~gang/eprints/eprintLovejoy/neweprint/ESDD.comment.14.1.13.pdf).
In the figure, I simply took 4 globally averaged surface temperature series from 1880 (NOAA, NASA, HadCRUT3, Twentieth Century Reanalysis (20CR)) and computed the standard deviations of their differences as a function of time scale. They all agree with each other to ±0.03 K up to ≈ 100 year scales. Since one of them (the 20CR series) used no station temperatures whatsoever (only station pressure data and monthly Sea Surface Temperatures), any biases from manipulation of temperature station data must be small.
2) The multiproxies all agree well with each other up to about 100-200 year scales (see e.g. fig. 9, 10 of Lovejoy, S., D. Schertzer, 2012: Low frequency weather and the emergence of the Climate. Extreme Events and Natural Hazards: The Complexity Perspective, Eds. A. S. Sharma, A. Bunde, D. Baker, V. P. Dimri, AGU monograph, pp 231-254, doi:10.1029/2011GM001087, http://www.physics.mcgill.ca/~gang/eprints/eprintLovejoy/neweprint/AGU.monograph.2011GM001087-SH-Lovejoy.pdf, see also ch. 11 in Lovejoy, S., D. Schertzer, 2013: The Weather and Climate: emergent laws and multifractal cascades, 496pp, Cambridge U. Press). The multiproxies disagree with each other at longer scales, but this is irrelevant, only the statistics at 125 year scales are important.
Another way of saying this is that the actual (absolute) paleotemperatures are irrelevant, only the 125 year temperature changes are relevant.
You can have your medieval warming (and other warmings if you like, and they can be very big, bigger than the 2013 temperature if you must!), but this is not relevant to the analysis.
-Forces of Darkness
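One reading of point (1) above: average each global series over blocks of a given length, difference the series pairwise, and look at the spread of those differences as the block length grows. The sketch below is an illustrative reconstruction of that procedure (not Professor Lovejoy’s code), run on random stand-in series rather than the NOAA, NASA, HadCRUT3 and 20CR data.

```python
# Illustrative reconstruction of the cross-dataset comparison described in
# point (1): block-average each series over a time scale dt, difference the
# series pairwise, and report the spread of the differences. The four series
# here are random stand-ins, not NOAA/NASA/HadCRUT3/20CR.
import numpy as np
from itertools import combinations

def block_average(x, dt):
    n = (len(x) // dt) * dt
    return x[:n].reshape(-1, dt).mean(axis=1)

def cross_dataset_spread(series, dt):
    """Std. dev. of pairwise differences at averaging scale dt (years)."""
    diffs = [block_average(a, dt) - block_average(b, dt)
             for a, b in combinations(series, 2)]
    return np.std(np.concatenate(diffs))

rng = np.random.default_rng(1)
truth = np.cumsum(rng.normal(0, 0.05, 135))                 # shared "climate"
datasets = [truth + rng.normal(0, 0.03, 135) for _ in range(4)]

for dt in (1, 5, 25, 100):
    print(f"scale {dt:>3} yr: spread = {cross_dataset_spread(datasets, dt):.3f} K")
```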

Reply to  Shaun Lovejoy
April 13, 2014 9:11 am

Professor Lovejoy, welcome! Mr. Monckton will be along soon to respond.

Gail Combs
April 13, 2014 9:48 am

Magma says: April 11, 2014 at 9:16 pm
“A short comparison
S. Lovejoy: Physics PhD; climate, meteorological and statistical expert; 500+ publications
C. Monckton: Classics BA; nil; 1 (not peer reviewed)”
>>>>>>>>>>>>>>>
Thanks for another great reason to push the defunding of ALL science and Universities.

If this is the type of drivel that academia now puts out, we can save ourselves billions of dollars by yanking that funding. Given the huge government debts and the fact that the IMF is now pushing for a 10% global wealth confiscation to put the government debts back to pre-2008 levels, we need to come up with places to cut funding. So again, thank you for pointing out a very obvious area where western governments can cut across the board with little or no actual loss.
The International Monetary Fund Lays The Groundwork For Global Wealth Confiscation

John Whitman
April 13, 2014 9:56 am

Intellectually surrounded Lovejoy is. Mercy in critical applied reasoning show not. (Yoda style grammar is fun)
Intellectually surrounded with no mercy by at least these four:
1) pottereaton (comment above at April 12, 2014 at 2:59 pm) points out William Briggs’ statistical finding of Lovejoy’s inane circular reasoning => http://wmbriggs.com/blog/?p=8061
2) both Jan Lindström (comment above at April 12, 2014 at 7:33 am) and Frederick Colbourne (comment above at April 12, 2014 at 6:44 pm) point out the Beenstock et al. (2012) cointegration analysis, with which Lovejoy’s naïve statistical approach cannot compete => http://www.earth-syst-dynam-discuss.net/3/561/2012/esdd-3-561-2012.html
3) Christopher Monckton’s lead post on the certainty of Lovejoy’s lack of proper accounting for uncertainties.
4) The ‘eyes in the sky’ analyses by UAH and RSS give Lovejoy’s conclusions no love.
Lovejoy surrounded is. Viable intellectual defense Lovejoy has not.
John

Crispin in Waterloo
April 13, 2014 11:22 am

I am forced to conclude that the author knowingly misrepresented what can be concluded from the investigation, and that the reviewers are incompetent. The publication should have it reviewed again and ultimately withdrawn for over-reaching.

April 13, 2014 12:02 pm

“Please note a correction to the original text: it now says that the effect of Anthony’s campaign of inspecting the sites of U.S. weather stations led to closure of several poorly-sited stations, but that nevertheless the ideally-sited US Climate Reference Network (which in fact predated Anthony’s campaign, and was not a consequence of it as I had incorrectly stated) continues to show warming at a lesser rate than the legacy network.”
No it doesn’t.
CRN rates of warming match the rest of the network quite well.
Further, the newest, most accurate satellite data (AIRS) also match the “corrupt” network quite well.

April 13, 2014 12:12 pm

Tony
“Come on Mosh
We need you to turn up and explain again how it is OK to change past temperatures by using an algorithm.”
It’s pretty simple.
1. You can never avoid using algorithms or theories. All thought rests on assumptions.
2. All you have are records. Records require interpretation. They do not speak for themselves.
Example: You have a written record that claims the temperature was -198C.
You check other records and find that other records show -19.8C
You apply an algorithm that assumes the -198C was an error with the decimal shifted.
3. You do the best job you can in QA and document what you do.
basically, no record can be taken at face value. A good skeptic questions everything.
A bad skeptic puts his trust in things that are demonstrably wrong because he likes the answer.
Here is another fact
As we recover more and more records – as we digitize old records from South America, Africa and Canada – we find that the past was colder than Hansen and Jones thought it was.
Go figure: more data, better answers. Keeping an open mind and a skeptical mind, we find fewer wrong answers.
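A minimal sketch of the kind of QA step described in point 2 above might look like the following; the tolerance and the decimal-shift rule are illustrative choices, not BEST’s actual tests.

```python
# Sketch of the QA step described in point 2: flag a reading that is wildly
# inconsistent with its neighbours and test whether a shifted decimal point
# explains it. The tolerance and the decimal rule are illustrative only.
def qa_check(value, neighbour_values, tolerance=5.0):
    """Return (flagged, suggested_value) for one suspect station reading."""
    expected = sum(neighbour_values) / len(neighbour_values)
    if abs(value - expected) <= tolerance:
        return False, value                    # consistent: accept as-is
    if abs(value / 10.0 - expected) <= tolerance:
        return True, value / 10.0              # likely decimal-shift error
    return True, None                          # suspect, no obvious fix

# The example from the comment: -198 C recorded, neighbours near -19.8 C.
print(qa_check(-198.0, [-19.8, -20.1, -18.9]))   # -> (True, -19.8)
```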

April 13, 2014 12:21 pm

“At least Mosher finally admitted BEST alters data. Everyone knew that, but really Mosher, why should anyone take you seriously?”
Huh.
You don’t quite get it.
1. All data is ingested from open sources, unaltered.
2. QA is applied. If a station reports -198C while its neighbor reports -19.8C, we flag the data as suspect.
3. Stations are merged where the metadata and data says they are the same station.
4. We compute the field using methods promoted and championed by skeptics.
What is the field? The field is the BEST PREDICTION of what the temperature was at that location. This is called the expectation.
The raw data of a station will differ from the expectation. But the expectation is the best statistical prediction of the temperature at that location GIVEN the raw data.
If a researcher wants to use the expectation as opposed to the raw data, we provide time series of the stations in two forms: raw form and adjusted to match the regional expectation.
You can use either one depending on your scientific leanings.
Since we used methods that skeptics told us would be the best methods for finding the true temperature, most skeptics should use the expectation. Why? well because that is what the method they demanded produces.
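To picture what a “regional expectation” is, here is a toy version using simple inverse-distance weighting. BEST’s actual method (kriging) is far more sophisticated; this is only a sketch of the idea that a station’s raw reading is compared against a prediction built from its neighbours.

```python
# Toy picture of a "regional expectation": an inverse-distance-weighted
# estimate of the temperature at a target site from nearby stations. BEST's
# actual method (kriging) is far more sophisticated; this is a sketch only.
import numpy as np

def regional_expectation(target_xy, station_xy, station_temps, power=2.0):
    """Inverse-distance-weighted prediction at target_xy."""
    d = np.linalg.norm(np.asarray(station_xy, float) - np.asarray(target_xy, float), axis=1)
    w = 1.0 / np.maximum(d, 1e-6) ** power
    return float(np.sum(w * np.asarray(station_temps)) / np.sum(w))

stations = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
temps = [14.2, 14.6, 13.9]
print(f"expected: {regional_expectation((0.5, 0.5), stations, temps):.2f} C")
# A raw station reading would be compared against this expectation.
```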
