Note: On Thursday of this week, NOAA/NCDC will attempt to rewrite the surface temperature record yet again, making even more “adjustments” to the data to achieve a desired effect. This story by Mr. Core is worth noting in the context of data spin that we are about to be subjected to – Anthony Watts
Grandma hangs up the phone, beaming. She has just talked with her daughter-in-law, Gabrielle, who had said, “Final report cards are out, and Gavin has straight A’s!” So, Grandma hurries over to see this remarkable report card for herself.
Sitting down at the kitchen table with Gabrielle and Gavin, Grandma opens the report card expectantly — though she has noticed a sheepish look on her grandson’s face. She looks the report card over. And over. And over. Instead of seeing all A’s, she’s seeing three A’s, two B’s, one C, and a D.
Puzzled, and with a sheepish look on her own face now, she hesitatingly asks her daughter-in-law, “Didn’t you tell me… Dear… that Gavin got straight A’s?”
“Yes, I did”, Gabrielle replies.
Noting the look of confusion on her mother-in-law’s face, she continues, “Here. Let me explain.”
“There are three A’s on the report card. You see them” — she points — “here, here, and here. So, we know he gets A’s.”
“Now, this first B, here,” Gabrielle continues. “You must understand that Gavin didn’t like that class. If he had liked that class, he would have put in more effort, and he would have gotten an A. So, that B should really be an A.”
Grandma sits quietly.
“And this other B, here,” Gabrielle says. “You must understand that the teacher just had it in for Gavin. If he had had a different teacher, he would have gotten an A. So, that B, too, should really be an A.”
Grandma sits quietly.
“Now, the C,” her daughter-in-law continues. “You must understand that Gavin didn’t like the class, and the teacher had it in for him, too. If Gavin had liked the class, and if he had had a different teacher, he would have gotten an A. So the C should really be an A.”
Grandma sits quietly.
“Now, the D,” says Gabrielle. “Gavin liked that class, and he had a good teacher, too. But three of his friends got an A in this class; they also got A’s in the very same three classes that Gavin’s report card has A’s in. So, the D should really be an A.”
“And that’s why I told you that Gavin has straight A’s.”
Grandma sits quietly.
Then, Gavin’s sister walks into the room. A sheepish look comes over her face when her mother asks Grandma, “Would you like me to explain how Gavrilla really won at the track meet?”
Grandma leaves quietly.
E. L. Core has a B.S. in Mathematics and Computer Science and is an associate editor at the Catholic Lane website, catholiclane.com. His series “Uncommon Core: Climate Sanity” is forthcoming later this year.

No worries. Grade inflation by teachers is nothing compared to grade inflation by parents. If your child is not doing well, then that means you as a parent are not doing well. So, you inflate your child’s grades to inflate your own ego, which in turn allows you to discount what the teacher said about studying, showing your work, and turning in your homework.
Perhaps, but the parent can do lots and lots and lots to help the kid, and the kid still has to be the one taking the test. If he/she does not do well despite plenty of parental guidance, then what say you? Off subject, I know….
If you want to make “adjustments” to the raw data, please follow these guidelines (a rough sketch of what this might look like in practice follows the list):
1) Maintain all of the raw data, and provide it upon request.
2) Specifically state the reason(s) for and amount(s) of adjustment(s) for every individual data piece adjusted. I’m looking for methodology here.
3) Provide the adjusted data upon request.
4) Require any work based on the adjusted data to clearly state, up front, that “adjusted data was used.”
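A minimal sketch, purely for illustration, of what points (1)–(3) could look like as a data structure. The field names and the example adjustment below are hypothetical, not anyone’s actual schema; the point is that the raw value is never overwritten, every adjustment carries its reason, size, and a methodology reference, and the adjusted value is always derived rather than stored in place of the original.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Adjustment:
    """One documented change to a single raw reading (hypothetical schema)."""
    reason: str       # e.g. "time-of-observation change", "station move"
    delta_c: float    # size of the adjustment in degrees C
    method_ref: str   # citation or document describing the methodology

@dataclass
class Reading:
    """A raw observation plus its full adjustment history."""
    station_id: str
    timestamp: str
    raw_value_c: float
    adjustments: List[Adjustment] = field(default_factory=list)

    def adjusted_value_c(self) -> float:
        # The raw value stays untouched and recoverable on request.
        return self.raw_value_c + sum(a.delta_c for a in self.adjustments)

# Hypothetical example: one reading, one documented adjustment
r = Reading("STATION_0001", "1936-07-15T17:00", 35.0)
r.adjustments.append(Adjustment("time-of-observation change", -0.3, "methodology doc, section 2.1"))
print(r.raw_value_c, r.adjusted_value_c())  # 35.0 34.7
```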
The problem is there is no ‘raw data’. There are many fragments of data, but they cannot simply be merged into a single dataset and called ‘raw data’.
As for (1): the distribution of weather stations is not even, so giving them all the same weighting is clearly wrong. Then there are problems such as weather stations that are moved, changes in the time of day the data is collected, and changes to equipment. The homogenisation of the data is essential if you are to produce any meaningful data.
Congratulations John, your application to Warmista Liars Academy has been accepted.
Nonsense, John, you take a measurement and you publish it.
You can even take a measurement, do some QA (providing all of the rules), add a QA flag, and release it all as raw data.
It’s not hard to do.
This isn’t true either.
What you’re describing isn’t data, it’s product. It’s like going to the organic food store, asking for cheese, and being handed a can of Cheez Whiz. I like Cheez Whiz, but it ain’t the kind of cheese you should get in an organic food store. We want the unprocessed raw data, fragments, warts and all, so we can tell what they did to make their product, and we want to see if we can actually reproduce their product. Until we can reproduce the underlying product, all of this AGW is just blue smoke and mirrors.
“…if you are to produce any meaningful data.” What is “produced” is not “data”, it is manufactured product. The raw data are those individual measurements of temperature together with the descriptors of location and condition of the instrument and time of reading. All operations on each individual datum should be recorded and available for review to assure that “homogenisation” is not just another word for “corruption”.
And still, this process creates a meaningless number with so much uncertainty (which is never reported) that nothing statistically important can be shown to be happening with temperature. Anyone who thinks that temperature change measured in the second decimal place is in any way significant seems to me to be absolutely anti-science.
The one part I object to almost as much as the utter meaninglessness of a single number from a set of thermometers describing the Earth’s temperature at a given time, is using thermometers geographically spread far apart and “homogenizing” them into a virtual temperature somewhere else. Utter BS. Which is clear because these adjustments keep sequentially moving adjusted temperatures further in one direction (sure sign of a bias) with each new adjustment. This phraud is covered well elsewhere and is going to get even more attention in the near future, so I’ll not rehash in this comment.
At least satellite data are almost uniformly spread already and I can almost accept something like a single number.
Bruce
Gridding (what you describe) is important. Homogenization, however, is crap.
The thing about weighting is that it is subjective; it requires human judgement as to how and how much to weight this station over that station, this factor over that factor. If only raw data were provided, that would certainly be “problematic”: it would allow different people to make different judgments or put the readings in different contexts. And if that happens, then the temperature record really becomes a circus. Or, I should say, gets exposed as a circus.
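For readers unfamiliar with the distinction being drawn here: gridding handles the uneven station distribution by binning stations into lat/lon cells, averaging within each cell, and then weighting the cells by area, so a cluster of stations no longer outvotes a lone one. Below is a minimal sketch of that idea; the 5° cells and cosine-of-latitude weighting are illustrative choices, not any particular group’s published method.

```python
import math
from collections import defaultdict

def gridded_mean(readings, cell_deg=5.0):
    """Area-weighted mean of station anomalies.

    readings: iterable of (lat, lon, anomaly) tuples.
    Stations are binned into cell_deg x cell_deg cells, averaged within
    each cell, and the occupied cells are weighted by cos(latitude)
    as a rough proxy for cell area.
    """
    cells = defaultdict(list)
    for lat, lon, anom in readings:
        key = (math.floor(lat / cell_deg), math.floor(lon / cell_deg))
        cells[key].append(anom)

    num = den = 0.0
    for (ilat, _), anoms in cells.items():
        mid_lat = (ilat + 0.5) * cell_deg
        weight = math.cos(math.radians(mid_lat))  # cells shrink toward the poles
        num += weight * (sum(anoms) / len(anoms))
        den += weight
    return num / den if den else float("nan")

# Two nearby stations share one cell, so they count once, not twice:
print(gridded_mean([(51.1, 0.2, 0.4), (51.3, 0.6, 0.6), (-33.9, 151.2, -0.1)]))
```

Note that nothing in this sketch alters any individual reading; that is the sense in which gridding and homogenization are different operations.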
Luckily we do not require people to make judgments as to what a gallon of petrol represents, or a pound of flour.
“Luckily we do not require people to make judgments as to what a gallon of petrol represents, or a pound of flour.”
And you’re confused as to what a station measures?
Infilling and homogenization increase the uncertainty, and few (if any) of the temp series offer a realistic value.
Yeah…I think I get it now. You have to add “meaning” otherwise we would get the wrong idea…!
OK John, I see where you are coming from. But how do you explain the ‘adjustments’ overwhelmingly being upwards? Sorry, sport, but that is just a bit too cute for me. I think I will just refer to the satellite/balloon data. It seems to be so… what is the right word? Robust? Reliable? Unadjusted?
Maybe you are quite correct and it is just not possible to construct (contrive?) a meaningful number from such a raggedy-assed set of instrument data. So where does that leave us? Blind faith with its eyes gouged out?
Actually I agree with you, John. Data has to be processed. Measurements are subject to variables; it’s a fact. And the better we understand those variables, the clearer the image. I’m sorry that you’re experiencing a “pile on” after your comment. It reminds me of a similar comment made by Lindzen during a debate, after his opponent chided him for not trusting the data. The implication here was raw data.
owenvsthegenius
Nobody “piled on”. Some refuted and others ridiculed nonsense.
But you say
Please explain how you think data being “processed” is preferable to citing the error bounds of each datum. And an explanation of how that preference can be equated with the scientific method would also be appreciated.
Thanking you in advance.
Richard
Richard, raw data and the error bounds should accompany adjustments. It ought to be clear what processing took place and why. Adjustments are a moving target. As to the scientific method… we are always attempting to contextualize data. In the case of a LIDAR scan, a 3D modeler has to remove noise from the scan and fill in blanks (snow and rain can cause these); the data has to be processed and layers applied to create a working rig. The LIDAR scan is not as useful on its own. This is not to say “toss out the raw”: adjustments have to be made clear so they can be verifiable and justifiable, otherwise the adjustments are not useful.
All true, all irrelevant
The data is the data. End of story. We don’t know what the data “should” be – if we did, we wouldn’t need the data.
When you homogenise, show the original data and show what you did. Allow people to critique it, try and replicate it, show if you are wrong.
Changing the underlying data because you know what it should say is simply wrong.
Correct, but, for example, the global mean temperature is not data in the first place, since you can’t measure it. You can have raw data from stations or satellites (measured in the three spatial dimensions and one time dimension), from which you construct, in one way or another, the thing you purport to represent the global mean temperature, whatever it actually means.
This question is surprisingly philosophical. I don’t think adjustment is wrong at all, but care should be taken in what you call the end result. Do you call it a measurement, or model? Do you realise the model is vulnerable to your conceptions and is, in fact, an interpretation or even opinion rather than data?
What I’m trying to say is that people are willing to take a number at face value, be it with error bars or not, when the trouble is NOT with the missing error bar, but rather with what the number actually represents. People believe too much in numbers. It is very difficult to understand that the number can be anything; it is the text around it that actually defines its purported meaning. And this I say as a mere M.Sc., from a distance; I guess both scientists and laymen elevate the numbers too far above the legend.
‘Your table is precise, yes, but your legend fails to tell accurately what it really represents.’
owenvsthegenius
Thank you for your explanation of your point, which I requested.
However, as Tony Hammond says, your explanation is “all true but irrelevant”.
I asked
You have not said in what – if any – way data being “processed” is preferable to citing the error bounds of each datum. And your discussion of attempts to “contextualize” data from other fields has no relevance of any kind.
Furthermore, you say
In climastrology – which is the subject we are discussing – the raw data ARE thrown away so according to you the “adjustments” to temperature data “are not useful”.
The bottom line is as Tony Hammond says
Richard
Richard, I agree completely with Tony. I would add that in many cases the raw + error bounds are preferable to adjusted. In fact, the only instance where “tossing out the raw” is acceptable is when the raw cannot be read. This might occur when the data is so robust that it requires computational power + software + engineering that the layman can’t access. To clarify: the data has to be processed to be read. Otherwise raw should always accompany adjusted. And oftentimes the adjusted is little better than an opportunity to display bias. Without verification the adjustments are not useful; furthermore, in the case of “climate science”, with so much at stake, raw data should not be proprietary. We should be careful about giving authority to inferences.
As to contextualizing: I used an example from a (related) field to illustrate an idea. I’m sure you get the point.
Richard, I almost forgot. Error bounds, yes they are a must. Without these we have nothing. Just a thought, how do we arrive at the margins for error? Please relate your answer to the topic
Hugh
I agree with you when you say
but we part company when you say
The number cannot represent anything unless it includes its error bars. Absent the error bars the number could be representing any value between minus infinity and plus infinity: an indication that could be of any value is not an indication of any actual value.
This was the purpose of my original question to owenvsthegenius; viz.
In the absence of error bars any “adjustment” to the value has no real effect because it does not alter the fact that the value has no defined accuracy and precision whether or not it is adjusted.
Richard
owenvsthegenius
It seems we may be converging on some sort of agreement.
You say
All my comments have been related to the topic.
I refer you to the comment from Hugh and my subsequent answer to him.
Also, we cannot provide error estimates because
(a) temperature is an intensive property so cannot have a valid average according to physics
but
(b) average global temperature anomaly is calculated by ‘climate science’ although there is no definition of it (each team that computes it uses its own definition of it and changes that definition almost every month)
and
(c) there cannot be an agreed error estimate for a parameter that has no agreed definition
additionally and importantly
(d) there is no possibility of a calibration reference for average global temperature anomaly however it is defined.
For a fuller assessment of these issues please see this item, especially its Appendix B.
Richard
John, if there is a trend to global temperatures, then it must of necessity appear in the data from all individual stations. Otherwise, the trend cannot possibly be construed as a “global” trend. If regional trends overwhelm a global signal, then it seems reasonable that the global signal is not likely a significant one.

Most of the other changes, such as reading times, should introduce a step change and should have a clear signal that can be removed without trouble. Before “adjusting” for station moves it would be reasonable to determine IF the station moved. There are numerous adjustments for “relocated” stations that never were moved. There are some notorious instances where the “move” was attributed to a change in the rounding of latitude and longitude. In other instances, “moves” or instrumentation changes were assumed and corrected for, even though no such changes had occurred. Rather than rely on modeling assumptions about what kind of “signal” such changes should yield, it would be better to simply DO THE WORK to determine whether the change is legitimate or spurious.

More importantly, when one is corrected about whether a station has moved or experienced instrumentation changes, the “adjustments” need to be revoked. And if changes are going to be made to historical records, preservation of the original records is vital to any attempt to replicate or correct a combined (“merged”) data set.
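On the “clear signal” point: a time-of-observation or equipment change typically shows up as a step in the difference series between a station and a stable neighbour. A toy breakpoint finder – not any agency’s algorithm, just an illustration of why such steps are in principle detectable – might look like this:

```python
def largest_step(diff_series, min_len=12):
    """Find the split point with the largest jump in mean.

    diff_series: list of monthly differences (station minus neighbour).
    Returns (index, jump) where the mean shift between the two segments
    is greatest, requiring at least min_len points on each side.
    """
    best_i, best_jump = None, 0.0
    for i in range(min_len, len(diff_series) - min_len):
        before = sum(diff_series[:i]) / i
        after = sum(diff_series[i:]) / (len(diff_series) - i)
        jump = abs(after - before)
        if jump > best_jump:
            best_i, best_jump = i, jump
    return best_i, best_jump

# A flat difference series with a 0.8-degree step at month 60:
diffs = [0.0] * 60 + [0.8] * 60
print(largest_step(diffs))  # (60, 0.8)
```

The comment’s point stands either way: a detector like this cannot tell a genuine station move from a spurious metadata entry; only checking the station’s actual history can.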
5) Put out a press statement to say that you have done 1 – 4.
6) Make sure the storage computer system crashes so that everything is lost.
Who said I was a cynic?
@Oldseadog:
The computer need not crash. Simply have it operated and housed at the Clinton Data Center… all needed data will be kept and only the “private” data will evaporate…
E.M.Smith
Can I use that method to make salt out of seawater?
Crispin in Waterloo
June 1, 2015 at 5:28 pm
No. It only works to make seawater out of salt.
The problem with (2) is that the adjustments are made in a long, old piece of Fortran code – I suspect without adequate comments or documentation. And it has been modified and added to over the years.
So here’s my question. Every time that there is an adjustment, shouldn’t that increase the uncertainty of the actual measurement? Say that a max temperature of 90 +/- 0.5 degrees was measured, but after adjustments is now 89 degrees. The uncertainty has to be at least +/- 0.75 degrees now, doesn’t it?
If you plotted the adjusted temperatures with adjusted error bars, would these adjustments really change anything?
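Roughly, yes – if the adjustment itself is uncertain and independent of the measurement error, standard practice is to add the two uncertainties in quadrature. Assuming, for illustration only, that the adjustment is good to about ±0.5 degrees, the combined uncertainty comes out near ±0.7 degrees rather than shrinking:

```python
import math

def combined_uncertainty(*sigmas):
    """Root-sum-square of independent 1-sigma uncertainties."""
    return math.sqrt(sum(s * s for s in sigmas))

measurement_sigma = 0.5   # thermometer reading uncertainty, degrees
adjustment_sigma = 0.5    # assumed uncertainty of the 1-degree adjustment itself
print(round(combined_uncertainty(measurement_sigma, adjustment_sigma), 2))  # 0.71
```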
The claim is that the adjustment and homogenization process reduces the measurement errors and increases the measurement precision. You can’t make this stuff up.
Well at least it is good for a laugh and maybe a bit of poetry:
A rose is a rose is a rose, but global temperature is as fleeting as the wind.
“You can’t make this stuff up”. No I couldn’t. My imagination just isn’t strong enough. Climate Change “Scientists” seem to have mastered the technique though.
The C and D were outliers, so should be discarded.
Nah, just homogenized them
Clearly.
They weren’t ‘outliers’, they were ‘outright liars’ and had to be discarded.
I have a model….
Gee whiz. How could they hope to obfuscate if they had to do that?
I would think the adjustments they made to the NCDC data would be to bring the overall result closer to the most accurate system we have for measuring global temperatures, the RSS. They sold us on spending billions of dollars on the satellite-based RSS because it would be so much more accurate than the measurements taken on land by the NCDC and others. Yet now, not only is RSS data less referenced than land-collected (and adjusted) NCDC data, the NCDC data is continually adjusted to INCREASE the divergence between NCDC and the more accurate RSS data. NCDC data is a sales pitch, not scientific data; its primary purpose is to justify the budgets for the NCDC by exaggerating the amount of global warming going on and to increase the sense of urgency for budgets that support climate monitoring and climate studies. If the purpose of the NCDC data were to be as accurate as possible, they would be adjusting it to more closely match the RSS data.
They don’t accept BEST database?
Nah only WORST which is why it needs adjusting and homogenization
BEST’s WORST fear:
Watts’
Ordinary
Response
Stems
Tide
Some of Gavin’s teachers were “deniers” the rest were politically correct.
Thank you, Anthony.
When we married many, many years ago my bride was a size 12. She is still a size 12, and sometimes a size 10, but I can assure you her weight is not the same as when we married. Size adjustment anyone?
Lenbilen, we were not married, ever.
No, just the Universal Law of Marriage at work.
Perhaps a density increase. Same size different weight . . . density change.
John
Same weight, just redistributed.
I always hide mine in another dimension… Hey, maybe that’s where the enviro-climo’s stashed the missing heat! It’s got to be somewhere! [*shock*]
That’s “vanity sizing.” Mail order sellers, and other sellers, realize that clothing that is too large for a customer won’t be as readily returned as clothing that is too small. They want to avoid returns. So vanity sizing gives them a favorable margin of error.
Looks like you have perplexed the resident lefty on the site at long last Anthony. What is Gavrilla, and what on earth is a ‘track meet’? I get the joke about adjustments, but the final line makes no sense in cold windy Wales!
Reply: Gavrilla is the female cousin of Gavin, and a track meet is the US colloquialism for a track and field contest, you know, foot races, hurdles, shot put, pole vault, etc. I’ve recently been working with some instructors in the UK and boy do I understand your occasional confusion. ~ mod.
Gavin and Gavrilla are brother and sister. Once I decided on “Gavin”, I wanted a very similar feminine name, and I settled on “Gavrilla”.
How about Gavina
I like Gavina, which is much better than Gadvilla – which is what her name would quickly be degraded to by other youngsters.
How about Gavalene, rhymes with gasolene.
Brian A, I can’t think of any possible anagram of that proposed name that could be used cruelly.
How three independent groups, plus individuals like Stokes, can come to about the same relative value each month means that there must be an unbelievable coordination of dishonesty. Based on Stokes’ values, which we can see day to day, what do you think it is – that everyone puts their thumbs on the scale once they see his?
This is just a little batty.
BTW, it looks like May is going to come in close to February this year. Get ready for more records.
No, it just takes them all applying approximately the same “best practices” to the raw data – infilling and homogenization based on a normalized, area- and latitude-based temp trend. They just need to read the same papers.
One of the reasons I do neither: I wanted to see what the actual data said.
I hereby nominate Gavin for a Nobel Prize!
(Just calibrating my sarc meter)
I like it! 🙂
The children sitting 5 desks away got A’s so Gavin’s grades were homogenized and adjusted up to A’s as the lower grades were obviously incorrectly recorded.
No, that’s not how it works. Gavin has been copying off Gavrilla’s tests for two-thirds of the semester. Finally, Teacher Grannywings catches Gavin cheating and moves him 5 chairs away, so Gavin starts getting D’s instead of A’s! Principal Dufus notes that Gavrilla and Gavin have historically gotten the same scores, and now they don’t, so he adjusts Gavin’s scores upward without telling Miss Grannywings.
Later, Dufus accepts that the A’s were incorrect in this cherry-picked case, but says that the school’s mean over the period 1850–2000 did not change, because some A’s given in the 1920s had been reinterpreted as D’s.
In real life, Gavin’s grades would be given as anomalies relative to a 30-year baseline somewhere in a past century, though – which you compute by taking data from that period and adjusting it downwards.
Obviously Gavin was sitting on the CO2 enhanced side of the room so it was too hot to concentrate
Well, this whole biased temperature data “Adjustment” business is just another example of Noble Cause Corruption. Data manipulation for a “Good Purpose” can’t be a sin…
Two days ago, even a former Swiss Minister and President, Moritz Leuenberger, admitted that he plainly lied to the public in connection with a CO2 reduction law. Here’s the Swiss newspaper report about that confession:
http://www.tagblatt.ch/ostschweiz/thurgau/kantonthurgau/tz-tg/Die-ganze-Wahrheit-haelt-gar-niemand-aus;art123841,4242625
The crucial quote of Moritz Leuenberger in this article is as follows:
«Der Klimagipfel in Kopenhagen kurz vor der Abstimmung zur Reduktion des CO2-Ausstosses war desaströs», gibt Leuenberger jetzt zu. Doch damals habe er dies absichtlich nicht den Medien gesagt und somit gelogen, damit die Schweizer dafür stimmen würden. Leuenberger: «Jetzt glaube ich, die Lüge ist legitim, wenn sie etwas Gutes bewirkt.»
(English translation: “The climate summit in Copenhagen, shortly before a (Swiss) referendum on reducing CO2 emissions, was disastrous,” Leuenberger now admits. But at the time he deliberately did not tell the media this, and thus lied, so that the Swiss would vote for it (the CO2 reduction law). Leuenberger said further: “Now I believe a lie is legitimate if it achieves something good.”)
Thus we see by this example quite plainly that “Noble Cause Corruption” is very real in Politics today!
The big problem with this kind of behavior is that such “well-meaning zealots” only believe – but don’t actually know – whether their dishonest crusades will really help mankind…
Just think, only 500 years ago, the majority of people believed that burning witches would be a very “Noble Cause” indeed. And today, Mr. Leuenberger and the majority of the misguided public believe that the vital and desert-greening plant-food CO2 is the new diabolical witch that must be hunted down…
Anyone want to bet that they raise the past temps and lower the present temps?
Well. This is only a partially sarcastic comment. Most of this comment is not sarcasm.
/ partial sarcasm on . . .
I think NOAA/NCDC will justify rewriting the surface temperature record via more data ‘adjustments’ by insinuating that the IPCC endorsed GCMs (models) must be right. Thus, they will maintain that it is reasonable to significantly adjust the temperature data up to be more in agreement with the unquestionable models.
. . . partial sarcasm off /
John
“Who controls the past controls the future; who controls the present controls the past.” – George Orwell, Nineteen Eighty-Four.
Could you get a finer demonstration of this in action than ‘adjustments’ of past temperatures?
Of course it could be just ‘lucky chance’ that all adjustments fall in favour of ‘the cause’, but with that type of luck you’d think they’d spend more time at the tables in Las Vegas.
When you heap poor practice onto what is already, in many ways, poor data, you cannot ‘magically’ turn it into good data, no matter how much you ‘believe’.
Maybe today George Orwell would write,
Check out my Orwellian spoof on global warming at http://www.uncommondescent.com/intelligent-design/winston-smith-loves-big-brother-even-more-now-that-he-has-returned-to-the-fold-and-discovered-global-warming/
Mike Brakey shows how NOAA adjusts and adjusts and adjusts and………..
http://notrickszone.com/2015/06/01/bombshell-comprehensive-analysis-reveals-noaa-wrongfully-applying-master-algorithm-to-whitewash-temperature-history/#sthash.HXiWrksf.dpbs
I seem to remember there was a problem: the UK Met Office couldn’t reproduce their “homogenized” data – or maybe it was some other office in the UK.
ELCore,
And then there is the rest of your wonderful story . . .
Grandma then picks up her mobile phone and dials her son.
When her son answers she says, “Your wife is thinking in an odd way since she finished being an expert reviewer for the IPCC’s AR5.”
Her son says, “Mom, her odd way of thinking started before that while she was getting a Masters Degree in Climate Science from Penn State University.”
John
And that, folks, is how the UN, the US EPA, NOAA, NASA and academic atmospheric sciences do their research and work!!!
The lack of a controlled process or methodology for adjustments at NOAA is a tragicomedy.
But the constant adjustments do point out the difficulty of the concept of “Global Temperature”. I still insist it is a low-confidence measurement, because at this time it is an extremely difficult thing to define and measure. There is progress being made, and satellites give us hope of some day arriving at a decisive definition of “Global Temperature”. Currently the whole business is sketchy, but throwing the Tarot cards I predict in about 100 years we’ll get it. My Tarot cards are computer modeled, so you can rest assured of their accuracy.
Crowley deck or Waite?
They know it’s a “low confidence” measurement:
“For the global mean, the most trusted models produce a value of roughly 14°C, i.e. 57.2°F, [for global average temperature 1951-1980] but it may easily be anywhere between 56 and 58°F and regionally, let alone locally, the situation is even worse.”
http://data.giss.nasa.gov/gistemp/abs_temp.html
They just don’t put information like that in their press releases.
In defense of NOAA/NCDC, someone has to be in last place on climate science credibility. An argument could be made that NASA GISS is probably in last place on temperature dataset credibility with NOAA/NCDC only slightly more credible.
John