NASA: Global Warming Observations Need a Further 19% UPWARD Adjustment


Guest essay by Eric Worrall

NASA researcher Mark Richardson has completed a study which compares historical observations with climate model output, and has concluded that historical observations have to be adjusted to reconcile them with the climate models.

The JPL Press Release:

A new NASA-led study finds that almost one-fifth of the global warming that has occurred in the past 150 years has been missed by historical records due to quirks in how global temperatures were recorded. The study explains why projections of future climate based solely on historical records estimate lower rates of warming than predictions from climate models.

The study applied the quirks in the historical records to climate model output and then performed the same calculations on both the models and the observations to make the first true apples-to-apples comparison of warming rates. With this modification, the models and observations largely agree on expected near-term global warming. The results were published in the journal Nature Climate Change. Mark Richardson of NASA’s Jet Propulsion Laboratory, Pasadena, California, is the lead author.

The Arctic is warming faster than the rest of Earth, but there are fewer historic temperature readings from there than from lower latitudes because it is so inaccessible. A data set with fewer Arctic temperature measurements naturally shows less warming than a climate model that fully represents the Arctic.

Because it isn’t possible to add more measurements from the past, the researchers instead set up the climate models to mimic the limited coverage in the historical records.

The new study also accounted for two other issues. First, the historical data mix air and water temperatures, whereas model results refer to air temperatures only. This quirk also skews the historical record toward the cool side, because water warms less than air. The final issue is that there was considerably more Arctic sea ice when temperature records began in the 1860s, and early observers recorded air temperatures over nearby land areas for the sea-ice-covered regions. As the ice melted, later observers switched to water temperatures instead. That also pushed down the reported temperature change.

Scientists have known about these quirks for some time, but this is the first study to calculate their impact. “They’re quite small on their own, but they add up in the same direction,” Richardson said. “We were surprised that they added up to such a big effect.”

These quirks hide around 19 percent of global air-temperature warming since the 1860s. That’s enough that calculations generated from historical records alone were cooler than about 90 percent of the results from the climate models that the Intergovernmental Panel on Climate Change (IPCC) uses for its authoritative assessment reports. In the apples-to-apples comparison, the historical temperature calculation was close to the middle of the range of calculations from the IPCC’s suite of models.

Any research that compares modeled and observed long-term temperature records could suffer from the same problems, Richardson said. “Researchers should be clear about how they use temperature records, to make sure that comparisons are fair. It had seemed like real-world data hinted that future global warming would be a bit less than models said. This mostly disappears in a fair comparison.”

NASA uses the vantage point of space to increase our understanding of our home planet, improve lives and safeguard our future. NASA develops new ways to observe and study Earth’s interconnected natural systems with long-term data records. The agency freely shares this unique knowledge and works with institutions around the world to gain new insights into how our planet is changing.

For more information about NASA’s Earth science activities, visit:

http://www.nasa.gov/earth

Read more: http://www.jpl.nasa.gov/news/news.php?feature=6576

The abstract of the study:

Reconciled climate response estimates from climate models and the energy budget of Earth

Climate risks increase with mean global temperature, so knowledge about the amount of future global warming should better inform risk assessments for policymakers. Expected near-term warming is encapsulated by the transient climate response (TCR), formally defined as the warming following 70 years of 1% per year increases in atmospheric CO2 concentration, by which point atmospheric CO2 has doubled. Studies based on Earth’s historical energy budget have typically estimated lower values of TCR than climate models, suggesting that some models could overestimate future warming [2]. However, energy-budget estimates rely on historical temperature records that are geographically incomplete and blend air temperatures over land and sea ice with water temperatures over open oceans. We show that there is no evidence that climate models overestimate TCR when their output is processed in the same way as the HadCRUT4 observation-based temperature record [3,4]. Models suggest that air-temperature warming is 24% greater than observed by HadCRUT4 over 1861–2009 because slower-warming regions are preferentially sampled and water warms less than air [5]. Correcting for these biases and accounting for wider uncertainties in radiative forcing based on recent evidence, we infer an observation-based best estimate for TCR of 1.66 °C, with a 5–95% range of 1.0–3.3 °C, consistent with the climate models considered in the IPCC 5th Assessment Report.

Read more (paywalled): http://www.nature.com/nclimate/journal/vaop/ncurrent/full/nclimate3066.html
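As an aside, the “70 years of 1% per year” in the TCR definition is just compound growth reaching a doubling, which a one-liner confirms (plain arithmetic, nothing taken from the paper itself):

```python
# CO2 rising 1% per year compounds to c0 * 1.01**n after n years.
# The TCR definition uses 70 years because that is roughly when the
# concentration doubles.
ratio = 1.01 ** 70
print(round(ratio, 3))  # → 2.007, i.e. CO2 has (just over) doubled
```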

Frankly I don’t know why the NASA team persist with trying to justify their increasingly ridiculous adjustments to real world observations – they seem to be receiving all the information they think they need from their computer models.

318 Comments
David S
July 24, 2016 10:19 am

If the models don’t agree with reality then reality must be wrong. So the solution is to ignore this reality monkey business.
(I think the absurdity of this comment is sufficiently apparent that I can skip the /sarc tag.)

July 24, 2016 10:26 am

So it is an “apples to apples” comparison because it is comparing computer program output to computer program output.

Hans
Reply to  nhill
July 24, 2016 1:05 pm

Bingo
Eliminate all data entirely
Stroke of genius
Why didn’t anyone think of this before?

DDP
July 24, 2016 10:50 am

“The final issue is that there was considerably more Arctic sea ice when temperature records began in the 1860s”
Considerably more? Are they now re-writing history to include aerial observation from aircraft and satellites in the 19th century? Because there is absolutely no way of knowing there was ‘considerably more’ ice in a region that loses most of its mass every summer.
“The Arctic is warming faster than the rest of Earth, but there are fewer historic temperature readings from there than from lower latitudes because it is so inaccessible.”
So if it is so inaccessible, how do you know there was ‘considerably more’ ice coverage? Go home, you’re drunk. Again.

Sleepalot
Reply to  DDP
July 25, 2016 3:44 am

There were sailing ships roaming the NW Passage in the 1850s.

July 24, 2016 10:54 am

It’s a matter of the laws of physics.
Whenever the earth’s temperature rises, the data expands; thus warming appears to be artificially enhanced, which of course it isn’t.
Whenever the earth’s temperature falls, the temperature data, like everything else, shrinks (except the Arctic ice, of course), hence cooling has to be reduced appropriately.
I don’t understand what the fuss is all about.

Steve Fraser
Reply to  vukcevic
July 24, 2016 1:35 pm

I love that!

Greg Woods
July 24, 2016 10:57 am

This is just another step towards legislating science, much like common law was changed into statutory law….

BLAMMO
July 24, 2016 10:57 am

The obvious problem with the scientific method is empirical observation. If we can only compensate for that, we’ll finally get it right.

Clyde Spencer
July 24, 2016 10:57 am

Another way to approach this problem is to acknowledge that there isn’t enough high-quality historical data for the Arctic to say anything reliable about the long-term trend of Arctic temperatures. Thus, the model outputs should be reported only for the mid-latitude Northern Hemisphere, and the model history and predictions compared with the high-quality temperature records for the same area. If the models don’t agree with the high-quality data, then the models need to be adjusted until they do agree.
Another thing that should be examined is why there is such a difference between the outputs of the various models. Whichever model(s) appear to be providing the best agreement with the temperature records should be used as an example of how to correct the other models.
It seems to me that the optimal situation would be where a single run of a single model would provide us with a trustworthy prediction of the future. Ensembles of multiple runs would suggest to me that the standard error varies directly with the number of runs. That is, the more runs necessary to make a prediction, the less reliable that prediction is! Logically, there can only be one best prediction from among a host of predictions. Averaging all runs together simply reduces the accuracy of the whole prediction and reduces the reliability of the prediction for any particular point in time.
I agree with the comments that many of the current practicing climate scientists don’t appear to understand the Scientific Method.

Steve Fraser
July 24, 2016 10:58 am

Hmmm. Prior temp records were too low? Does that mean NOAA, NASA and HadCRU have been incompetent in their work? I am shocked!

John Coleman
July 24, 2016 11:07 am

Even scientists who work with no bias or agenda have little chance of producing accurate global temperature data for any year or even decades prior to 1950. Remember our only thermometers pre-1950 were tubes of mercury. Remember that two-thirds of the Earth is ocean and we had almost no air temperatures over the water. Remember Arctic regions were essentially unmeasured. Remember much of the land mass of Earth was primitively populated and thermometers were few and far between. Remember that even many National Weather Bureau readings in the United States and equivalent readings from other civilized nations were sometimes made on the top of multi-story buildings in the downtown districts of cities. Remember there were few weather reporting stations in rural areas. Remember there were no or few readings from high mountain elevations.
It is silly to assume that a scientist today can construct a way to accurately determine temperatures in that scientifically primitive world. Tree rings, ice cores, carbon dating, adjustments and assumptions can, at best, be accurate within a few degrees. Comparing such data with today’s data and then reporting on differences of a couple of degrees or even tenths of a degree is sheer folly, in my opinion.
And those of us who have looked at the current data sets in detail know they have been structured by scientists with bias and agendas. Today the data sets continue to lack fair and balanced data from cities and rural areas, from coastal and inland, from mid latitudes and high latitudes, from low elevations and high elevations. Even the satellite data is challenged by refinements in satellites and by combining old and new satellite data. But, as of now, the satellite data is far and away the best we have.
Scientists who tell us they have worked through all of these problems to come up with valid data that extends back more than sixty years leave me very skeptical. I would be skeptical even if they were proving that no AGW was occurring. lol

Carla
Reply to  John Coleman
July 24, 2016 12:08 pm

John Coleman July 24, 2016 at 11:07 am
——————————————————-
ditto very well said

gnomish
Reply to  John Coleman
July 24, 2016 7:11 pm

and remember that a global temperature average is as meaningful as a global average telephone number.
nature.com has been offline since this article was posted, btw…

Clyde Spencer
Reply to  gnomish
July 24, 2016 8:02 pm

gnomish,
Your claim makes for an interesting sound bite, but telephone numbers are not a measurement of a physical property. Your statement is really a non sequitur.

gnomish
Reply to  gnomish
July 25, 2016 11:39 am

it’s an interesting sound bite for a reason, yo
taking the temperature in death valley, usa and adding it to the temperature in vostok, antarctica and then dividing by 2 is not a real property of anything.
see how that works? you can make meaningless pie with arithmetic and dazzle your numerologist friends.
the average human being has one ovary and one testicle. bite that soundly.

Richard M
July 24, 2016 11:18 am

This could save a lot of money. No need for any more surface stations or satellites. Just run the models and they will provide the data. I wonder what the people who collect that data would say?

Robert Westfall
July 24, 2016 11:18 am

This is not the first time that NASA has submitted to political pressure. Both the Apollo 1 fire and the Challenger disaster were the result of political expedience. In both cases the engineers’ objections were overruled and lives were lost.

Gary Hladik
July 24, 2016 11:25 am

Don’t the IPCC’s climate models fail at a regional level at least as badly as they do globally? If they don’t work when Arctic data are not an issue, then how can Richardson have any confidence they work when the Arctic is included?

Reply to  Gary Hladik
July 24, 2016 12:58 pm

Because funding.

Steven Hales
July 24, 2016 11:28 am

The errors of observation were clustered around time of observation and known instrument errors. If they are mainly random then their effects would be randomly distributed and would be largely eliminated by calculating anomalies. If the assumption was they were non-random then the search for a trend in errors could be spurious and any correction would inject a false trend.

RBom
July 24, 2016 11:28 am

Dogmatocene.
The era of Government mandated (funded) Religious Beliefs.
No ha

Bruce Cobb
July 24, 2016 11:31 am

Warmist law: “Things are always worse than we thought, and we are always surprised that they are.”

Eugene WR Gallun
Reply to  Bruce Cobb
July 24, 2016 12:54 pm

Bruce Cobb — and we are always PLEASED that they are — Eugene WR Gallun

JohnKnight
Reply to  Eugene WR Gallun
July 25, 2016 8:04 pm

Consensus = pleasantly surprised . . and Siants slithers on . .

Tom Anderson
July 24, 2016 11:50 am

The Democrats will pound global warming from now to the election. There could easily be a disaster a day till November.
This is only the beginning.

Pat Kelly
July 24, 2016 11:55 am

I can’t think of a better way to verify your modeling than to use it to demonstrate how past direct observations are flawed – GENIUS!

Svend Ferdinandsen
July 24, 2016 11:59 am

How could low coverage give less warming? They have in modern times reduced the number of thermometers, so wouldn’t that also have an influence?
“it is so inaccessible. A data set with fewer Arctic temperature measurements naturally shows less warming than a climate model that fully represents the Arctic.”
By using anomalies they can extend a single thermometer out to 1000 km, and by the way the average takes care of the density of measurements.

TonyL
July 24, 2016 12:12 pm

The story of arctic exploration is a fascinating one. Many of the scientific expeditions were well equipped to make accurate observations. It would be foolish to discount those observations because “models”.
One topic which recurred a couple of times is this: Whaling ships would report vast reductions in sea ice, with shrinking glaciers, and huge tracts of open ocean which previously had been impassable sea ice. These reports would reignite speculation that the Arctic would soon become more habitable. The reports would also cause another round of expeditions to find the fabled “Northwest Passage”. (Remember, the Panama Canal did not exist. A shortcut between the Atlantic and Pacific would have been hugely important.)
The stories of the Franklin expedition and the Resolute expedition sent to find them give great insight into the tremendous changes the arctic can undergo. Today, we have the story of HMS Resolute and the Resolute desks.
Care to guess who sits at a Resolute desk today? (Google is your friend, sort of)
But we can not consider any of this because “models”.

July 24, 2016 12:22 pm

Data being adjusted to match models.
This is the death of climate science.
Like a cancer this death will spread to other scientific fields.
Unless it gets cut out.

Reply to  ptolemy2
July 25, 2016 5:34 pm

Certainly needs to be cut out – that’s why I call CAGWarmists GangGreen.

ShrNfr
July 24, 2016 12:35 pm

“I don’t know what you mean by ‘global warming,’ ” Alice said.
Humpty Dumpty smiled contemptuously. “Of course you don’t—till I tell you. I meant ‘there’s a nice model for you!’ ”
“But ‘global warming’ doesn’t mean ‘a model’,” Alice objected.
“When I use science,” Humpty Dumpty said, in rather a scornful tone, “it means just what I choose it to mean—neither more nor less.”
“The question is,” said Alice, “whether you can make data mean so many different things.”
“The question is,” said Humpty Dumpty, “which is to be master—that’s all.”
Alice was too much puzzled to say anything, so after a minute Humpty Dumpty began again. “They’ve a temper, some of them—particularly ocean measurements, they’re the proudest—Stevenson screens you can do anything with, but not ocean temperatures—however, I can manage the whole lot! Impenetrability! That’s what I say!”

svbeachhouse
July 24, 2016 12:42 pm

This is Obama’s “Ministry of Truth” in full high gear!
Remember what Orwell said, “Those who control the present, control the past. Those who control the past….control the future.”
It’s in our face, but we seemingly can do nothing about it but keep pointing it out.
“Newspeak” is here.

Tom in Florida
Reply to  svbeachhouse
July 24, 2016 5:31 pm

Perhaps it should be changed to “Those who control the models, control the past.”

July 24, 2016 12:42 pm

Fundamental misunderstanding of the paper.
It’s not about adjusting the data.
It’s about comparing APPLES and APPLES.
Issue number 1.
Suppose Bob Tisdale wants to compare the DATA about SST with the model projections?
Does he
A) compare the data for SST with the model outputs for the temperature of AIR over the ocean?
B) Compare the data for SST with the model outputs for SST?
Answer?
B.
Suppose Christy wants to compare his data about temperatures at TLT with models.
Does he
A) compare measured TLT with modelled TLT?
B) compare measured TLT with modeled skin temperature?
Answer
A
Suppose you want to compare global temperature INDEXES with modelled output.
In the past everyone took a shortcut. Even the IPCC.
Global temperature indexes are a combination of SST and SAT.
In the past everyone just went to the CMIP data vault or KNMI and pulled down modelled SAT (tas in the GCM world )
So in the past they compared
SST+SAT versus SAT
Well, that’s wrong. It’s always been wrong. You need apples and apples.
The next issue is masking
Do you
A) Compare the model results AT THE LOCATIONS where you have data?
B) compare the averages of the data with the average of the model?
Answer A
The actual problem is not the data. The problem is the models overestimate the warming in the Arctic while they simultaneously underestimate the ice loss.
So, they are not talking about adjustments to data. It’s about comparing apples with apples.
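The coverage-masking idea above can be sketched in a few lines of Python. This is a toy with invented numbers (three made-up latitude bands, not the paper’s actual HadCRUT4 grid or CMIP output): average the model only where observations exist, rather than comparing a full-field model average against a partial-coverage observed average.

```python
# Toy illustration of coverage masking (hypothetical numbers, not real data).
# Three latitude bands; the Arctic warms fastest in the "model" but has
# no observations, as in the sparse historical network.
model_trend = {"tropics": 0.10, "midlat": 0.15, "arctic": 0.40}  # degC/decade
has_obs     = {"tropics": True, "midlat": True,  "arctic": False}

# Naive comparison: the full-field model average.
full_model_avg = sum(model_trend.values()) / len(model_trend)

# Apples-to-apples: average the model only in bands with observations.
sampled = [t for band, t in model_trend.items() if has_obs[band]]
masked_model_avg = sum(sampled) / len(sampled)

print(round(full_model_avg, 3))    # → 0.217, the model "runs hot"
print(round(masked_model_avg, 3))  # → 0.125, what the network would see
```

A real calculation would also weight bands by area (cosine of latitude); the point here is only that masking the model to the observed coverage lowers its apparent trend without touching any observation.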

Eugene WR Gallun
Reply to  Steven Mosher
July 24, 2016 1:10 pm

Steven Mosher —
Waving your magic hands over an orange and claiming to have turned it into an apple and then comparing it to a real apple is not how science is done.
Eugene WR Gallun

Marcus
Reply to  Steven Mosher
July 24, 2016 1:10 pm

“A new NASA-led study finds that almost one-fifth of the global warming that has occurred in the past 150 years has been missed by historical records due to quirks in how global temperatures were recorded.”…
Nothing about “comparisons”…He clearly states the problem is “quirks in how global temperatures were recorded.”..

Reply to  Steven Mosher
July 24, 2016 2:38 pm

“Fundamental misunderstanding of the paper.
Its not about adjusting the data.
its about comparing APPLES and APPLES”

Mosh is right, and the headline is wrong. There is no mention of adjusting data in either press release or abstract. In fact what the study does is to process the model output in the same way as HADCRUT4 is compiled, rather than vice versa. Then they find that the models agree with HAD4.
It’s not all that new. Cowtan, Hausfather et al (2015) made the comparison of indices with blended land/sea model output. And Cowtan and Way showed the effect of properly weighting polar region results in HADCRUT on the global index. Finally, the problem of the transition of over-ice temperature to over-water is familiar to anyone who has tried to compile a global average. It’s real.

Tom in Florida
Reply to  Nick Stokes
July 24, 2016 5:33 pm

It’s only real if “global average” had any real meaning.

Kurt
Reply to  Nick Stokes
July 24, 2016 5:47 pm

Mosh is wrong and the headline is right.
The relevant quote from the press release accompanying the paper is that “one-fifth of the global warming that has occurred in the past 150 years has been missed by historical records.” Clearly, the conclusion the paper is trying to draw is that the data is wrong and the models are right, hence the trend in the data needs to be adjusted upwards by at least 19%. You’re trying to have your cake and eat it too. You first focus on the methodology of the paper, which, as you say, massaged the model output to match the observed warming trend that is much lower than the models. But by irrationally assuming that this somehow validates the trends shown by the non-adjusted output of models, the paper advocates for an adjustment of the observational data to match the models. You then say, “No, no . . . the paper isn’t adjusting the data at all, it’s adjusting the model output.”

Reply to  Nick Stokes
July 24, 2016 6:23 pm

“hence the trend in the data needs to be adjusted upwards by at least 19%”
The headline says the observations need to be adjusted by 19% (of what?). That is in no way supported by press release or abstract. You are paraphrasing as “trend in the data” needs to be adjusted. You actually mean “trend deduced from the data”. And that is quite different from adjusting the data. What they point out is that if you use the same deduction process for the models as used for HAD4, you get a 19% lower trend. So yes, it’s reasonable to say that means that if the data was analysed in the same way as the models, the trend would be 19% higher. They are suggesting a better way of deducing the global trend from data, not adjusting the data itself.

gnomish
Reply to  Nick Stokes
July 24, 2016 7:19 pm

has anybody read this paper?
because
Nature.com is DOWN for everyone.
It is not just you. The server is not responding…
http://www.isitdownrightnow.com/nature.com.html
and so it has been since this article was posted.

Gerald Machnee
Reply to  Nick Stokes
July 24, 2016 7:29 pm

Nick Stokes:
**They are suggesting a better way of deducing the global trend from data, not adjusting the data itself.**
As usual, Nick is going around in circles. They are already adjusting the data. What they are implying here is that they are not adjusting it enough. Yes, they said they are 19 percent off. They can now justify adjusting some more. See my solution down a bit lower. We can shut down all stations for 100 % accuracy.

Reply to  Nick Stokes
July 24, 2016 8:38 pm

“They are already adjusting the data. What they are implying here is that they are not adjusting it enough.”
Actually, none of the authors is responsible for handling (or adjusting) a temperature database. And no observed temperatures were modified in the study. One author, Kevin Cowtan, has an account here. They simply took GCM output, and summed it as if it were only known at the locations where HADCRUT had measurements. Then they compared with published HADCRUT. They observed the diminished warming.
They make no suggestion that observations should be modified, and indeed that would not be a useful remedy for this particular issue.

Kurt
Reply to  Nick Stokes
July 24, 2016 8:39 pm

“What they point out is that if you use the same deduction process for the models as used for HAD4, you get a 19% lower trend. So yes, it’s reasonable to say that means that if the data was analysed in the same way as the models, the trend would be 19% higher. They are suggesting a better way of deducing the global trend from data, not adjusting the data itself.”
You’re engaging in sophistry to avoid the whole point of the press release, and presumably the study if accurately represented by the press release (and I’ve seen some press releases that don’t). Specifically, you’re confusing the procedure they used to validate the models (changing the model output) with the proposed solution (change the observational data to match the models).
First, there’s no meaningful distinction between “adjusting the data” and “a better way of deducing the global trend from the data.” The data is what it is, and the trend in the data is solely defined by the numbers comprising the data. You can’t separate the two and pretend that you can alter the trend of the data without altering the data itself. Whether you go through and adjust each entry in the data, every third entry in the data, or ignore the details altogether and cut to the chase by doing some hand-waving “add 19%” to the trend line of the data to match what you think the trend line should have been, you’re still substantively changing the data. You’re just quibbling that the procedure doesn’t bother to go through and adjust all of the numbers so that the real mathematical trend of the data is the new and improved trend.
Second, how precisely would you analyze the data in the same way as the models? The data are merely temperature readings from fixed locations at certain times. The models attempt to simulate how the climate works and produce theoretical temperatures that don’t match the data. The assumption of the study is that the models are right and the data is wrong, on the argument that you can get the modeled temperatures to produce the lower trend of the data if you change the modeled temperatures using some conversion loosely based on some “quirks” in the historical data. Note that the argument of the paper, in true Orwellian fashion, is that the observational data somehow validates a modeled trend 19% higher than its actual trend. (I’m not arguing that the model trend is necessarily wrong – I’m just observing the silliness of arguing that observational data can validate something so far off from what it objectively shows.)
Just because the paper starts from the modeled temperatures and gets to the lower trend of the observational data does not indicate any technique that would start from the observational data and manipulate it so that its trend is equal to the average modeled trend. And if you were able to do this (and maybe the actual paper behind the paywall does suggest such a technique), certainly then you would have to concede that such a procedure would change the observational data. It would have to.
But because the press release provides no such details and just offers a shortcut of adding 19% to the observational trend, you’re using the fact that they don’t expressly spell out a technique that moves from the raw observational to adjusted data that has a trend as high as the models suggest, so as to simultaneously argue that the paper doesn’t change the observational data BUT that the observational data nonetheless verifies a modeled trend 19% higher than the actual trend of the actual observational data.
It doesn’t matter whether the study’s methodology, in its validation step, changes the model output to match the observational data, or changes the observational data to match the modeled output. Since the paper clearly comes down on the side of adjusting the meaning of the observational data to quantitatively match the theoretical results of the mathematical model, and does so on the false implication that the model’s higher trend has been validated by the observations, the conclusion of the paper is that the observational data should be changed. That they haven’t spelled out the procedure to do so, yet, doesn’t change this basic truth.

Reply to  Nick Stokes
July 24, 2016 8:56 pm

“Specifically, you’re confusing the procedure they used to validate the models (changing the model output) with the proposed solution (change the observational data to match the models).”
It isn’t the proposed solution. Where do you see it proposed?
“The data is what it is, and the trend in the data is solely defined by the numbers comprising the data. You can’t separate the two and pretend that you can alter the trend of the data without altering the data itself.”
You seem to have no idea of how a global trend is calculated. You have a whole lot of sampled temperatures over space (Earth surface) at a whole lot of times – about 2000 months since 1850. You have to first compute an average over space from the samples at each point in time. There is no one method for doing that. HADCRUT makes a grid and forms cell averages where there are points with data, and averages those omitting cells without data. Added to this is the business of calculating anomalies. There is no “solely defined” number; there are different ways of doing it and I think few would claim that the HADCRUT method is obviously the best. When you have averaged in space you have to do a trend estimate in time. That is more standard, though even then you might argue for weighting with respect to error estimate etc.
All of this calculation happens after you decide on what the temperatures actually are, and involves no adjustment of those temperatures.
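The gridded averaging described here can be sketched as follows. This is a simplified toy, not the actual HadCRUT4 code: cells are weighted by the cosine of latitude (a proxy for cell area on the sphere), and cells with no data are simply omitted, which is the behaviour being discussed; the anomaly values are invented.

```python
import math

# Toy HadCRUT-style spatial average: anomalies are averaged within each
# grid cell, cells are weighted by cos(latitude), and cells with no data
# are omitted entirely rather than infilled.
def grid_average(cells):
    """cells: list of (latitude_deg, [anomalies]) pairs; [] means no data."""
    wsum, total = 0.0, 0.0
    for lat, anomalies in cells:
        if not anomalies:              # empty cell: skipped, not infilled
            continue
        w = math.cos(math.radians(lat))
        total += w * (sum(anomalies) / len(anomalies))
        wsum += w
    return total / wsum

# A fast-warming polar cell with no observations simply drops out,
# pulling the "global" average toward the sampled lower latitudes.
cells = [(0.0, [0.2, 0.3]), (45.0, [0.4]), (80.0, [])]
print(round(grid_average(cells), 3))  # → 0.312
```

Note that no observed temperature is modified anywhere in this procedure; the choices are all in how the samples are combined into one number.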

Reply to  Nick Stokes
July 24, 2016 9:09 pm

That’s all fine and I concede the points made by Steven Mosher and Nick Stokes.
However, what I can not accept, is that climatologists hadn’t thought of this until now!
It is imbecilic to even entertain the notion promoted by this paper, that the erroneous comparisons made to date, were all just an oversight.
The graphic use of overcooked model output served the purpose of frightening the world. But the growing discrepancy between the models and data products such as CRUD4 was becoming a political liability. Hence the dissembling climbdown and assimilating clawback. The whole thing is a disgustingly shameless simulation of the real. The paper is nothing more than a perverted simulacrum of science. It is propaganda of the purest kind.

The simulacrum is never that which conceals the truth – it is the truth which conceals that there is none.

Reply to  Nick Stokes
July 24, 2016 9:42 pm

“However, what I can not accept, is that climatologists hadn’t thought of this until now!”
They have. From the press release:
“Scientists have known about these quirks for some time, but this is the first study to calculate their impact.”
People have looked at the forward direction – how to best calculate an average from the observed data. And that is the ultimate problem. This paper looks at the reverse problem, which quantifies the bias, so it is useful. But it doesn’t solve the problem of how to average the observed data while avoiding the bias.

Dr. S. Jeevananda Reddy
Reply to  Nick Stokes
July 24, 2016 10:38 pm

Nick Stokes – “HADCRUT makes a grid and forms cell averages where there are points with data, and averages those omitting cells without data.” — If the grid or grids represent a particular Climate System and/or General Circulation pattern, the whole process of averaging is erroneous. Based on the CS and GCP, the empty grids must be filled through extrapolation and interpolation to present better-quality data. Here the major factor in the land data is urban versus rural conditions. This must be covered.
Dr. S. Jeevananda Reddy

Reply to  Nick Stokes
July 24, 2016 10:53 pm

“through extrapolation and interpolation the empty grid must be filled to present better quality data”
Yes, I think they should. Discussed here.

Kurt
Reply to  Nick Stokes
July 24, 2016 11:57 pm

Nick – you seem to have proven the whole point of my post. I’m fully aware of how temperature data is spatially gridded before it’s temporally plotted, but we are talking about the temporal trend here. All those excruciating details of the procedures used to derive the spatially averaged gridded “data” simply demonstrate all the possible ways that climate scientists can, in the future, manipulate the temporally plotted “data” to raise the trend by 19% – as if there’s a meaningful difference between writing new average numbers into your gridded “data” set and devising new ways of spatially blending raw temperature readings to arrive at a gridded and averaged temporal “data” set whose least-squares trend line matches the models.
As implied by the quotation marks around the word “data” in the preceding paragraph, you’re also adopting an all-too-conveniently stunted definition of the word “data.” Gridded and averaged observed temperature “data” is ubiquitously plotted in graphs from, say, 1880 to the present, or some smaller interval, with a trend line through it. You’d have us believe that the points in the graph, the 5-year running means, and the trend lines don’t qualify as “data” because they represent some blended average of raw temperature-sensor numbers, and that only the initial step of changing those raw numbers via infilling or bias adjustments qualifies as “adjusting data.” That’s silly. If the plotted annual or monthly points in these graphs qualify as “data” – and they do – then what I said in my earlier post was absolutely correct: “the trend in the data is solely defined by the numbers comprising the data. You can’t separate the two and pretend that you can alter the trend of the data without altering the data itself.” Whatever procedure is used to massage the data prior to plotting it on the graph doesn’t strip it of its character as observational data.
You adopt this crabbed interpretation of “changing data” as a semantic deception to avoid the plain implication of the press release for the paper – that the temporal trends shown by the gridded and averaged observational data are wrong and should be adjusted upwards to match those of the models, which are supposedly correct. Yes, data is changed when you arbitrarily adjust raw temperature readings by infilling, by correcting for TOB, etc., but that doesn’t mean data is NOT changed when you subjectively select a technique for spatially averaging those temperature readings to plot on a graph, or even by just pretending that the trend through the plotted data is something different by adding a number to its slope. It’s an inferential change in this latter case, but you’re still changing the data by pretending it says something it doesn’t.
And the title of the post, which you say is inaccurate, reads “NASA: Global Warming Observations Need a Further 19% UPWARD Adjustment.” Is this title supported by the press release? Well, in the opening sentence, the press release states that a “new NASA-led study finds that almost one-fifth of the global warming that has occurred in the past 150 years has been missed by historical records.” Those are their words – not mine. As is the language that “these quirks [in the historical record] hide around 19 percent of global air-temperature warming since the 1860s.” These quotes certainly imply that the historical data needs an upward adjustment in its trend line. You’re clearly ignoring the plain meaning of the word “need” in the title. And note that they’re not complaining about methods; they’re complaining about deficiencies in the data.
Again, think through your position for consistency. The whole thrust of this paper is that the historical data is supposedly inaccurate because it has quirks that “hide” 19% of the “real” trend shown by the models. The paper supposedly demonstrates that the trend shown by the models is the “real” trend by validating the models against the historical temperature record WITHOUT MODIFYING IT. If the paper suggests, as you acknowledge, a “better way” of deducing a time-plotted trend from the historical record, different from the existing time-plotted trend, wouldn’t this of necessity mean that the plotted observational data – through which the trend line that supposedly validated the computer models was drawn – would have to be changed if that same data were to have a 19% steeper trend line through it?
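Kurt’s claim that the trend is fully determined by the plotted numbers can be illustrated with a toy least-squares fit (the anomaly series below is synthetic, and the 19% figure is just the one quoted from the press release): because ordinary least squares is linear in the data, scaling the data scales the slope by exactly the same factor, so there is no way to move the trend without moving the data.

```python
import numpy as np

# Synthetic annual anomalies, 1880-2015 (hypothetical numbers):
# a 0.007 degC/yr drift plus random noise
years = np.arange(1880, 2016)
rng = np.random.default_rng(0)
anoms = 0.007 * (years - 1880) + rng.normal(0.0, 0.1, years.size)

slope = np.polyfit(years, anoms, 1)[0]           # OLS trend, degC/yr
scaled = np.polyfit(years, anoms * 1.19, 1)[0]   # scale the data itself
print(slope, scaled / slope)  # the slope scales by exactly 1.19
```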

Reply to  Nick Stokes
July 25, 2016 1:46 am

It’s true that the press release reflects a journalist’s shaky grasp of the paper. So when he says, for example, that warming was “missed by historical records”, that is a muddled rendition of the main issue in the paper, which is the limited coverage of the HADCRUT set. But you can’t correct that coverage by adjusting observations, and there is no suggestion of such adjustment in the abstract, nor in Cowtan’s extended summary.
” If the plotted, annual or monthly points in these graphs qualify as “data” – and they do”
Well, they aren’t observations. And they aren’t things that you would, in normal usage, adjust. They are computed from actual monthly station (and other) data by a complicated spatial process during which many decisions are made. And these decisions can be reviewed and improved. More relevantly for this paper, their consistency (model average vs index) can be checked.
The paper is in fact about sensitivity, and in particular TCR. And what they do do is stated in the last sentence of the abstract:
“Correcting for these biases and accounting for wider uncertainties in radiative forcing based on recent evidence, we infer an observation-based best estimate for TCR of 1.66 °C, with a 5–95% range of 1.0–3.3 °C, consistent with the climate models considered in the IPCC 5th Assessment Report.”
IOW, they identify a bias and correct (adjust) their estimate of TCR. This is a long way from adjusting observations.

Hugs
Reply to  Nick Stokes
July 25, 2016 3:40 am

Nick, thanks for stopping by.
Eric said
‘NASA researcher Mark Richardson has completed a study which compares historical observations with climate model output, and has concluded that historical observations have to be adjusted, to reconcile them with the climate models.’
This is not right: the observations are not adjusted, only the way the model runs are compared to them.

Reply to  Nick Stokes
July 25, 2016 10:18 am

Why, of course Mosh is right; Mosh and Stokes are always right, according to Stokes and Mosh. There, I just did what this NASA study did.

Leonard Lane
Reply to  Nick Stokes
July 26, 2016 11:22 pm

So it is OK to take the Arctic, with its scant data, claim it reduces the warmth of the modeled data anomalies for the entire planet, then use it to estimate something with a failed climate-model anomaly, write it up, and call it the truth?
Hmm, fundamental NASA research techniques huh?

Reply to  Steven Mosher
July 24, 2016 11:37 pm

Steven Mosher July 24, 2016 at 12:42 pm
”So, they are not talking about adjustments to data. Its about comparing apples with apples.”
Not quite. They’re comparing apples to what may be apples. It is not established yet that they are apples. It is only a possibility. That means that the warming rate discrepancy on its own may not invalidate the models. But, failure to invalidate does not mean the models are validated.
Nick Stokes July 24, 2016 at 6:23 pm
”They are suggesting a better way of deducing the global trend from data, not adjusting the data itself.”
Again, no. They are suggesting one possibility for which the models could be accurate even though the data disagree with them. It is a long leap of faith from there to claim that this establishes a better way to deduce the global trend.
The models are only a possibility. The data do not uniquely fit them out of all the possible actual dynamics. Ergo, you can draw no firm conclusions based on mere consistency over a finite region of time and space.

Reply to  Bartemis
July 25, 2016 12:20 am

“They are suggesting one possibility for which the models could be accurate even though the data disagree with them.”
Saying the models disagree with data is meaningless on its own; each is a mass of disparate numbers. It only becomes meaningful if you deduce from each a statistic, like a trend, that is supposed to be comparable. Apples. And that is the issue here – which statistics are comparable. Richardson et al showed that if you use the reduced coverage of HADCRUT for both, you get agreement. That doesn’t prove that either is right, but it gives confidence.
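The subsampling Nick describes – apply the observational coverage mask to the model output before comparing trends – can be sketched in a few lines (a toy model field with exaggerated Arctic warming, not Richardson et al.’s actual code):

```python
import numpy as np

def masked_trend(field, obs_mask, lats):
    """Trend (degC/yr) of the area-weighted global mean, computed
    only over cells where obs_mask is True.
    field: (year, lat, lon) array; obs_mask: (lat, lon) booleans."""
    w = np.cos(np.radians(lats))[:, None] * obs_mask
    means = (field * w).sum(axis=(1, 2)) / w.sum()
    return np.polyfit(np.arange(field.shape[0]), means, 1)[0]

# Toy "model" run: cells north of 60N warm at 0.03 degC/yr, rest at 0.01
lats = np.arange(-87.5, 90.0, 5.0)
rates = np.where(lats >= 60, 0.03, 0.01)        # per latitude band
years = np.arange(50)
field = years[:, None, None] * rates[None, :, None] * np.ones((1, 36, 72))

t_full = masked_trend(field, np.ones((36, 72), dtype=bool), lats)
t_masked = masked_trend(field,
                        np.broadcast_to((lats < 60)[:, None], (36, 72)),
                        lats)
print(t_full, t_masked)  # masking out the Arctic lowers the model's trend
```

The same model field shows a lower trend once its Arctic cells are masked out, which is the sense in which the reduced-coverage model and the reduced-coverage observations can agree while the full-coverage model trend remains higher.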

Kurt
Reply to  Bartemis
July 25, 2016 1:12 am

“Richardson et al showed that if you use the reduced coverage of HADCRUT for both, you get agreement. That doesn’t prove that either is right, but it gives confidence.”
Richardson et al., at least in the University of York web post you linked to above, didn’t say anything at all about confidence in which was right. In fact, they ducked the question by simply saying that the observed data and the model data measure different things, and that since TCR is defined in terms of air temperature, the model data should be used as a representation of TCR and the observed data used for some other metric, TBD later.
Also, the apples-to-apples comparison showed agreement with the lower trend. We should at least be able to conclude that the observational data does not agree with the higher trend of the models. It may not refute the higher model trend, but it does not agree with it. Richardson et al. seem to make this assessment:
“We draw the weaker conclusion that the historical record offers no reason to doubt the estimates of climate sensitivity from climate models.”
We don’t have the full paper, but this seems like another example of a government agency grossly exaggerating the published conclusions of a scientific article. Shocking, I know.

Reply to  Bartemis
July 25, 2016 4:15 am

“We should at least be able to conclude that the observational data does not agree with higher trend of the model.”
No, what it shows is that when reduced to the lower coverage of HADCRUT4 (the main effect), with an SST estimate for the ocean (and the ice border estimated), the model results agree with HADCRUT. The reasonable expectation is that if HADCRUT’s limitations could be overcome and it were extended to the complete coverage and consistent air-temperature measure of the models, H4 would match the higher trend. But we don’t know how to do that, so it is not proved.

Reply to  Bartemis
July 25, 2016 1:41 pm

“That doesn’t prove that either is right, but it gives confidence.”
Only if you are predisposed to it. In reality, all it shows is effectively the agreement of a curve fit over a finite interval. Extrapolating a curve fit beyond the region of the fit is always fraught with peril.

Science or Fiction
July 24, 2016 1:00 pm

“The agency freely shares this unique knowledge and works with institutions around the world to gain new insights into how our planet is changing.”
And:
“Read more (paywalled): http://www.nature.com/nclimate/journal/vaop/ncurrent/full/nclimate3066.html
The quirks at NASA wouldn’t discover an inconsistency even if it was sitting on their nose.

gnomish
Reply to  Science or Fiction
July 25, 2016 10:40 am

and unpaywalled. this is here. this is now. this is a freakshow, baby, anyhow:
richardson2016 – Reconciled climate response estimates from climate models and the energy budget of Earth
https://www.sendspace.com/file/vhn1kk
P-}

gnomish
Reply to  gnomish
July 25, 2016 10:41 am

i mean- all these comments and nobody read it… omigosh…
get some.

Gamecock
July 24, 2016 1:00 pm

The future isn’t what it used to be.