NASA: Global Warming Observations Need a Further 19% UPWARD Adjustment


Guest essay by Eric Worrall

NASA researcher Mark Richardson has completed a study which compares historical observations with climate model output, and has concluded that the historical observations have to be adjusted to reconcile them with the climate models.

The JPL Press Release:

A new NASA-led study finds that almost one-fifth of the global warming that has occurred in the past 150 years has been missed by historical records due to quirks in how global temperatures were recorded. The study explains why projections of future climate based solely on historical records estimate lower rates of warming than predictions from climate models.

The study applied the quirks in the historical records to climate model output and then performed the same calculations on both the models and the observations to make the first true apples-to-apples comparison of warming rates. With this modification, the models and observations largely agree on expected near-term global warming. The results were published in the journal Nature Climate Change. Mark Richardson of NASA’s Jet Propulsion Laboratory, Pasadena, California, is the lead author.

The Arctic is warming faster than the rest of Earth, but there are fewer historic temperature readings from there than from lower latitudes because it is so inaccessible. A data set with fewer Arctic temperature measurements naturally shows less warming than a climate model that fully represents the Arctic.

Because it isn’t possible to add more measurements from the past, the researchers instead set up the climate models to mimic the limited coverage in the historical records.

The new study also accounted for two other issues. First, the historical data mix air and water temperatures, whereas model results refer to air temperatures only. This quirk also skews the historical record toward the cool side, because water warms less than air. The final issue is that there was considerably more Arctic sea ice when temperature records began in the 1860s, and early observers recorded air temperatures over nearby land areas for the sea-ice-covered regions. As the ice melted, later observers switched to water temperatures instead. That also pushed down the reported temperature change.

Scientists have known about these quirks for some time, but this is the first study to calculate their impact. “They’re quite small on their own, but they add up in the same direction,” Richardson said. “We were surprised that they added up to such a big effect.”

These quirks hide around 19 percent of global air-temperature warming since the 1860s. That’s enough that calculations generated from historical records alone were cooler than about 90 percent of the results from the climate models that the Intergovernmental Panel on Climate Change (IPCC) uses for its authoritative assessment reports. In the apples-to-apples comparison, the historical temperature calculation was close to the middle of the range of calculations from the IPCC’s suite of models.

Any research that compares modeled and observed long-term temperature records could suffer from the same problems, Richardson said. “Researchers should be clear about how they use temperature records, to make sure that comparisons are fair. It had seemed like real-world data hinted that future global warming would be a bit less than models said. This mostly disappears in a fair comparison.”

NASA uses the vantage point of space to increase our understanding of our home planet, improve lives and safeguard our future. NASA develops new ways to observe and study Earth’s interconnected natural systems with long-term data records. The agency freely shares this unique knowledge and works with institutions around the world to gain new insights into how our planet is changing.

For more information about NASA’s Earth science activities, visit:

http://www.nasa.gov/earth

Read more: http://www.jpl.nasa.gov/news/news.php?feature=6576
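Mechanically, the masking and blending the press release describes can be sketched in a few lines of Python. This is not the authors’ code; the function names, array shapes, and the cos-latitude weighting are illustrative assumptions about how such a comparison is set up.

import numpy as np

def mask_and_blend(tas, tos, land_or_ice, observed_mask):
    """Sample model output the way a HadCRUT4-style record samples the world.

    tas           -- model near-surface air temperature, shape (time, lat, lon)
    tos           -- model sea surface (water) temperature, same shape
    land_or_ice   -- True where air temperature would have been recorded
                     (land, or sea-ice-covered ocean in the early record)
    observed_mask -- True (time, lat, lon) only where the historical record
                     actually has a measurement
    """
    # Blend: air temperature over land/ice, water temperature over open ocean.
    blended = np.where(land_or_ice, tas, tos)
    # Mask: drop model values where the record has no data, so the model's
    # full global coverage no longer counts in the average.
    return np.where(observed_mask, blended, np.nan)

def masked_global_mean(field, lats):
    """Area-weighted mean over the surviving cells (weights ~ cos latitude)."""
    w = np.cos(np.deg2rad(lats))[None, :, None] * np.ones_like(field)
    w = np.where(np.isnan(field), 0.0, w)
    return np.nansum(field * w, axis=(1, 2)) / w.sum(axis=(1, 2))

Comparing masked_global_mean(mask_and_blend(...), lats) against the observational index is, as far as the press release describes it, the “apples-to-apples” comparison the study claims to make.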

The abstract of the study:

Reconciled climate response estimates from climate models and the energy budget of Earth

Climate risks increase with mean global temperature, so knowledge about the amount of future global warming should better inform risk assessments for policymakers. Expected near-term warming is encapsulated by the transient climate response (TCR), formally defined as the warming following 70 years of 1% per year increases in atmospheric CO2 concentration, by which point atmospheric CO2 has doubled. Studies based on Earth’s historical energy budget have typically estimated lower values of TCR than climate models, suggesting that some models could overestimate future warming [2]. However, energy-budget estimates rely on historical temperature records that are geographically incomplete and blend air temperatures over land and sea ice with water temperatures over open oceans. We show that there is no evidence that climate models overestimate TCR when their output is processed in the same way as the HadCRUT4 observation-based temperature record [3,4]. Models suggest that air-temperature warming is 24% greater than observed by HadCRUT4 over 1861–2009 because slower-warming regions are preferentially sampled and water warms less than air [5]. Correcting for these biases and accounting for wider uncertainties in radiative forcing based on recent evidence, we infer an observation-based best estimate for TCR of 1.66 °C, with a 5–95% range of 1.0–3.3 °C, consistent with the climate models considered in the IPCC 5th Assessment Report.

Read more (paywalled): http://www.nature.com/nclimate/journal/vaop/ncurrent/full/nclimate3066.html
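Two quick arithmetic checks on the numbers above. First, the TCR definition is self-consistent: 70 years of compounded 1% per year growth is indeed almost exactly a doubling of CO2,

$$1.01^{70} = e^{70\ln 1.01} \approx e^{0.697} \approx 2.01.$$

Second, the press release’s “19 percent missed” and the abstract’s “24% greater” are the same bias expressed against different baselines (assuming, as the wording suggests, that both figures describe the same 1861–2009 warming):

$$T_{\mathrm{air}} = 1.24\,T_{\mathrm{obs}} \;\Rightarrow\; \frac{T_{\mathrm{air}} - T_{\mathrm{obs}}}{T_{\mathrm{air}}} = \frac{0.24}{1.24} \approx 0.19.$$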

Frankly I don’t know why the NASA team persist in trying to justify their increasingly ridiculous adjustments to real world observations – they seem to be receiving all the information they think they need from their computer models.

Bitter&Twisted
July 24, 2016 8:01 am

They have no shame.

Phillip Bratby
Reply to  Bitter&Twisted
July 24, 2016 10:43 am

It’s not obvious why “they”, the JPL, are involved in climate studies. Anybody know what the climate has to do with jet propulsion?

Reply to  Phillip Bratby
July 24, 2016 10:48 am

because SHUTUP look a squirrel…

Editor
Reply to  Phillip Bratby
July 24, 2016 10:57 am

Money, over $200b so far, available to any and all who will support global warming alarmism. Thank you Al Gore. Part of the permanent bureaucracy now.

dudey
Reply to  Phillip Bratby
July 24, 2016 11:16 am

JPL is a captive FFRDC (Federally Funded Research and Development Center) for NASA. See https://en.wikipedia.org/wiki/Federally_funded_research_and_development_centers
They provide technical services to NASA and exist for that purpose. That includes satellites and other items of interest to NASA.

Greg
Reply to  Phillip Bratby
July 24, 2016 12:11 pm

Oh, technical services like providing excuses as to why their models don’t work? Nice. This may not work out quite the way they hoped.
I recently published an article on the spurious idea of “averaging” water and air temperatures on Climate Etc. I’m glad it has caused some reflection (even if they got the implications inside out).
https://judithcurry.com/2016/02/10/are-land-sea-temperature-averages-meaningful/

These quirks hide around 19 percent of global air-temperature warming since the 1860s.

Now models are tuned primarily to reproduce the (apparently defective) water+air climate record of this period. So if the (air) warming was actually greater than their defective, mushy, non-scientific “averages”, then the tuning they did to reproduce it will be wrong. Their models will not be sensitive enough; they were tricked into being less sensitive by being tuned to a mistakenly small rate of warming.
So now they need to fiddle the fudge factors to accurately reproduce the air temperature record, not the mixed up, unscientific land+sea “averages”. They can then run their MORE sensitive models. Project them forwards from 1990 and come back and tell us if it works any better than the current plan.
Sorry guys, you need to think this through. You can’t have it both ways. You can’t tune to land+sea then start making excuses that we should only be looking at air temps. Apples and oranges, as you say.
The other problem is that the worst discrepancy is not with the surface land+sea mish-mash but with the lower tropo satellite retrievals. And UAH TLT is air temperatures only.
Nice try fellas, but no cookie.


Bryan A
Reply to  Phillip Bratby
July 24, 2016 7:59 pm

Some interesting points

First, the historical data mix air and water temperatures, whereas model results refer to air temperatures only. This quirk also skews the historical record toward the cool side, because water warms less than air
IF the historical record is truly skewed towards the cool side then why was the temperature data also screwed with to cool the past? Wasn’t it already skewed enough to the cold side without being screwed and skewed?
Then we have

The final issue is that there was considerably more Arctic sea ice when temperature records began in the 1860s, and early observers recorded air temperatures over nearby land areas for the sea-ice-covered regions. As the ice melted, later observers switched to water temperatures instead. That also pushed down the reported temperature change.
So if more open water temperatures lead to lower temperature changes, shouldn’t this indicate the temperature would be lower today given the current amount of open water?
And finally this
Per the quote above, how exactly do they Know there was “Considerably more Arctic Sea Ice” back in 1860? All of the Arctic they would have known would be where ships had tried to travel, but certainly not the entire Arctic region.

JohnWho
Reply to  Bitter&Twisted
July 24, 2016 11:01 am


Bitter&Twisted
July 24, 2016 at 8:01 am
“They have no shame.”

My thoughts exactly.
If the data doesn’t fit your conjecture, adjust the data.

Mjw
Reply to  JohnWho
July 25, 2016 1:42 am

Feynman was wrong.

catweazle666
Reply to  JohnWho
July 25, 2016 4:04 pm

“The data doesn’t matter. We’re not basing our recommendations on the data. We’re basing them on the climate models.”
~ Prof. Chris Folland ~ (Hadley Centre for Climate Prediction and Research)

Leonard Lane
Reply to  JohnWho
July 26, 2016 10:57 pm

And, NASA used taxpayer owned data to adjust, then taxpayer funded models to model, then taxpayer money to go back and adjust, and then taxpayer money to write us this tripe, and finally taxpayer money to publish it. This is an excellent example of the Ministry of Truth sucking up public money to change history and make it “truth”. How many thousands of other things has the Ministry of Truth made “true”? We will never know, but we have lost a huge amount of our history, scientific data, and scientific truth. Sadly, they do it in hidden ways so that the real data etc. can never be restored. The old truth is disappeared and the new “truth” lives on.

Reply to  Bitter&Twisted
July 24, 2016 11:22 am

It really is awful.
“there are fewer historic temperature readings from there than from lower latitudes because it is so inaccessible… the researchers instead set up the climate models to mimic the limited coverage in the historical records.”
Mimic? Fabricate is the word. And how do these “researchers” validate their models? Here we go:
“With this modification, the models and observations largely agree on expected near-term global warming.”
What “observations”? The models now agree with the “researchers” expectations. Let’s not bandy words.
This is a travesty. That it was published in a journal that makes claims to be scientific is just horrible. Frighteningly manipulative. Blatantly biased.

ferdberple
Reply to  Bartleby
July 24, 2016 1:22 pm

Notice the study lacks double-blind controls. Of course they will find what they expected to find. There is even a name for this:
The observer-expectancy effect (also called the experimenter-expectancy effect, expectancy bias, observer effect, or experimenter effect) is a form of reactivity in which a researcher’s cognitive bias causes them to subconsciously influence the participants of an experiment. Confirmation bias can lead to the experimenter interpreting results incorrectly because of the tendency to look for information that conforms to their hypothesis, and overlook information that argues against it.[1] It is a significant threat to a study’s internal validity, and is therefore typically controlled using a double-blind experimental design.
https://en.wikipedia.org/wiki/Observer-expectancy_effect

Editor
Reply to  Bartleby
July 24, 2016 1:31 pm

To me, it makes no sense. Before this study, they tuned the models to past temperatures, as part of the process of refining the models. To do that, they could not have used more historical measurements from the Arctic than they use now, because there weren’t any more. So to “set up the climate models to mimic the limited coverage in the historical records” is to do things exactly the same way as before. I find it very difficult to disagree with Bartleby: “This is a travesty. That it was published in a journal that makes claims to be scientific is just horrible. Frighteningly manipulative. Blatantly biased.”. I applaud Bartleby for the restraint shown, and the temperate language used.

Germinio
Reply to  Bartleby
July 24, 2016 2:43 pm

I think you are completely misreading the paper. As I understand it, the authors have taken the output from their global models and processed the data in the identical way to what other researchers have done with the measured historical temperature data, and have found good agreement. Note that in this case there is no adjustment to the raw data and similarly no adjustment to the modelled data, but rather only to the method used to calculate an average temperature from a climate model.

Kurt
Reply to  Bartleby
July 24, 2016 4:10 pm

Germinio – if you are correct, then they got the conclusion precisely backwards. When the model results are limited to a form that can be checked by observations in an apples-to-apples comparison, as they put it, the model results are not as alarming as what the IPCC and others try to scare us with. They’ve simply discovered the reason why models overstate the actual observed temperature trend. What they do with this is pull a bait-and-switch. You’ll notice that the press release cleverly glosses over the implication that their new model output shows a lot less warming than the published IPCC models – that’s how they get their agreement with observations. Having established that the model output CAN be made to agree with real world observations by forcing the model to produce only the kind of output that can be verified by observations, they use that to buttress the IPCC’s “authoritative” assessment reports of impending calamity based on the assumptions of the model that can’t be verified by observations.
Take, for instance, the example given with respect to Arctic coverage. Here’s what they say: “The Arctic is warming faster than the rest of Earth, but there are fewer historic temperature readings from there than from lower latitudes because it is so inaccessible. A data set with fewer Arctic temperature measurements naturally shows less warming than a climate model that fully represents the Arctic.”
This is nonsense. It’s only true if the model shows more warming in the regions not represented by actual station data than the observed trend in the regions that do have station data. The true warming rate in remote regions having no temperature record is just as unknown today as it was yesterday. At the end of the day, all this “study” means is that the warming rate shown by the IPCC models relies on information that can’t be verified by observations, so we’re just going to have to take the IPCC models on faith, even though the historical data does not support a conclusion of an alarming warming rate.

RockyRoad
Reply to  Bartleby
July 25, 2016 11:09 am

It’s far worse than awful–it’s criminal. Criminals extort and lie to advance their cause, and this is exactly that.

catweazle666
Reply to  Bartleby
July 25, 2016 4:12 pm

“The Arctic is warming faster than the rest of Earth, but there are fewer historic temperature readings from there than from lower latitudes because it is so inaccessible. A data set with fewer Arctic temperature measurements naturally shows less warming than a climate model that fully represents the Arctic.”
Perhaps it’s just me that’s a trifle obtuse – but then, I’m not a climate “scientist”, I’m merely a somewhat ancient engineer.
That seems to assert that the more temperature measuring points there are, the higher the average temperature is going to be.

Robert of Ottawa
Reply to  Bitter&Twisted
July 24, 2016 5:04 pm

But they have loads of funding as they produce what their political masters want.

Stephen Greene
Reply to  Bitter&Twisted
July 24, 2016 5:53 pm
george e. smith
Reply to  Bitter&Twisted
July 25, 2016 10:36 am

Well, observations of the system will only match simulations of the model when the model is an accurate model of the real system. It also is helpful; very helpful; excuse me, it’s absolutely necessary, that the observations of the system do comprise a valid sample of a band-limited system.
G

Sean
Reply to  Bitter&Twisted
July 25, 2016 12:56 pm

If this group tried to replicate NASA’s achievement of putting a man on the moon, I believe their results would only be a lot of dead astronauts.

Reply to  Sean
July 25, 2016 1:58 pm

Or if the Earth had 2 moons, they would have landed on the wrong one!

Denis Ables
Reply to  Bitter&Twisted
July 26, 2016 2:07 pm

My goodness! Aren’t NASA’s “new” concerns all about “not enough temperature measurements in the Arctic” (and various other uninhabited regions) a justification for sending up weather satellites? Oops, we have two already, not to mention weather balloon data.
The satellite measurements presumably give equal weight to all these uninhabited areas.

grumpyKoz
Reply to  Bitter&Twisted
July 29, 2016 9:38 am

Well, the really smart people at NASA are still hunting down the facts that show how Muslims have provided massive value to the growth of the U.S., as directed by B. Hussein Obama.

Quinn the Eskimo
July 24, 2016 8:07 am

I’m sure it’s pure coincidence that the adjustments they make land “observations” in the middle of averaged model output.

Marcus
July 24, 2016 8:07 am

“These quirks hide around 19 percent of global air-temperature warming since the 1860s. ”
Ummm, didn’t they “adjust” the old temperatures LOWER and RAISE the more recent temperatures a few years back ???

Latitude
Reply to  Marcus
July 24, 2016 9:52 am

yep….
…and
The final issue is that there was considerably more Arctic sea ice when temperature records began in the 1860s….not

The Original Mike M
Reply to  Latitude
July 24, 2016 11:19 pm

They can fix all of this with a new theory that the freezing and boiling points of water have been increasing with time. No one noticed because we’ve been ignorantly using them to calibrate our thermometers.

Grimbeaconfire
Reply to  Latitude
July 25, 2016 2:18 am

Yeah, sounds like another prediction that is then assumed to be true, similar to “no increase in outgoing radiation” or flooded Pacific islands.

David T. LeBlanc
July 24, 2016 8:09 am

Isn’t this sort of back asswards science? Fudge the data to fit a model?
Is this the sort of thing Bill Nye talks about?

Reasonable Skeptic
Reply to  David T. LeBlanc
July 24, 2016 9:01 am

Well not really, there is no evidence the models are wrong. Since none exists, they are right and the observations must be wrong, so the biases that make them wrong need to be found and removed.
Makes perfect sense if your funding depends on being right.

John Harmsworth
Reply to  Reasonable Skeptic
July 24, 2016 9:32 am

It actually seems that their funding is inversely correlated with the accuracy of their predictions.

Jon
Reply to  Reasonable Skeptic
July 25, 2016 12:08 am

Only a cynic would regard the almost incessant failure of the models to predict temperature as invalidating them.
I’ve realised where those predicted missing 50 million climate refugees are: they must be at the bottom of the ocean with the increased heat!

rocketride
Reply to  Reasonable Skeptic
July 25, 2016 6:06 am

No, it makes perfect sense if your funding depends on telling them what they want to hear.

Reply to  David T. LeBlanc
July 24, 2016 10:06 am

Used to be—before the data was so uncooperative.

Reply to  David T. LeBlanc
July 24, 2016 10:32 am

“95% of the models agree. Therefore the observations must be wrong.”, Roy Spenser (Tongue planted firmly in cheek.)

Jimmy Edwards
Reply to  firetoice2014
July 24, 2016 12:08 pm

Actually it is 97% of the models that agree. This 97% modelling consensus proves that the historical records were deliberately mismeasured at the time to try and hide the horrors behind evil Man Made Global Warming. Thank God Mann et al. were too smart for them.

Reply to  Jimmy Edwards
July 24, 2016 12:20 pm

The 95% is part of a quote from Dr. Spenser. It was repeated verbatim.

AllyKat
Reply to  firetoice2014
July 25, 2016 4:26 pm

Spenser’s words needed to be adjusted to fit the models. 😉

Ernest Bush
Reply to  firetoice2014
July 25, 2016 9:38 pm

Ummmm…It’s Dr. Roy SpenCER. Try to get the name spelled right if you are going to quote him.

Reply to  Ernest Bush
July 27, 2016 5:58 am

Thanks. If I could type, I might be dangerous.

Windchaser
Reply to  David T. LeBlanc
July 24, 2016 11:17 am

Isn’t this sort of back asswards science? Fudge the data to fit a model?

Eh, it’d be backwards if that was what was going on, sure. But it’s not.
They’re saying that what we measured in real life are not exactly the same metrics as what had been reported from the models. We’d been partly comparing apples and oranges. For instance, the models can report their temperatures for the entire surface of the Earth, whereas in real life, our measurements in the Arctic have historically been a bit sparse. You may need to account for that, either by improving your measurements there, or by dropping that region out of what you report from the models.
So, what’s happening is just that they’re saying “hey, let’s make sure we’re doing a fair comparison”. And good comparisons are a good thing in science; you want to make sure that you’re being as true as possible to the data and what it represents.
You have to try pretty hard to skew this into bad science.

Reply to  Windchaser
July 24, 2016 11:39 am

“You have to try pretty hard to skew this into bad science.”
No, you don’t. Fabricating data is not good science. It’s a far cry from saying the initial measurements are bad and there’s no way to correct that. That’s good science. Bad science involves saying we don’t have any data so we’ll make some up, and look, it fits our model! Surprise surprise! Aren’t we just brilliant? Who’d have thunk it!?

marque2
Reply to  Windchaser
July 24, 2016 11:50 am

Then they should re-work the models to see if they match when only using the valid data sets. They aren’t doing that either. Just fabricating some data up north, and claiming the models match.

Eugene WR Gallun
Reply to  Windchaser
July 24, 2016 12:24 pm

Windchaser —
They are being true to the data???? THEY ARE CHANGING THE DATA TO MAKE IT FIT THE MODELS!!!!!
Man is the measure of all things that be
Post Normal Science
Is funding compliance
So I got the data in me!
Do you think we are living in the Matrix — where the computer model creates the world we think is real??????? The computer model determines what reality is???
Windchaser you ought to change your name to Fartseeker. You obviously enjoy that which smells to high heaven.
Eugene WR Gallun

Ironargonaut
Reply to  Windchaser
July 24, 2016 1:12 pm

How can you drop a region from a “global” model and still call it a global model?
What they are saying is tantamount to: we got bad data for the world except NY. So we put in only NY data and adjusted the model (really, how do you adjust a model w/o making it a new model?) and our NEW model matches the observed non-high-warming temperatures for NY; therefore the OLD model, which uses the bad data for input and outputs a higher than observed trend for the world, is correct, and the bad world temperature inputs should be adjusted to show more global warming.

Windchaser
Reply to  Windchaser
July 24, 2016 1:19 pm

Except, Bartleby, no one is fabricating data. They’re seeing what would happen if we applied the same mask to model temperatures that we have in the real-world data. And the result is that there’s about a 20% change, and much better agreement among the models and surface measurements.
See this bit in the press release: “Because it isn’t possible to add more measurements from the past, the researchers instead set up the climate models to mimic the limited coverage in the historical records.”

Then they should re-work the models to see if they match when only using the valid data sets. They aren’t doing that either. Just fabricating some data up north, and claiming the models match.

…did you actually read the paper?
Look, you’ve got a difference between how two different measurements are made – one in real life, one in the models. You can fix that from either side, by changing what you measure in real life so that it’s closer to the model outputs, or doing the same for the model outputs. Or some mix of both.
But the important thing is that you compare apples-to-apples. That it’s a fair comparison. That if some area is left out of one set of data, it’s left out of the other one as well. At least if you intend to compare them.
And it turns out that when you do do a fair comparison, when you adjust for these differences in how the measurements are performed, there’s about a 20% change in the model outputs. And then they match the surface measurements.
No data is fabricated in this process; read the paper and you can see that. They just took the model outputs for temperature that actually corresponded to what we have from real-world measurements. Is there some scientific problem with that, with doing a fair comparison?
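Whatever one makes of the motives argued over here, the coverage effect itself is easy to demonstrate with a toy calculation. The numbers below are invented purely for illustration (they are not from the paper): if a fast-warming high-latitude band is absent from the sample, the sampled mean trend comes out below the true area-weighted mean, without any measurement being altered.

import numpy as np

# Invented zonal-mean trends (degC/decade) on 5-degree latitude bands,
# with amplified warming poleward of 65N and no coverage there.
lats = np.arange(-87.5, 90.0, 5.0)
trends = 0.15 + 0.25 * np.clip((lats - 65.0) / 25.0, 0.0, 1.0)
covered = lats < 65.0  # pretend the historical record stops at 65N

w = np.cos(np.deg2rad(lats))                  # area weight per band
full = np.sum(w * trends) / np.sum(w)         # full-coverage (model-style) mean
sampled = np.sum(w[covered] * trends[covered]) / np.sum(w[covered])
print(f"full: {full:.3f}, sampled: {sampled:.3f} degC/decade")

Masking the model output to the same coverage reproduces the sampled number; that operation, rather than any change to the measurements, is what both sides of this exchange are debating.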

ferdberple
Reply to  Windchaser
July 24, 2016 1:19 pm

let’s make sure we’re doing a fair comparison
=================
You can’t, because the records don’t exist. They are instead fabricating data to compare with the model; they even subconsciously admit their error:
“With this modification, the models and observations largely agree on expected near-term global warming.”
The observer-expectancy effect (also called the experimenter-expectancy effect, expectancy bias, observer effect, or experimenter effect) is a form of reactivity in which a researcher’s cognitive bias causes them to subconsciously influence the participants of an experiment. Confirmation bias can lead to the experimenter interpreting results incorrectly because of the tendency to look for information that conforms to their hypothesis, and overlook information that argues against it.[1] It is a significant threat to a study’s internal validity, and is therefore typically controlled using a double-blind experimental design.
https://en.wikipedia.org/wiki/Observer-expectancy_effect

Windchaser
Reply to  Windchaser
July 24, 2016 1:23 pm

They are being true to the data???? THEY ARE CHANGING THE DATA TO MAKE IT FIT THE MODELS!!!!!

No, they aren’t. Read the press release carefully. Where does it say that they’re changing the data?
It doesn’t. They only masked the model outputs, so that the models are “measuring” the same thing as our real-life measurements.
They literally tell you this. See this quote: “Because it isn’t possible to add more measurements from the past, the researchers instead set up the climate models to mimic the limited coverage in the historical records.”

Kurt
Reply to  Windchaser
July 24, 2016 5:08 pm

Windchaser:
Yes, let’s no longer pay attention to the real data behind the curtain. Just listen to the booming voice of our computer models.
You say “no one is fabricating data.” Of course they are. Say a scientist honestly thinks there is a relationship between X nutrient and cancer prevention, and one day just writes down what he thinks might happen to each of 500 individuals given a daily dose of the nutrient and 500 control subjects, i.e. he just posits the theoretical results of his idea. Subject 1 gets no cancer, subject 2 gets cancer after 15 years, subject 3 gets cancer in 9 years, etc. where the end result is that the imaginary dosage group has a 25% less incidence of cancer than the imaginary control group.
Then, the scientist says “Hey, why should I go through the time and expense of an epidemiological study. I’ll just pretend that what I wrote down was an actual study, calculate out all the statistics, and submit it.” Certainly, everyone would agree that this is a textbook case of fabrication, even if the scientist was absolutely sincere in his belief that X inhibits the incidence of cancer.
But what if the scientist got a computer to do the dirty work for him? Does it cease to be fabrication at that point? Say the scientist has a physical theory as to how the body reacts to substance X so as to reduce the risk of cancer by 25% and just programs a computer to implement this physical process, then calibrates the results to known background cancer rates. The computer spits out “runs” for 500 virtual people who took X and 500 virtual people who didn’t. The scientist collates the results with all the relevant statistics and submits it to a journal for publication, being perfectly upfront about this being a computer simulation. Setting aside whether the scientist avoids the ethical consequences of fabrication by being upfront about the methodology – why would the first example be “fabricating data” but the second example not?
The IPCC computer runs do not produce “data.” They never have and they never will. Yet your garden-variety climate scientist treats it as data. They average the computer runs of a single model, despite the fact that this average has no real world significance at all (this average only telling you the expected result of another X computer runs of that model, as opposed to telling you anything about the real world). They average together the runs of different models (no meaning at all) so they can speculate on what the long-term consequences might be to the hitherto-unknown Western Pin-Striped Marmot that inhabits the remote rocky slopes of Mt. Rainier. They compute ridiculously meaningless “95% confidence intervals” etc. It’s all a joke. A functioning computer can only do precisely what it was told to do by its programmer – no more no less.
The study referenced in this post implicitly concedes that the results of the IPCC computer models, as presented by the IPCC, do not accurately reflect what the observations indicate. It also implicitly concedes that if the computer output of the IPCC models were modified to reflect only information that could be compared to actual observations, then like the observations, the computer models would ALSO show a warming rate much less than that predicted by the IPCC.
Any reasonable person would therefore conclude that there is no SCIENTIFIC evidence, i.e. evidence experimentally confirmed against observations, of dangerous warming. But instead the study, or at a minimum the press release accompanying the study, suggests the opposite conclusion – that because the component of the computer output that can be verified against actual observations matches the relatively low warming rate of the observations, that this somehow (illogically) quantitatively supports the higher warming trend of the non-adjusted “authoritative” IPCC model warming rates, even though this higher warming rate CAN’T be quantitatively verified against observations.
This is propaganda, masquerading as science.

Eugene WR Gallun
Reply to  Windchaser
July 24, 2016 6:16 pm

Windchaser —
Here is what they are doing. They claim that past temperature data is flawed, failing to show enough warming. They specify what they think those flaws are. Then they take their models and apply the same flaws to the model output. Suddenly their model output matches what the measured temp data predicts.
They claim that their models only match the measured data if they DELIBERATELY FLAW THEIR MODELS!
They start out with the assumption that their models are FLAWLESS and claim to demonstrate that only if their models are DELIBERATELY flawed to match the flaws they claim exist in the temperature record will both make the same predictions.
In their twisted minds it then follows that this proves their models truly are flawless, because they only produce poor results if you deliberately flaw them.
But the models have never ever worked!! They are intrinsically flawed within themselves. The flaws in the models have nothing to do with the flaws they claim exist in past temperature data. THIS IS REALLY JUST SIMPLE MISDIRECTION. THEY MISDIRECT BY SAYING — LOOK AT THE FLAWS IN THE TEMPERATURE RECORD! DON’T YOU DARE LOOK AT THE DIFFERENT FLAWS IN OUR FLAWLESS MODELS!
DISGUSTING.
Eugene WR Gallun

Bob Boder
Reply to  Windchaser
July 25, 2016 11:15 am

Windchaser
Wow, that was quick; caught already. By the way, the new name is dumb, and as I said before, you have to change your writing style when you change your name.

Cinaed
Reply to  Windchaser
July 25, 2016 8:56 pm

Climatology is pseudoscience – or junk science. They adjust the data to fit the model.

gofigure560
Reply to  Windchaser
July 26, 2016 2:15 pm

Or you could use satellite data instead, which treats the Earth’s atmosphere reasonably consistently.

Analitik
July 24, 2016 8:09 am

Because it isn’t possible to add more measurements from the past, the researchers instead set up the climate models to mimic the limited coverage in the historical records.

Do these guys have any idea of how the scientific process is supposed to take place?

Reply to  Analitik
July 24, 2016 10:06 am

That’s a rhetorical question, right?

Pop Piasa
July 24, 2016 8:11 am

I guess we skeptics are all “quirks” too. “They’re quite small on their own, but they add up in the same direction,” “We were surprised that they added up to such a big effect.”

Jim G1
July 24, 2016 8:15 am

The corruption of the left knows no boundaries irrespective of the field of endeavor one examines.

Reply to  Jim G1
July 25, 2016 12:31 am

I know JPL are rocket scientists but that doesn’t necessarily make them of the left.
Assuming measurements are wrong so that the pet theory isn’t disproven is not a political decision.
It’s an ethical decision.

Louis Hooffstetter
Reply to  MCourtney
July 25, 2016 6:57 am

It’s an ethical decision to be unethical.

Louis Hooffstetter
Reply to  MCourtney
July 25, 2016 7:01 am

Yes, it is an ethical decision (to be unethical).

Geoff
July 24, 2016 8:16 am

Sounds reasonable. If the modelling isn’t working, then change the physical historical data to the point where it agrees with the models.
Those NASA guys have a sense of humour. Release an April Fools’ Day article 3 months after April Fools’ Day.

Windchaser
Reply to  Geoff
July 24, 2016 1:25 pm

Sounds reasonable. If the modelling isn’t working, then change the physical historical data to the point where it agrees with the models.

Where in the press release does it say they’re changing the physical historical data?
It doesn’t. Read it carefully. You’re being misled.
What they did was mask the model outputs, so that the areas used in the average were the same areas that we have in real life. In other words, they’re just checking that we’re measuring the same thing in a model/reality comparison.

Bob Boder
Reply to  Windchaser
July 25, 2016 11:16 am

troll alert

Leonard Lane
Reply to  Windchaser
July 26, 2016 11:05 pm

I think Ministry of Truth trooper rather than troll.

Pop Piasa
July 24, 2016 8:16 am

Isn’t this quirk thing just like the claim that the skeptic scientists who are retired are senile and neither recall the past properly or comprehend progressive science?

Pop Piasa
Reply to  Pop Piasa
July 24, 2016 8:19 am

Durnit… neither-nor.

Editor
July 24, 2016 8:24 am

The Arctic is no warmer now than the 1930s

John Harmsworth
Reply to  Paul Homewood
July 24, 2016 9:34 am

And what about the Antarctic?

Bindidon
Reply to  John Harmsworth
July 25, 2016 4:17 am

Well, according to RSS3.3 TLT it is on average quite stable:
http://data.remss.com/msu/graphics/TLT/plots/RSS_TS_channel_TLT_Southern%20Polar_Land_And_Sea_v03_3.png
It gets a bit cooler in some places (Halley station, for example), but a bit warmer on the Peninsula and within West Antarctica.

July 24, 2016 8:25 am

A data set with fewer Arctic temperature measurements naturally shows less warming than a climate model that fully represents the Arctic.

What is the science behind this statement?

A C Osborn
Reply to  mpcraig
July 24, 2016 8:32 am

There is none.
BEST have the same bullsh*t idea.

Michael Jankowski
Reply to  A C Osborn
July 24, 2016 9:04 am

Yes, Mosh posted here a few weeks ago that every estimate of Arctic warming underestimates Arctic warming, lol.

Sparks
Reply to  mpcraig
July 24, 2016 9:04 am

Artificial data is needed.

Reply to  Sparks
July 24, 2016 10:34 am

…to replace the real estimates.

Windchaser
Reply to  mpcraig
July 24, 2016 11:20 am

What is the science behind this statement?

– As ice area decreases (as it’s been doing), the surface absorbs more sunlight and it warms more. That’s a feedback.
– Cowtan and Way used satellite measurements to show that the Arctic is warming faster than the average.
– Actual ground measurements show more warming as you get closer to the Arctic, at least before they become too sparse to use.

AndyG55
Reply to  Windchaser
July 24, 2016 1:34 pm

roflmao..
Apart from the El Nino spike in Jan/Feb, there has been ZERO warming in the satellite record for the NoPol for some 20 years.
And before the 1998 El Nino, it was actually COOLING!!

Reply to  Windchaser
July 24, 2016 1:40 pm

The fact that the Arctic is warming faster than the rest of the planet is evidence that the warming is not caused by increased levels of CO2. Recent satellite images show CO2 does not vary by more than about 10 ppm from place to place at any given time. The greenhouse effect of CO2 is mostly masked by water vapor, and water vapor is scarce at the poles, so the theory predicts more warming at the poles. But UAH temp data show that most of the warming is at the North Pole, and virtually none of it at the South Pole. This weighs against CO2 being the primary cause of Arctic warming.
As for the apples to apples comment, it would be far more scientific to change the model output to fit the observed data set (average of ocean and land surface temps) rather than to use an untested model to infer that observations were flawed.

Bindidon
Reply to  Windchaser
July 25, 2016 1:38 am

AndyG55 on July 24, 2016 at 1:34 pm
How about slowly but surely ceasing to manipulate your readers, AndyG55?
You know perfectly well that the ENSO event El Niño is a part of the climate, and that you can’t extract it out of any temperature measurement. Why don’t you then extract La Niña or even volcanic eruptions as well?
Stop splitting UAH’s temperature records all the time to cherry-pick what you need, and you will then have to accept that UAH6.0beta5 TLT’s trend for the North Pole from Jan 1979 till June 2016 is 0.246 ± 0.023 °C per decade.
And the trend for this region measured by RSS3.3 TLT over the same period is 0.339 ± 0.030 °C per decade:
http://data.remss.com/msu/graphics/TLT/plots/RSS_TS_channel_TLT_Northern%20Polar_Land_And_Sea_v03_3.png
That’s the truth, AndyG55; you can’t change it.
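For what a figure like 0.246 ± 0.023 °C per decade involves computationally, here is a minimal sketch: an ordinary least-squares slope on a monthly series with its standard error. The method is my assumption, not Bindidon’s stated procedure, and published uncertainty ranges often also correct for autocorrelation, which this omits.

import numpy as np

def decadal_trend(monthly):
    """OLS trend of a monthly series in degC/decade with its 1-sigma SE."""
    y = np.asarray(monthly, dtype=float)
    t = np.arange(y.size) / 120.0                       # months -> decades
    A = np.column_stack([t, np.ones_like(t)])
    slope, intercept = np.linalg.lstsq(A, y, rcond=None)[0]
    resid = y - (slope * t + intercept)
    sigma2 = resid @ resid / (y.size - 2)               # residual variance
    se = np.sqrt(sigma2 / np.sum((t - t.mean()) ** 2))  # slope standard error
    return slope, se

# Synthetic check: a built-in 0.25 degC/decade trend plus noise is recovered.
rng = np.random.default_rng(0)
months = np.arange(450)
print(decadal_trend(0.25 * months / 120.0 + rng.normal(0.0, 0.15, months.size)))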

Pop Piasa
July 24, 2016 8:31 am

That’s progressive science. You use phrases like “naturally shows” as an authority implying that anyone who questions your claim would have to be a whack.

G. Karst
July 24, 2016 8:31 am

Isn’t this a clear admission that the models are clearly wrong and not fit for predictive purpose? Changing data to verify a model used to be regarded as academic fraud. Any hypothesis can be validated using such reasoning. Can everything conceived – be true? GK

Paul Aubrin
Reply to  G. Karst
July 24, 2016 12:19 pm

They prove that what they call “temperature anomalies” is not data. It is what they decide.

Walter Sobchak
July 24, 2016 8:32 am

IIRC, there was some tortured soul in the old, and unlamented, Soviet Union who said of his government that they know the future with perfect clarity, but they cannot predict the past.

Justthinkin
July 24, 2016 8:34 am

These “scientists” can’t be charged with anything? Howzabout outright fraud and corruption? I mean. This guy admits it right in the “paper”!! Has science really sunk so low? To steal a phrase from SDA, this isn’t your grandmother’s science.

Reply to  Justthinkin
July 24, 2016 10:35 am

RICO, anyone?

nigelf
Reply to  firetoice2014
July 24, 2016 2:22 pm

I’m going to be so very happy when Mr. T cuts all this funding and reads them the riot act.
This is all about to come to a crashing end and it couldn’t be soon enough.

Latitude
July 24, 2016 8:34 am

well….at least they admitted they don’t have enough data to even do this

thingodonta
July 24, 2016 8:36 am

“A data set with fewer Arctic temperature measurements naturally shows less warming…”.
Not necessarily at all, since most stations in the Arctic are located in more accessible areas where ice has partly melted and/or is in the process of melting, especially during summer; extrapolating these areas to the entire Arctic means one may introduce an ice-free/ice-melting warming bias over areas that are permanently covered by ice.
This means, in effect, that having a dataset with more Arctic temperature measurements would actually show less warming, not more, since areas permanently covered by ice would show less warming than those that are partly or seasonally ice free. Naturally, those areas which become more ice-free during summer warm faster under a regional warming trend.
This major assumption undermines the entire article. In fact it relates to a major reason that ice ages end so rapidly; the key is the amount of ice/snow that melts in summer. Extrapolating these areas to the whole Arctic is comparing apples to oranges, and therefore the article’s conclusions are invalid.

John Harmsworth
Reply to  thingodonta
July 24, 2016 9:37 am

If they had no measurements at all the Arctic would be on fire!

Reply to  thingodonta
July 24, 2016 11:51 am

“This means, in effect, that having a dataset with more Arctic temperature measurements would actually show less warming, not more”
And this is how it begins. First you accept the idiotic notion that we have no instrument readings for the Arctic, which isn’t true. Since 1979 we’ve had satellite records of the entire planet. The argument that there are insufficient instruments is completely false.
There are no reliable instrument records from anywhere that date back to before 1920.

Reply to  Bartleby
July 24, 2016 1:46 pm

Bartleby, that’s a very good point. We have nearly 40 years of the best possible observations. The satellite temperature records of the lower troposphere do show that the Arctic is warming much faster than the rest of the world. CO2 is well mixed in the troposphere. Why would CO2 cause the Arctic to warm dramatically but not the Antarctic to hardly warm at all? That I think is one of the best arguments against CO2 being the main driver of recent “global” warming. Global warming isn’t global at all. It’s mostly Norther Hemisphere and most of that is Arctic warming.

RoHa
Reply to  Bartleby
July 24, 2016 9:34 pm

Thomas, surely you mean
“Why would CO2 cause the Arctic to warm dramatically but the Antarctic to hardly warm at all?”
or
“Why would CO2 cause the Arctic to warm dramatically but the Antarctic to not warm at all?”
“Hardly” is restrictive, so “not hardly” acts as a double negative.

RoHa
Reply to  thingodonta
July 24, 2016 9:41 pm

“The Arctic is warming faster than the rest of Earth, but there are fewer historic temperature readings from there than from lower latitudes because it is so inaccessible.”
But if there are not many temperature readings, how can they be so sure that the Arctic is warming faster than the rest of Earth?
“a climate model that fully represents the Arctic.”
Oh. That’s how.

Tom Halla
July 24, 2016 8:37 am

A very good rationale for cooking temperature records/sarc

Pop Piasa
July 24, 2016 8:40 am

Why would folks have exaggerated the high temps in the past? Would they not also exaggerate how cold it was in the winters? Why is homogenization of the pre-satellite era needed when the recorded satellite data shows normal interglacial warming?

William R
July 24, 2016 8:42 am

If models show there should have been more warming than observations indicate, why then the only logical conclusion is that the observed values must be wrong! Flawless logic. NASA is officially dead.

SAMURAI
July 24, 2016 8:45 am

I’m relieved there are still satellite and radiosonde global temp datasets available that aren’t being corrupted by government rent seekers…
NASA scoundrels have to now corrupt the raw data even more to offset the coming La Niña cooling.
What the pro-CAGW scoundrels haven’t done is explain why satellite lower troposphere global temps show so much less warming than corrupted surface temp datasets, especially since lower troposphere temps should be increasing 20~40% faster than surface temps, because that’s where all the CO2 induced downwelling LWIR is supposed to be originating.
Out of desperation, I’m sure GISTEMP and HADCRUT4 datasets will soon adopt these new “corrections” just like they did with the “pause-busting” KARL2015 paper….
Unscrupulous scoundrels don’t get to keep adjusting empirical evidence to match their hypothetical model projections… That’s not science, that’s called fraud…

Bindidon
Reply to  SAMURAI
July 25, 2016 10:47 am

I’m not sure you know anything about radiosonde temperature records! Because if you did, you wouldn’t have written that.
Here is a plot of several temperature series and their respective trends, all within the satellite era (1979 – today):
http://fs5.directupload.net/images/160725/6obxu8d5.pdf
The PDF is arbitrarily scalable, so you can see even the smallest details.
UAH6.0beta5 TLT (blue) has a much lower trend than not only NASA GISS TEMP land+ocean (plum) but also RSS 4.0 TTT (green).
And UAH’s trend is also much lower than those of RATPAC B radiosonde measurements at pressure levels of 700 hPa (yellow) and 500 hPa (red), respectively.
Here are the trends in °C / decade, for 1979-2016:
– UAH TLT: 0.122
– RATPAC B 700 hPa: 0.166
– RATPAC B 500 hPa: 0.167
– GISS l+o: 0.171
– RSS TTT: 0.177
– IGRA 500 hPa: 0.613.
Let us exclude the IGRA radiosonde dataset: it’s raw data out of which a huge number of outliers must be excluded. That process is called “homogenization”, and it originates with a group of scientists around Prof. Leopold Haimberger (Vienna University, Austria). This group is a leader in such tasks (see the papers concerning RAOBCORE and RICH), well acknowledged by e.g. John Christy / UAH.
If, as you claim, radiosondes are trustworthy: how is it then possible that UAH’s trend is only about 70% of that of the two RATPACs, though UAH’s absolute temperatures (about 264 K) are a hint at an altitude of about 3.5 km and hence at a pressure level of about 650 hPa?
UAH’s trend would in fact be comparable rather to that of radiosondes measuring temperatures at a pressure level around 300 hPa, i.e. about 6 km higher than expected.
And if radiosondes are so trustworthy: how is it then possible that their trends coincide with those of both the crazy GISS and the “karlized” RSS4.0 TTT?
Any idea?
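Bindidon’s chain from 264 K to roughly 3.5 km to roughly 650 hPa is consistent with standard-atmosphere values (288 K and 1013 hPa at the surface, a 6.5 K/km lapse rate); treating those constants as the assumed basis, a quick check:

$$z \approx \frac{288 - 264}{6.5} \approx 3.7\ \mathrm{km}, \qquad p(3.5\ \mathrm{km}) \approx 1013\left(1 - \frac{6.5 \times 3.5}{288}\right)^{5.26} \approx 660\ \mathrm{hPa}.$$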

July 24, 2016 8:48 am

Oh what a tangled web they have woven.

July 24, 2016 8:49 am

Brilliant. Now if NASA can just build a computer model in which the space shuttles Columbia and Challenger don’t catastrophically fail through faulty design causing the deaths of fourteen brave astronauts, we can rewrite that history as well. Or do we perhaps need to help those poor folks at NASA find their dusty and long misplaced science rule book so they can have a quick peek at the first few pages.

Nylo
Reply to  andrewpattullo
July 24, 2016 11:09 am

No need, it has probably already been done. By NASA’s models, perhaps only 1 in several billion shuttles fails, which proves that it is the small amount of observations plus really bad luck that makes us think they are unsafe. If we kept flying them for millennia, we would see no further failures, as models cannot be wrong.
Yeah, I know it sounds unbelievable and absurd, but this is what they are telling us about the temperatures. We couldn’t find the really terrifying increase only because we didn’t look enough. So observations have to be adjusted up to reflect the fact that we are very bad at finding heat. Go figure.

July 24, 2016 8:50 am

“quirks”? I must have missed that term while I was in science classes.

RichDo
Reply to  Retired_Engineer_Jim
July 24, 2016 9:05 am

Perhaps it’s some kind of Pokémon.

ScienceABC123
July 24, 2016 8:50 am

Translation: “Our models don’t match the historical record. So we must adjust the historical record to save our models.”

Reply to  ScienceABC123
July 24, 2016 2:42 pm

No, despite what the headline says, there is no mention of adjusting data at all. In fact, they go the other way. They show that if you process model output temperatures in the same way that HadCRUT4 averages global measured temperature, you get a similar result.

Kurt
Reply to  Nick Stokes
July 24, 2016 5:29 pm

Since the majority of temperature data comes from land-based stations that haven’t been covered with ice during the instrumental period, I’d be interested in knowing how well the models match the observations if limited to just these areas. I noticed that all the “quirks” they mentioned related either to ocean measurements or to land with ice on it, but the conclusion glibly stated that the “global” trends matched when the model output was somehow forced to an apples-to-apples comparison. The suspicious part of me wonders whether this matching between the adjusted model data and the observations holds true for the regions of the globe where no “glitches” were present in the first place, or whether the results are due to the counterbalancing of mismatches in glitched areas vs. non-glitched areas, so that nothing really matches except the average.

Reply to  Nick Stokes
July 24, 2016 5:38 pm

Kurt,
The main discrepancy, which has been discussed for a while, is the use of SST in the indices vs air temp in the models. That affects about 2/3 of the area. It’s not easy to correct for, because the model has to be interpolated to infer temperatures at the right depth. But I expect that is what they have done.
The problem with the region-to-region comparison that you envisage is that those regions are where there are the fewest observations. HAD4 treats them by omission, and I expect that this study makes a similar omission from the GCM data. So it’s the wrong place to try to compare.

Kurt
Reply to  Nick Stokes
July 24, 2016 5:55 pm

But without that comparison, you’re left to wonder whether the match they found was just an accident. And if the conversion of air temperature to sea-surface temperature requires any kind of an inference, because the actual relationship is not known, then the fact the re-interpreted model output matches the observational temperatures is because it was mathematically forced to. It’s not real.

Reply to  Nick Stokes
July 24, 2016 6:29 pm

“requires any kind of an inference, because the actual relationship is not known”
Anything that you quote from a model is an inference. But the relationships are known, within the model, and all that is asked for here is that the model makes its best estimate of SST (at measurement depth) rather than the air temperature above. That is just a matter of apples vs apples. Getting the right model estimate (interpolate) in no way forces agreement with observation.

Kurt
Reply to  Nick Stokes
July 24, 2016 6:58 pm

“But the relationships are known, within the model, and all that is asked for here is that the model makes its best estimate of SST (at measurement depth) rather than the air temperature above.”
A computer can’t make any kind of “best estimate.” It can only implement instructions. When I referred to an unknown relationship I was referring to the quantified relationship, not just some kind of formula with a few parameters that need to be estimated to produce an actual result. This estimating is done by people, not the model, and as long as the people get to take a general relationship and play around with quantifying that relationship, they get to fit the output to match the data.

Reply to  Nick Stokes
July 24, 2016 8:43 pm

“A computer can’t make any kind of “best estimate.” “
I said the model makes the estimate. A GCM produces a gridded representation of a modelled continuous solution field. You can and should interpolate to estimate results at non-grid points. If you want to compare with GCM SST results, you should interpolate to give the best estimate at that location – not use a result at a different location.
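A minimal sketch of the kind of interpolation being described, assuming a regular 2-D grid and an interior target point (the real processing chain is more elaborate, and vertical interpolation to measurement depth works analogously):

import numpy as np

def bilinear(field, lats, lons, lat, lon):
    """Estimate a gridded field at an off-grid interior point.

    field -- 2-D array indexed (lat, lon) on regular ascending axes.
    """
    i = np.searchsorted(lats, lat) - 1    # grid cell bracketing the point
    j = np.searchsorted(lons, lon) - 1
    fy = (lat - lats[i]) / (lats[i + 1] - lats[i])   # fractional position
    fx = (lon - lons[j]) / (lons[j + 1] - lons[j])
    return ((1 - fy) * (1 - fx) * field[i, j]
            + (1 - fy) * fx * field[i, j + 1]
            + fy * (1 - fx) * field[i + 1, j]
            + fy * fx * field[i + 1, j + 1])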

Kurt
Reply to  Nick Stokes
July 25, 2016 12:25 am

I think we’re talking past each other here. I’m assuming that the observed historical sea surface temperatures were used to calibrate the models. That may or may not be correct, but given that the data is available I’d be surprised if the modelers didn’t do that. If the model is based on air temperature above the ocean, I’m therefore presuming that during this calibration step, the modelers presumed some relationship between the air temperature above the ocean and the water temperature at the surface, or maybe the temperature at some depth beneath it. The model is also going to have to be programmed to define a relationship between the surface temperature and temperature at depths beneath it, with a human-chosen curve for the fall-off – linear, logarithmic with a chosen weight, etc.
The same is true of the observed temperature data over land. For regions that have data, the model is going to use the historical data to calibrate the model. So when the models are limited to air temperatures for only those land areas that have historical temperature data, and are modified to compute sea surface temperature at measurement depths (which I’m also presuming were used to calibrate the historical air temperatures above the ocean), it shouldn’t come as a surprise that there is general agreement between the estimates of TCR. It’s also not that relevant to me, since the differences between the model and the observations are always going to be due, significantly, to the model’s assumptions about areas of the Earth for which no data is available.
A model is an abstraction – a mathematical construct. It can’t infer anything. All the inferences are pre-programmed (or built into it if you prefer).

Reply to  Nick Stokes
July 25, 2016 12:48 am

” I’m assuming that the observed historical sea surface temperatures were used to calibrate the models.”
People keep imagining that GCMs are some kind of curve fitting. They aren’t. They are solutions of the discretised Navier-Stokes equations for fluid flow, augmented with solutions for radiative and other heat transfer, water transport etc. They aren’t calibrated from observed temperatures. There is some degree of parameter testing, or even fitting, but that is for a few tens of parameters at most. The models have millions of variables.
“All the inferences are pre-programmed (or built into it if you prefer).”
Models produce a huge array of output numbers, for every timestep and many thousands of points in space, on a grid. From that, you have to make inferences. In particular, you have to infer statistics that you can compare with statistics from observation.

Kurt
Reply to  Nick Stokes
July 25, 2016 1:36 am

The article at the following link says that the models are in fact calibrated (or curve-fitted) to match observed temperatures:
https://judithcurry.com/2013/07/09/climate-model-tuning/
Though it’s true that they don’t tune it to fine details like ocean surface temperatures as I was assuming above, they do curve-fit the models to make sure that they match the observed global temperature average. In particular, the article says that no model that fails to show 20th-century warming will get published.

Reply to  Nick Stokes
July 25, 2016 2:02 am

It’s worth reading those extracts from Mauritsen more closely. E.g.:
“Yet, the span between the coldest and the warmest model is almost 3 K, distributed equally far above and below the best observational estimates, while the majority of models are cold-biased. “

“To parameterized processes that are non-linearly dependent on the absolute temperature it is a prerequisite that they be exposed to realistic temperatures for them to act as intended.”

“Formulating and prioritizing our goals is challenging. To us, a global mean temperature in close absolute agreement with observations is of highest priority because it sets the stage for temperature-dependent processes to act. For this, we target the 1850–1880 observed global mean temperature of about 13.7C.”
This is typical of what they tune for. As he says, the models track the main features of the temperature progression. But they have this disparity in absolute value, which messes up radiative transfer and other processes that depend on T^4 etc. So they tune for the global mean temperature. This doesn’t mean they try to improve coherence with the temperature graphs that you mention elsewhere – those in any case are anomaly averages, which have no place in a GCM. They tune to get close to a match over just one period. A single number for temperature.
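
For readers unfamiliar with what “tuning to a single number” looks like, here is a toy sketch. The zero-dimensional energy balance model and the albedo knob are assumptions for illustration only; real tuning adjusts cloud and convection parameters inside a full GCM.

```python
import numpy as np
from scipy.optimize import brentq

# Toy model: S(1 - albedo)/4 = eps * sigma * T^4, solved for equilibrium T.
# One free parameter (albedo) is tuned until the global mean hits the
# 13.7 C (286.85 K) target quoted from Mauritsen above.
S, sigma, eps = 1361.0, 5.67e-8, 0.61  # solar constant, SB constant, emissivity

def t_eq(albedo):
    return (S * (1.0 - albedo) / (4.0 * eps * sigma)) ** 0.25

target = 273.15 + 13.7
albedo = brentq(lambda a: t_eq(a) - target, 0.0, 0.9)  # root-find the knob
print(f"tuned albedo = {albedo:.3f}, T = {t_eq(albedo) - 273.15:.2f} C")
```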

Leonard Lane
Reply to  Nick Stokes
July 26, 2016 11:09 pm

Ah Ha! Another Ministry of Truth trooper joins the comments.

July 24, 2016 8:51 am

I’m not a scientist, just an obsolete, retired, old engineer, but it really irritates me to see the amount of ‘data torturing’ and plain making it up that goes on in the climate “science” establishment. I’ve worked with plenty of data in real life, and we never tried to get away with what’s acceptable in “government work” these days.
If I’d used these techniques in Professor A.D. Moore’s electrical design course back in the late ’50s, he’d probably have thrown me out on my ear.

stevekeohane
July 24, 2016 8:52 am

Still waiting for the change-in-human-height adjustment… That would be where, considering people were shorter, they previously over-read the temperature, so the past was really cooler. Today people are taller, therefore they under-read the temperature relative to the past, so today it is actually warmer than the recorded temperatures. Simple logic…

James Francisco
Reply to  stevekeohane
July 24, 2016 7:34 pm

Steve. I have wondered about that parallax error too. I am sure that the parallax error is nothing compared to the errors that were due to made up numbers by the folks who were supposed to go out in the rain, cold and heat to read those thermometers but didn’t bother. When I worked for the government it was called pencil whipping. Of course I never did it. 🙂

Tom Halla
Reply to  James Francisco
July 24, 2016 7:41 pm

Variations on that theme are “kitchen table research” and “drive-by estimating”.

Owen in GA
July 24, 2016 8:53 am

I believe the data protection act makes it a criminal offense for government employees to adjust that data. We need an attorney general who will start prosecuting under that act. We can start with the climate adjusters and move on to the EPA. They have all been guilty of changing the collected data to more closely match their video games and need to go to jail for it, particularly any that overwrite the original data in the process! If the original data is still available they have an out, but then their “products” need to be called something other than data, because none of it was ever observed.

Reply to  Owen in GA
July 24, 2016 10:38 am

Perhaps there was no INTENT to be deceptive. (The now famous Hillary defense.)

Olaf Koenders
Reply to  firetoice2014
July 24, 2016 11:55 am

Heh.. It all just happened by accident.. 😉

Not Oscar, just a grouch
Reply to  firetoice2014
July 24, 2016 10:11 pm

I had no INTENT to rob that bank! I just happened to do it, is all. So, I get to walk free, right?
If it’s good enough for Hillary, it should be good enough for everyone else. We can save billions on new prison construction.

Reply to  Not Oscar, just a grouch
July 25, 2016 4:44 am

However, the “Precautionary Principle”, of which GW alarmists are so fond, would suggest building more prisons. 😉

RAH
July 24, 2016 8:55 am

I guess the next step will be to tell us that we don’t need no stinking thermometers or other ways of measuring temperatures, because the models will tell us not only what the temp has been in the past but also what it will be. Perhaps they’ll outlaw thermometers like they did incandescent light bulbs and toilets that use over a certain amount of water to flush.

Greg Woods
Reply to  RAH
July 24, 2016 10:52 am

Data? We don’t need no stinkin’ data, we have models…

catweazle666
Reply to  Greg Woods
July 25, 2016 4:25 pm

“The data doesn’t matter. We’re not basing our recommendations on the data. We’re basing them on the climate models.”
~ Prof. Chris Folland ~ (Hadley Centre for Climate Prediction and Research)

Eugene WR Gallun
Reply to  RAH
July 24, 2016 12:47 pm

RAH —
Michael Mann has said that we need only stick our heads out the window to know that climate change is real and endangering mankind – or something to that effect. His words may have been slightly different from mine but the meaning I convey is certainly his.
Eugene WR Gallun

rocketride
Reply to  Eugene WR Gallun
July 25, 2016 6:14 am

Is ‘out the window’ a euphemism for ‘up our аѕѕ’?

AllyKat
Reply to  RAH
July 25, 2016 4:42 pm

Apparently, some of these fancy water-saving toilets are a little too fancy. In a government building that shall remain nameless, the toilets are equipped with special sensors that figure out how much water to use for a flush for every individual “use”. These are powered with electricity, and apparently have no manual flushing mechanism, or at least no external button anyone can find.
Thus, if there is power failure, the toilets cannot be flushed. Insert your own federal government joke here.

Reasonable Skeptic
July 24, 2016 8:57 am

Why don’t they just use the models to tell what the temperature is?

FJ Shepherd
Reply to  Reasonable Skeptic
July 24, 2016 9:16 am

In actual fact, they have been using computer models to determine missing data for surface temperatures for some time now.

Hivemind
Reply to  FJ Shepherd
July 25, 2016 1:56 am

By “determine” missing data, I assume you really mean “make up”?
I’ve seen references to this elsewhere, but the practice really worries me. If your algorithm needs a complete 100% data set and you don’t have it, you don’t create false data to fill in the gaps. You find a different algorithm that can cope with gaps.
Anything else, and all you’re doing is running statistics on made-up data.

bit chilly
Reply to  Reasonable Skeptic
July 24, 2016 12:10 pm

the met have been forecasting with models (badly) for some time now. the accuracy of the forecast has gone down the toilet in recent years as the models “improve”.

cedarhill
July 24, 2016 9:01 am

Haven’t there been sci-fi stories about folks worshiping computers? These folks are worshiping a computer program.

July 24, 2016 9:06 am

Australia’s coastal desalination plants were built due to modelling which indicated permanent rain deficit. Nobody bothered about the records which showed Sydney’s driest year being 1888, the continent’s driest year likely being 1902 (but there’s talk that it was 1838 when even the ‘bidgee dried up) and the driest decade being the 1930s.
These unused desals cost Australian ratepayers up to 2 million per day. The Melbourne one suffered serious construction delays due to…
Can you guess?
It’s true that Eastern Australia has endured half a century of rainfall deficit. The catch is that it was from circa 1895 to circa 1947. This was achieved without any retro-fitting by non-Kardashian models.

SMC
July 24, 2016 9:08 am

So, even though my local temperature forecast is calling for cooler temperatures later this week, it is actually going to be hotter? I guess it’s time to move to the arctic so I can experience nice weather all year around and be one of the last breeding pairs of humans. 🙂

E.Martin
July 24, 2016 9:10 am

Orwell nailed it: “Those who control the present control the past — those who the past control the future.”

Alan Kendall
Reply to  E.Martin
July 24, 2016 10:17 am

You have committed a thoughtcrime (and missed a control).

FJ Shepherd
July 24, 2016 9:21 am

So in summary, computer models will be used to determine what past temperatures should have been, so that the computer models projecting the future will be more in sync with the past records. Well, since 1984 has arrived 32 years too late, I think NASA is finally on to something for once. It sure beats sending a team to Mars – way too much effort there; let’s use the computer to change past temperatures instead – great!

Gary Hladik
Reply to  FJ Shepherd
July 24, 2016 11:17 am

“It sure beats sending a team to Mars…”
Actually, sending humans to Mars is very easy. All you do is MODEL a manned expedition and — surprise! — it turns out the Martians destroyed their own planet via global warming and the last breeding pair died at the Martian south pole! 🙂

Winston Mitchell
July 24, 2016 9:23 am

Much of our history has been modified or is currently being modified. Why omit temperature from this leftist exercise? Even my fifty-year-old unabridged Random House dictionary has become a quaint curiosity.

John Peter
July 24, 2016 9:26 am

Now that Senator Cruz is “relieved” of any duty to support the Republican presidential candidate, perhaps he could spend his energy looking into this remarkable piece of “scientific” work. Completion and publication date 19 January 2017 latest.

TA
July 24, 2016 9:30 am

Love that cartoon: Mann is pulling down on one end of the chart and pushing the other end higher. Love it!

Bruce Cobb
July 24, 2016 9:38 am

“The fault, dear Brutus, lies not in our models, but in our data”.

Logos_wrench
July 24, 2016 9:43 am

So there’s a new variable called “The Quirk”. Fascinating. Lol.

July 24, 2016 9:44 am

Does this mean that the reduced warming rate of the 21st century has now been predicted accurately?

Bernie
July 24, 2016 9:55 am

It would save a lot of time and money if they would just calculate the historical global temperature for as far back as needed. I’ll join the consensus that proclaims the calculated temperatures to be accurate. Then I’ll bet that in ten more years, the GCM will still be out of whack.

Laws of Nature
July 24, 2016 9:58 am

There is a very good answer/analysis to this paper by N. Lewis published on J Curry’s blog:
https://judithcurry.com/2016/07/12/are-energy-budget-climate-sensitivity-values-biased-low/
(also reblogged at S. McIntyre’s climate audit: https://climateaudit.org/2016/07/12/are-energy-budget-tcr-estimates-biased-low-as-richardson-et-al-2016-claim/ )
Also, some of the comments there are very well worth taking the time to read,
LoN

TA
July 24, 2016 9:58 am

“A new NASA-led study finds that almost one-fifth of the global warming that has occurred in the past 150 years has been missed by historical records due to quirks in how global temperatures were recorded.”
Historical records? I wonder which bastardized dataset of historical surface temperatures they used for this study?
How do you do an accurate study when you start out with bastardized surface temperature data?
The results of the study agree with previous bastardized surface temperature data. What does that tell you?

tomwys1
July 24, 2016 10:03 am

The HadCRUT3 to 4 transition (here: http://www.colderside.com/Colderside/HadCRUT4.html) actually brought about 400 more mini-Urban Heat Islands into the fold, and they hindcast them as well, thus mathematically “translating” the entire data set upward.
It still wasn’t enough, so now they are adjusting “real” readings to the overheated models.
I guess Anthony could call this post “Sunday Humor” but I’m not laughing!

commieBob
July 24, 2016 10:07 am

A data set with fewer Arctic temperature measurements naturally shows less warming than a climate model that fully represents the Arctic.

The pre-satellite era temperature of the Arctic is poorly known. There is historical evidence that, at various times, it was as warm as it is today. There are reports of ice extent that mirror modern data. Here’s an example from the 1930s.
I’m guessing that the adjusted temperatures won’t agree with the, admittedly sparse, historical record.
Another thing … what about the Antarctic temperatures? Shouldn’t they be fully represented too? They don’t seem to change much. They might go a long way to offsetting the more dramatic Arctic temperature swings.

July 24, 2016 10:08 am

Bernie Madoff went to jail for adjusting return rates on stocks…

TA
Reply to  Reality check
July 24, 2016 10:13 am

Yeah, and Bernie just swindled his victims out of a few billion dollars, whereas this CAGW hoax is swindling the U.S. taxpayers and taxpayers around the world out of TRILLIONS of dollars.

Alan Robertson
Reply to  TA
July 24, 2016 11:13 am

Gee whiz. Obvious malfeasance within a government agency. Somebody should turn this over to the Justice Dept. (Just not this Justice Dept.)

skorrent1
July 24, 2016 10:09 am

If historical data mixed water and air temps, which “also skews the historical record toward the cool side”, don’t they need to “adjust” the historical temperature record up to make it more accurate? Wouldn’t this reduce the amount of warming from then to now?
And to think, JPL made it all the way to Mars. Must be a few good scientists there, anyway.

higley7
July 24, 2016 10:16 am

Climate Computer Models are NOT science. They only do what they are programmed to do. They are computer programmers’ wet dreams, designed to show what their bosses demand. NOT SCIENCE, NOT REAL, NOT USEFUL, and WORTHLESS, particularly when used as a proxy for the real world. Where is that copy of the RICO rules?

July 24, 2016 10:16 am

So from 1939 till 1950, were these “Global Cooling Observations”?
Or have the “Tamperature” (thanks M’lord) adjustments thus far already eliminated that inconvenient truth?

David S
July 24, 2016 10:19 am

If the models don’t agree with reality then reality must be wrong. So the solution is to ignore this reality monkey business.
(I think the absurdity of this comment is sufficiently apparent that I can skip the /sarc tag.)

July 24, 2016 10:26 am

So it is an “apples to apples” comparison because it is comparing computer program output to computer program output.

Hans
Reply to  nhill
July 24, 2016 1:05 pm

Bingo
Eliminate all data entirely
Stroke of genius
Why didn’t anyone think of this before?

DDP
July 24, 2016 10:50 am

“The final issue is that there was considerably more Arctic sea ice when temperature records began in the 1860s”
Considerably more? Are they now re-writing history to include aerial observation from aircraft and satellites in the 19th century? Because there is absolutely no way of knowing there was ‘considerably more’ ice in a region that annually loses most of its mass every summer.
“The Arctic is warming faster than the rest of Earth, but there are fewer historic temperature readings from there than from lower latitudes because it is so inaccessible.”
So if it is so inaccessible, how do you know there was ‘considerably more’ ice coverage? Go home, you’re drunk. Again.

Sleepalot
Reply to  DDP
July 25, 2016 3:44 am

There were sailing ships roaming the NW Passage in the 1850’s.

July 24, 2016 10:54 am

It’s a matter of the laws of physics.
Whenever the earth’s temperature rises, the data expands, thus warming appears to be artificially enhanced, which of course it isn’t.
Whenever the earth’s temperature falls, the temperature data, like everything else, shrinks (except the Arctic ice, of course), hence cooling has to be reduced appropriately.
I don’t understand what the fuss is all about.

Steve Fraser
Reply to  vukcevic
July 24, 2016 1:35 pm

I love that!

Greg Woods
July 24, 2016 10:57 am

This is just another step towards legislating science, much like common law was changed into statutory law….

BLAMMO
July 24, 2016 10:57 am

The obvious problem with the scientific method is empirical observation. If we can only compensate for that, we’ll finally get it right.

Clyde Spencer
July 24, 2016 10:57 am

Another way to approach this problem is to acknowledge that there isn’t enough high-quality historical data for the Arctic to say anything reliable about the long-term trend of Arctic temperatures. Thus, the model outputs should be reported for only the mid-latitude NH, and the model history and predictions compared with the high-quality temperature records for the same area. If the models don’t agree with the high-quality data, then the models need to be adjusted until they do agree.
Another thing that should be examined is why there is such a difference between the outputs of the various models. Whichever model(s) appear to be providing the best agreement with the temperature records should be used as an example of how to correct the other models.
It seems to me that the optimal situation would be one where a single run of a single model would provide us with a trustworthy prediction of the future. Ensembles of multiple runs suggest to me that the standard error varies directly with the number of runs. That is, the more runs necessary to make a prediction, the less reliable that prediction is! Logically, there can only be one best prediction from among a host of predictions. Averaging all runs together simply reduces the accuracy of the whole prediction and reduces the reliability of the prediction for any particular point in time.
I agree with the comments that many of the current practicing climate scientists don’t appear to understand the Scientific Method.

Steve Fraser
July 24, 2016 10:58 am

Hmmm. Prior temp records were too low? Does that mean NOAA, NASA And HadCRU have been incompetent in their work? I am shocked!

John Coleman
July 24, 2016 11:07 am

Even scientists who work with no bias or agenda have little chance of producing accurate global temperature data for any year, or even decade, prior to 1950. Remember, our only thermometers pre-1950 were tubes of mercury. Remember that two-thirds of the Earth is ocean and we had almost no air temperatures over the water. Remember Arctic regions were essentially unmeasured. Remember much of the land mass of Earth was primitively populated and thermometers were few and far between. Remember that even many National Weather Bureau readings in the United States, and equivalent readings from other civilized nations, were sometimes made on the top of multi-story buildings in the downtown districts of cities. Remember there were few weather reporting stations in rural areas. Remember there were no or few readings from high mountain elevations.
It is silly to assume that a scientist today can construct a way to accurately determine temperatures in that scientifically primitive world. Tree rings, ice cores, carbon dating, adjustments and assumptions can, at best, be accurate within a few degrees. Comparing such data with today’s data and then reporting on differences of a couple of degrees or even tenths of a degree is sheer folly, in my opinion.
And those of us who have looked at the current data sets in detail know they have been structured by scientists with bias and agendas. Today the data sets continue to lack fair and balanced data from city and rural areas, from coastal and inland, from mid latitudes and high latitudes, from low elevations and high elevations. Even the satellite data is challenged by refinements in satellites and the combining of old and new satellite data. But, as of now, the satellite data is far and away the best we have.
Scientists who tell us they have worked through all of these problems to come up with valid data that extends back more than sixty years leave me very skeptical. I would be skeptical even if they were proving that no AGW was occurring. lol

Carla
Reply to  John Coleman
July 24, 2016 12:08 pm

John Coleman July 24, 2016 at 11:07 am
——————————————————-
ditto very well said

gnomish
Reply to  John Coleman
July 24, 2016 7:11 pm

and remember that a global temperature average is as meaningful as a global average telephone number.
nature.com has been offline since this article was posted, btw…

Clyde Spencer
Reply to  gnomish
July 24, 2016 8:02 pm

gnomish,
Your claim makes for an interesting sound bite, but telephone numbers are not a measurement of a physical property. Your statement is really a non sequitur.

gnomish
Reply to  gnomish
July 25, 2016 11:39 am

it’s an interesting sound bite for a reason, yo
taking the temperature in death valley, usa and adding it to the temperature in vostok, antarctica and then dividing by 2 is not a real property of anything.
see how that works? you can make meaningless pie with arithmetic and dazzle your numerologist friends.
the average human being has one ovary and one testicle. bite that soundly.

Richard M
July 24, 2016 11:18 am

This could save a lot of money. No need for any more surface stations or satellites. Just run the models and they will provide the data. I wonder what the people who collect that data would say?

Robert Westfall
July 24, 2016 11:18 am

This is not the first time that NASA has submitted to political pressure. Both the Apollo fire and the Challenger disaster were the result of political expedience. In both cases the engineers’ objections were overruled and lives were lost.

Gary Hladik
July 24, 2016 11:25 am

Don’t the IPCC’s climate models fail at a regional level at least as badly as they do globally? If they don’t work when Arctic data are not an issue, then how can Richardson have any confidence they work when the Arctic is included?

Reply to  Gary Hladik
July 24, 2016 12:58 pm

Because funding.

Steven Hales
July 24, 2016 11:28 am

The errors of observation were clustered around time-of-observation and known instrument errors. If they are mainly random, then their effects would be randomly distributed and would be largely eliminated by calculating anomalies. If the assumption was that they were non-random, then the search for a trend in errors could be spurious, and any correction would inject a false trend.
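
Hales’s distinction is easy to demonstrate with synthetic numbers (all values below are made up for illustration): random read errors barely move a fitted trend, while a bias that drifts in time injects a spurious one.

```python
import numpy as np

rng = np.random.default_rng(1)
months = np.arange(1200)                              # 100 years, monthly
true = 0.0005 * months                                # true trend: 0.6 C/century

noisy = true + rng.normal(0.0, 0.3, months.size)      # random read errors
drifting = true + np.linspace(0.0, 0.3, months.size)  # slowly growing bias

for name, series in [("true", true), ("random errors", noisy),
                     ("drifting bias", drifting)]:
    slope = np.polyfit(months, series, 1)[0] * 1200   # C per century
    print(f"{name:14s} trend = {slope:.2f} C/century")
```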

RBom
July 24, 2016 11:28 am

Dogmatocene.
The era of Government mandated (funded) Religious Beliefs.
No ha

Bruce Cobb
July 24, 2016 11:31 am

Warmist law: “Things are always worse than we thought, and we are always surprised that they are.”

Eugene WR Gallun
Reply to  Bruce Cobb
July 24, 2016 12:54 pm

Bruce Cobb — and we are always PLEASED that they are — Eugene WR Gallun

JohnKnight
Reply to  Eugene WR Gallun
July 25, 2016 8:04 pm

Consensus = pleasantly surprised . . and Siants slithers on . .

Tom Anderson
July 24, 2016 11:50 am

The Democrats will pound global warming from now to the election. There could easily be a disaster a day till November.
This is only the beginning.

Pat Kelly
July 24, 2016 11:55 am

I can’t think of a better way to verify your modeling than to use it to demonstrate how past direct observations are flawed – GENIUS!

Svend Ferdinandsen
July 24, 2016 11:59 am

How could low coverage give less warming? They have in modern times reduced the number of thermometers, so would that also have an influence?
“it is so inaccessible. A data set with fewer Arctic temperature measurements naturally shows less warming than a climate model that fully represents the Arctic.”
By using anomalies they can extend a single thermometer out to 1000 km, and, by the way, the average takes care of the density of measurements.

TonyL
July 24, 2016 12:12 pm

The story of arctic exploration is a fascinating one. Many of the scientific expeditions were well equipped to make accurate observations. It would be foolish to discount those observations because “models”.
One topic which recurred a couple of times is this: Whaling ships would report vast reductions in sea ice, with shrinking glaciers, and huge tracts of open ocean which previously had been impassable sea ice. These reports would reignite speculation that the Arctic would soon become more habitable. The reports would also cause another round of expeditions to find the fabled “Northwest Passage”. (Remember, the Panama Canal did not exist. A shortcut between the Atlantic and Pacific would have been hugely important.)
The stories of the Franklin expedition and the Resolute expedition sent to find them give great insight into the tremendous changes the arctic can undergo. Today, we have the story of HMS Resolute and the Resolute desks.
Care to guess who sits at a Resolute desk today? (Google is your friend, sort of)
But we can not consider any of this because “models”.

July 24, 2016 12:22 pm

Data being adjusted to match models.
This is the death of climate science.
Like a cancer this death will spread to other scientific fields.
Unless it gets cut out.

Reply to  ptolemy2
July 25, 2016 5:34 pm

Certainly needs to be cut out – that’s why I call CAGWarmists GangGreen.

ShrNfr
July 24, 2016 12:35 pm

“I don’t know what you mean by ‘global warming,’ ” Alice said.
Humpty Dumpty smiled contemptuously. “Of course you don’t—till I tell you. I meant ‘there’s a nice model for you!’ ”
“But ‘global warming’ doesn’t mean ‘a model’,” Alice objected.
“When I use science,” Humpty Dumpty said, in rather a scornful tone, “it means just what I choose it to mean—neither more nor less.”
“The question is,” said Alice, “whether you can make data mean so many different things.”
“The question is,” said Humpty Dumpty, “which is to be master—that’s all.”
Alice was too much puzzled to say anything, so after a minute Humpty Dumpty began again. “They’ve a temper, some of them—particularly ocean measurements, they’re the proudest—stevenson screens you can do anything with, but not ocean temperatures—however, I can manage the whole lot! Impenetrability! That’s what I say!”

svbeachhouse
July 24, 2016 12:42 pm

This is Obama’s “Ministry of Truth” in full high gear!
Remember what Orwell said, “Those who control the present, control the past. Those who control the past….control the future.”
It’s in our face, but we seemingly can do nothing about it but keep pointing it out.
“Newspeak” is here.

Tom in Florida
Reply to  svbeachhouse
July 24, 2016 5:31 pm

Perhaps it should be changed to “Those who control the models, control the past.”

July 24, 2016 12:42 pm

Fundamental misunderstanding of the paper.
It’s not about adjusting the data.
It’s about comparing APPLES and APPLES.
Issue number 1.
Suppose Bob Tisdale wants to compare the DATA about SST with the model projections?
Does he
A) compare the data for SST with the model outputs for the temperature of AIR over the ocean?
B) Compare the data for SST with the model outputs for SST?
Answer?
B.
Suppose Christy wants to compare his data about temperatures at TLT with models.
Does he
A) compare measured TLT with modelled TLT?
B) compare measured TLT with modeled skin temperature?
Answer
A
Suppose you want to compare Global temperature INDEXES with modelled output.
In the past everyone took a short cut. Even the IPCC.
Global temperature indexes are a combination of SST and SAT.
In the past everyone just went to the CMIP data vault or KNMI and pulled down modelled SAT (tas in the GCM world).
So in the past they compared
SST+SAT versus SAT.
Well, that’s wrong. It’s always been wrong. You need apples and apples.
The next issue is masking
Do you
A) Compare the model results AT THE LOCATIONS where you have data?
B) compare the averages of the data with the average of the model?
Answer A
The actual problem is not the data. The problem is the models overestimate the warming in the Arctic while they simultaneously underestimate the ice loss.
So, they are not talking about adjustments to data. It’s about comparing apples with apples.
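
For what it’s worth, the masking step Mosher describes in (A) is a few lines of code. A sketch with synthetic stand-ins for both the model field and the observational coverage (this is not HadCRUT4 or CMIP data):

```python
import numpy as np

# Average a model trend field twice: once over the full globe, once only
# over cells where observations exist. The made-up polar amplification sits
# exactly where the made-up coverage is absent, so the masked mean is lower.
rng = np.random.default_rng(2)
nlat, nlon = 36, 72                                   # a 5-degree grid
lat = np.linspace(-87.5, 87.5, nlat)
w = np.cos(np.deg2rad(lat))[:, None] * np.ones((nlat, nlon))  # area weights

trend = rng.normal(0.8, 0.2, size=(nlat, nlon))       # per-cell trends
trend[np.abs(lat) > 65, :] += 1.5                     # fake polar amplification

covered = rng.random((nlat, nlon)) < 0.8              # fake obs coverage
covered[np.abs(lat) > 65, :] = False                  # none at high latitudes

print("full coverage:", np.average(trend, weights=w))
print("obs coverage: ", np.average(trend[covered], weights=w[covered]))
```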

Eugene WR Gallun
Reply to  Steven Mosher
July 24, 2016 1:10 pm

Steven Mosher —
Waving your magic hands over an orange and claiming to have turned it into an apple and then comparing it to a real apple is not how science is done.
Eugene WR Gallun

Marcus
Reply to  Steven Mosher
July 24, 2016 1:10 pm

“A new NASA-led study finds that almost one-fifth of the global warming that has occurred in the past 150 years has been missed by historical records due to quirks in how global temperatures were recorded.”…
Nothing about “comparisons” … He clearly states the problem is “quirks in how global temperatures were recorded.”

Reply to  Steven Mosher
July 24, 2016 2:38 pm

“Fundamental misunderstanding of the paper.
It’s not about adjusting the data.
It’s about comparing APPLES and APPLES”

Mosh is right, and the headline is wrong. There is no mention of adjusting data in either press release or abstract. In fact what the study does is to process the model output in the same way as HADCRUT4 is compiled, rather than vice versa. Then they find that the models agree with HAD4.
It’s not all that new. Cowtan, Hausfather et al (2015) made the comparison of indices with blended land/sea model output. And Cowtan and Way showed the effect of properly weighting polar region results in HADCRUT on the global index. Finally, the problem of the transition of over-ice temperature to over-water is familiar to anyone who has tried to compile a global average. It’s real.

Tom in Florida
Reply to  Nick Stokes
July 24, 2016 5:33 pm

It’s only real if “global average” had any real meaning.

Kurt
Reply to  Nick Stokes
July 24, 2016 5:47 pm

Mosh is wrong and the headline is right.
The relevant quote from the press release accompanying the paper is that “one-fifth of the global warming that has occurred in the past 150 years has been missed by historical records.” Clearly, the conclusion the paper is trying to draw is that the data is wrong and the models are right, hence the trend in the data needs to be adjusted upwards by at least 19%. You’re trying to have your cake and eat it too. You first focus on the methodology of the paper, which, as you say, massaged the model output to match the observed warming trend that is much lower than the models. But by irrationally assuming that this somehow validates the trends shown by the non-adjusted output of models, the paper advocates for an adjustment of the observational data to match the models. You then say, “No, no . . . the paper isn’t adjusting the data at all, it’s adjusting the model output.”

Reply to  Nick Stokes
July 24, 2016 6:23 pm

“hence the trend in the data needs to be adjusted upwards by at least 19%”
The headline says the observations need to be adjusted by 19% (of what?). That is in no way supported by press release or abstract. You are paraphrasing as “trend in the data” needs to be adjusted. You actually mean “trend deduced from the data”. And that is quite different from adjusting the data. What they point out is that if you use the same deduction process for the models as used for HAD4, you get a 19% lower trend. So yes, it’s reasonable to say that means that if the data was analysed in the same way as the models, the trend would be 19% higher. They are suggesting a better way of deducing the global trend from data, not adjusting the data itself.

gnomish
Reply to  Nick Stokes
July 24, 2016 7:19 pm

has anybody read this paper?
because
Nature.com is DOWN for everyone.
It is not just you. The server is not responding…
http://www.isitdownrightnow.com/nature.com.html
and so it has been since this article was posted.

Gerald Machnee
Reply to  Nick Stokes
July 24, 2016 7:29 pm

Nick Stokes:
**They are suggesting a better way of deducing the global trend from data, not adjusting the data itself.**
As usual, Nick is going around in circles. They are already adjusting the data. What they are implying here is that they are not adjusting it enough. Yes, they said they are 19 percent off. They can now justify adjusting some more. See my solution down a bit lower. We can shut down all stations for 100% accuracy.

Reply to  Nick Stokes
July 24, 2016 8:38 pm

“They are already adjusting the data. What they are implying here is that they are not adjusting it enough.”
Actually, none of the authors is responsible for handling (or adjusting) a temperature database. And no observed temperatures were modified in the study. One author, Kevin Cowtan, has an account here. They simply took GCM output, and summed it as if it were only known at the locations where HADCRUT had measurements. Then they compared with published HADCRUT. They observed the diminished warming.
They make no suggestion that observations should be modified, and indeed that would not be a useful remedy for this particular issue.

Kurt
Reply to  Nick Stokes
July 24, 2016 8:39 pm

“What they point out is that if you use the same deduction process for the models as used for HAD4, you get a 19% lower trend. So yes, it’s reasonable to say that means that if the data was analysed in the same way as the models, the trend would be 19% higher. They are suggesting a better way of deducing the global trend from data, not adjusting the data itself.”
You’re engaging in sophistry to avoid the whole point of the press release, and presumably the study if accurately represented by the press release (and I’ve seen some press releases that don’t). Specifically, you’re confusing the procedure they used to validate the models (changing the model output) with the proposed solution (change the observational data to match the models).
First, there’s no meaningful distinction between “adjusting the data” and “a better way of deducing the global trend from the data.” The data is what it is, and the trend in the data is solely defined by the numbers comprising the data. You can’t separate the two and pretend that you can alter the trend of the data without altering the data itself. Whether you go through and adjust each entry in the data, every third entry in the data, or ignore the details altogether and cut to the chase by doing some hand-waving “add 19%” to the trend line of the data to match what you think the trend line should have been, you’re still substantively changing the data. You’re just quibbling that the procedure doesn’t bother to go through and adjust all of the numbers so that the real mathematical trend of the data is the new and improved trend.
Second, how precisely would you analyze the data in the same way as the models? The data are merely temperature readings from fixed locations at certain times. The models attempt to simulate how the climate works and produce theoretical temperatures that don’t match the data. The assumption of the study is that the models are right and the data is wrong, on the argument that you can get the modeled temperatures to produce the lower trend of the data if you change the modeled temperatures using some conversion loosely based on some “quirks” in the historical data. Note that the argument of the paper, in true Orwellian fashion, is that the observational data somehow validates a modeled trend 19% higher than its actual trend. (I’m not arguing that the model trend is necessarily wrong – I’m just observing the silliness of arguing that observational data can validate something so far off from what it objectively shows).
Just because the paper starts from the modeled temperatures and gets to the lower trend of the observational data does not indicate any technique that would start from the observational data and manipulate it so that its trend is equal to the average modeled trend. And if you were able to do this (and maybe the actual paper behind the paywall does suggest such a technique), certainly then you would have to concede that such a procedure would change the observational data. It would have to.
But because the press release provides no such details and just offers a shortcut of adding 19% to the observational trend, you’re using the fact that they don’t expressly spell out a technique that moves from the raw observational data to adjusted data with a trend as high as the models suggest, so as to simultaneously argue that the paper doesn’t change the observational data BUT that the observational data nonetheless verifies a modeled trend 19% higher than the actual trend of the actual observational data.
It doesn’t matter whether the study’s methodology, in its validation step, changes the model output to match the observational data, or changes the observational data to match the modeled output. Since the paper clearly comes down on the side of adjusting the meaning of the observational data to quantitatively match the theoretical results of the mathematical model, and does so on the false implication that the model’s higher trend has been validated by the observations, the conclusion of the paper is that the observational data should be changed. That they haven’t spelled out the procedure to do so, yet, doesn’t change this basic truth.

Reply to  Nick Stokes
July 24, 2016 8:56 pm

“Specifically, you’re confusing the procedure they used to validate the models (changing the model output) with the proposed solution (change the observational data to match the models).”
It isn’t the proposed solution. Where do you see it proposed?
“The data is what it is, and the trend in the data is solely defined by the numbers comprising the data. You can’t separate the two and pretend that you can alter the trend of the data without altering the data itself.”
You seem to have no idea of how a global trend is calculated. You have a whole lot of sampled temperatures over space (Earth surface) at a whole lot of times – about 2000 months since 1850. You have to first compute an average over space from the samples at each point in time. There is no one method for doing that. HADCRUT makes a grid and forms cell averages where there are points with data, and averages those omitting cells without data. Added to this is the business of calculating anomalies. There is no “solely defined” number; there are different ways of doing it and I think few would claim that the HADCRUT method is obviously the best. When you have averaged in space you have to do a trend estimate in time. That is more standard, though even then you might argue for weighting with respect to error estimate etc.
All of this calculation happens after you decide on what the temperatures actually are, and involves no adjustment of those temperatures.
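
Stokes’s description of the forward calculation can be compressed into a sketch. Everything below (station count, grid size, the synthetic anomalies) is a placeholder; the point is the order of operations: bin stations into cells, average the cells that have data while omitting empty ones, area-weight, and only then fit a trend in time.

```python
import numpy as np

rng = np.random.default_rng(3)
nlat, nlon, nmonths, nstations = 36, 72, 240, 500      # 5-deg grid, 20 years

st_lat = rng.uniform(-90, 90, nstations)
st_lon = rng.uniform(0, 360, nstations)
anoms = 0.001 * np.arange(nmonths) + rng.normal(0, 1, (nstations, nmonths))

ilat = np.clip(((st_lat + 90) / 5).astype(int), 0, nlat - 1)
ilon = np.clip((st_lon / 5).astype(int), 0, nlon - 1)
cell = ilat * nlon + ilon                              # flat cell index
band_w = np.repeat(np.cos(np.deg2rad(np.arange(-87.5, 90, 5))), nlon)

global_mean = np.empty(nmonths)
for t in range(nmonths):
    sums = np.bincount(cell, weights=anoms[:, t], minlength=nlat * nlon)
    counts = np.bincount(cell, minlength=nlat * nlon)
    filled = counts > 0                                # empty cells omitted
    global_mean[t] = np.average(sums[filled] / counts[filled],
                                weights=band_w[filled])

trend = np.polyfit(np.arange(nmonths), global_mean, 1)[0] * 120
print(f"fitted trend: {trend:.3f} per decade")         # ~0.12, by construction
```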

Reply to  Nick Stokes
July 24, 2016 9:09 pm

That’s all fine and I concede the points made by Steven Mosher and Nick Stokes.
However, what I cannot accept is that climatologists hadn’t thought of this until now!
It is imbecilic to even entertain the notion promoted by this paper, that the erroneous comparisons made to date, were all just an oversight.
The graphic use of overcooked model output served the purpose of frightening the world. But the growing discrepancy between the models and data products such as CRUD4 was becoming a political liability. Hence the dissembling climbdown and assimilating clawback. The whole thing is a disgustingly shameless simulation of the real. The paper is nothing more than a perverted simulacrum of science. It is propaganda of the purest kind.

The simulacrum is never that which conceals the truth – it is the truth which conceals that there is none.

Reply to  Nick Stokes
July 24, 2016 9:42 pm

“However, what I can not accept, is that climatologists hadn’t thought of this until now!”
They have. From the press release:
“Scientists have known about these quirks for some time, but this is the first study to calculate their impact.”
People have looked at the forward direction – how to best calculate an average from the observed data. And that is the ultimate problem. This paper looks at the reverse problem, which quantifies the bias, so it is useful. But it doesn’t solve the problem of how to average the observed data while avoiding the bias.

Dr. S. Jeevananda Reddy
Reply to  Nick Stokes
July 24, 2016 10:38 pm

Nick Stokes – “HADCRUT makes a grid and forms cell averages where there are points with data, and averages those omitting cells without data.” — If the grid or grids represent a particular Climate System and/or General Circulation pattern, the whole process of averaging is an erroneous one. Based on the CS and GCP, the empty grids must be filled through extrapolation and interpolation to present better-quality data. Here the major factor in the land data is urban versus rural conditions. This must be covered.
Dr. S. Jeevananda Reddy

Reply to  Nick Stokes
July 24, 2016 10:53 pm

“through extrapolation and interpolation the empty grid must be filled to present better quality data”
Yes, I think they should. Discussed here.

Kurt
Reply to  Nick Stokes
July 24, 2016 11:57 pm

Nick – you seem to have proven the whole point of my post. I’m fully aware of how temperature data is spatially gridded before it’s temporally plotted. But we are talking about the temporal trend, here. So all those excruciating details of the procedures used to derive the spatially-averaged gridded “data” simply demonstrates all the possible ways that climate scientists can, in the future, manipulate the temporally plotted “data” to change the trend to get it 19% higher – as if there’s a meaningful difference between writing new average numbers comprising your gridded “data” set and deciding new ways of spatially blending raw temperature readings to get to a desired gridded and averaged temporal “data” set that produces a least square error trend line that matches the models.
As implied by the quotations used around the word “data” in the preceding paragraph, you’re also selectively adopting an all-too conveniently stunted definition of the word “data.” Gridded and averaged observed temperature “data” is ubiquitously plotted in graphs from say 1880 to the present or some smaller interval, with a trend line through it. You’d have us believe that the points in the graph, and the 5-year running means, and the trend lines, don’t qualify as “data” because they represent some blended average of raw temperature sensor numbers, and only the initial step of changing these raw numbers via infilling or bias adjustments qualifies as “adjusting data.” That’s silly. If the plotted, annual or monthly points in these graphs qualify as “data” – and they do – then what I said in my earlier post was absolutely correct – “the trend in the data is solely defined by the numbers comprising the data. You can’t separate the two and pretend that you can alter the trend of the data without altering the data itself.” Whatever procedure is used to massage the data prior to plotting it on the graph doesn’t strip it of its characteristic as observational data.
You adopt this crabbed interpretation of “changing data” as a semantic deception to avoid the plain implication of the press release for the paper – that the temporal trends shown by the gridded and averaged observational data are wrong and should be adjusted upwards to match those of the models, which are supposedly correct. Yes, data is changed when you arbitrarily adjust raw temperature readings by infilling, by correcting for TOB, etc., but that doesn’t mean that data is NOT changed when you subjectively select a technique of spatially averaging those temperature readings to plot the data on a graph, or even by just pretending that the trend through the plotted data is something different by adding a number to its slope. It’s an inferential change, in this latter case, but you’re still changing the data by pretending it says something it doesn’t.
And the title of the post, which you say is inaccurate, reads “NASA: Global Warming Observations Need a Further 19% UPWARD Adjustment.” Is this title supported by the press release? Well, in the opening sentence, the press release states that a “new NASA-led study finds that almost one-fifth of the global warming that has occurred in the past 150 years has been missed by historical records.” Those are their words – not mine. As is the language that “these quirks [in the historical record] hide around 19 percent of global air-temperature warming since the 1860s.” These quotes certainly imply that historical data needs an upward adjustment in its trend line. You’re clearly ignoring the plain meaning of the word “need” in the title. And note that they’re not complaining about methods; they’re complaining about the deficiencies in the data.
Again, think through your position for consistency. The whole thrust of this paper is that the historical data is supposedly inaccurate because it has quirks that “hide” 19% of the “real” trend shown by the models. The paper supposedly demonstrates that the trend shown by the models is the “real” trend by validating the model using the historical temperature record WITHOUT MODIFYING IT. If the paper suggests as you acknowledge, a “better way” of deducing a time-plotted trend from the historical record different than the existing time-plotted trend, wouldn’t this of necessity mean that the plotted observational data – through which the trend line that supposedly validated the computer models was drawn through – would have to be changed if that same data were to have a 19% steeper trend line through it?

Reply to  Nick Stokes
July 25, 2016 1:46 am

It’s true that the press release reflects a journalist’s shaky grasp of the paper. So when he says, for example, that warming was “missed by historical records”, that is a muddled rendition of the main issue in the paper, which is the limited coverage of the HADCRUT set. But you can’t correct that coverage by adjusting observations, and there is no suggestion of such adjustment in the abstract, nor in Cowtan’s extended summary.
” If the plotted, annual or monthly points in these graphs qualify as “data” – and they do”
Well, they aren’t observations. And they aren’t things that you would, in normal usage, adjust. They are computed from actual monthly station (and other) data by a complicated spatial process during which many decisions are made. And these decisions can be reviewed and improved. More relevantly for this paper, their consistency (model average vs index) can be checked.
The paper is in fact about sensitivity, and in particular TCR. And what they do do is stated in the last sentence of the abstract:
“Correcting for these biases and accounting for wider uncertainties in radiative forcing based on recent evidence, we infer an observation-based best estimate for TCR of 1.66 °C, with a 5–95% range of 1.0–3.3 °C, consistent with the climate models considered in the IPCC 5th Assessment Report.”
IOW, they identify a bias and correct (adjust) their estimate of TCR. This is a long way from adjusting observations.

Hugs
Reply to  Nick Stokes
July 25, 2016 3:40 am

Nick, thanks for stopping by.
Eric said
‘NASA researcher Mark Richardson has completed a study which compares historical observations with climate model output, and has concluded that historical observations have to be adjusted, to reconcile them with the climate models.’
This is not right; the observations are not adjusted, only how the model runs are compared to them.

Reply to  Nick Stokes
July 25, 2016 10:18 am

Why, of course Mosh is right; Mosh and Stokes are always right, according to Stokes and Mosh. There, I just did what this NASA study did.

Leonard Lane
Reply to  Nick Stokes
July 26, 2016 11:22 pm

So it is OK to take the Arctic, where there is little data, claim it reduces the warming in modeled data anomalies for the entire planet, then use it to estimate something with a failed climate model anomaly, write it up, and call it the truth?
Hmm, fundamental NASA research techniques huh?

Bartemis
Reply to  Steven Mosher
July 24, 2016 11:37 pm

Steven Mosher @ July 24, 2016 at 12:42 pm
“So, they are not talking about adjustments to data. It’s about comparing apples with apples.”
Not quite. They’re comparing apples to what may be apples. It is not established yet that they are apples. It is only a possibility. That means that the warming rate discrepancy on its own may not invalidate the models. But, failure to invalidate does not mean the models are validated.
Nick Stokes July 24, 2016 at 6:23 pm
“They are suggesting a better way of deducing the global trend from data, not adjusting the data itself.”
Again, no. They are suggesting one possibility for which the models could be accurate even though the data disagree with them. It is a long leap of faith from there to claim that this establishes a better way to deduce the global trend.
The models are only a possibility. The data do not uniquely fit them out of all the possible actual dynamics. Ergo, you can draw no firm conclusions based on mere consistency over a finite region of time and space.

Reply to  Bartemis
July 25, 2016 12:20 am

“They are suggesting one possibility for which the models could be accurate even though the data disagree with them.”
Saying the models disagree with data is meaningless on its own. They are a mass of disparate numbers. It only becomes meaningful if you deduce from each of them statistics, like trend, which are supposed to be comparable. Apples. And that is what the issue is here – which statistics are comparable. Richardson et al showed that if you use the reduced coverage of HADCRUT for both, you get agreement. That doesn’t prove that either is right, but it gives confidence.

Kurt
Reply to  Bartemis
July 25, 2016 1:12 am

“Richardson et al showed that if you use the reduced coverage of HADCRUT for both, you get agreement. That doesn’t prove that either is right, but it gives confidence.”
Richardson et al., at least in the University of York web post you linked to above, didn’t say anything at all about confidence in which was right. In fact, they ducked the question by simply saying that the observed data and the model data measure different things, and that since TCR was defined as relating to air temperature, the model data should be used as a representation for TCR and the observed data used for some other metric, TBD later.
Also, the apples-to-apples comparison showed agreement with the lower trend. We should at least be able to conclude that the observational data does not agree with the higher trend of the model. It may not refute the higher model trend, but it does not agree with it. Richardson et al. seem to make this assessment:
“We draw the weaker conclusion that the historical record offers no reason to doubt the estimates of climate sensitivity from climate models.”
We don’t have the full paper, but this seems like another example of a government agency grossly exaggerating the published conclusions of a scientific article. Shocking, I know.

Reply to  Bartemis
July 25, 2016 4:15 am

“We should at least be able to conclude that the observational data does not agree with higher trend of the model.”
No, what it shows is that when reduced to the lower coverage of HADCRUT 4 (main effect) and with an SST estimate for ocean (and ice border estimated), the model results agree with HADCRUT. The reasonable expectation is that if HADCRUT’s limitations could be overcome and it was extended to the complete coverage and consistent air measure of the models, H4 would match the higher trend. But we don’t know how to do that, so it is not proved.

Bartemis
Reply to  Bartemis
July 25, 2016 1:41 pm

“That doesn’t prove that either is right, but it gives confidence.”
Only if you are predisposed to it. In reality, all it shows is effectively the agreement of a curve fit over a finite interval. Extrapolating a curve fit beyond the region of the fit is always fraught with peril.

Science or Fiction
July 24, 2016 1:00 pm

“The agency freely shares this unique knowledge and works with institutions around the world to gain new insights into how our planet is changing.”
And:
“Read more (paywalled): http://www.nature.com/nclimate/journal/vaop/ncurrent/full/nclimate3066.html
The quirks at NASA wouldn’t discover an inconsistency even if it was sitting on their nose.

gnomish
Reply to  Science or Fiction
July 25, 2016 10:40 am

and unpaywalled. this is here. this is now. this is a freakshow, baby, anyhow:
richardson2016 – Reconciled climate response estimates from climate models and the energy budget of Earth
https://www.sendspace.com/file/vhn1kk
P-}

gnomish
Reply to  gnomish
July 25, 2016 10:41 am

i mean- all these comments and nobody read it… omigosh…
get some.

Gamecock
July 24, 2016 1:00 pm

The future isn’t what it used to be.

July 24, 2016 1:07 pm

When it comes to climate, adjustments are endless. And the numbers all agree; it’s magical. CO2 has the ability to change the past. And another study that will end up being quoted as fact from a guess.
The prediction rate is near perfect at predicting that the numbers CAGW adjusted and readjusted will have to be readjusted again.
Oh, oh… we won’t need air conditioners (the new evil kid in town) if we go into a temperature decline. But with adjustments I’m certain that whatever the future holds, it will be, regardless of reality, hotter than ever.

Science or Fiction
July 24, 2016 1:07 pm

How come the press releases at NASA never provide the full title of the paper, identify the authors, or provide a link to the paper itself?
Isn’t NASA paid by taxpayers? Shouldn’t products like this paper be freely available for public scrutiny?

Leonard Lane
Reply to  Science or Fiction
July 26, 2016 11:29 pm

Why, Science or Fiction, how dare you ask for transparency from NASA. Of course they will ask taxpayers who want to read the article to pay for the work they have already paid for, and the overhead, and the bonuses, and for … well, for everything in that rotting bureaucracy. How far they have fallen since 1969.

David S
July 24, 2016 1:26 pm

I’m a layman when it comes to climate science, but I have always wondered why current temperatures haven’t been adjusted downwards to account for the Urban Heat Island effect. I would think that would have wiped out most of the recent global warming and would have created more relevant apples for comparison with those referred to in this study.

son of mulder
Reply to  David S
July 24, 2016 2:07 pm

Add to that the removal of atmospheric sulphur by the clean air acts in the ’50s, ’60s, ’70s and ’80s, which would have caused warming. And we’re only laymen.

Editor
July 24, 2016 1:29 pm

I am altering the data. Pray I don’t alter it any further.

Christopher Hanley
July 24, 2016 1:43 pm

The Cook/Nuccitelli/Lewandowsky/Oreskes connection explains a lot.
‘Richardson M, Selected Publications:
Cook J, Oreskes N, Doran P, Anderegg W, Verheggen B, Maibach E, Carlton S, Lewandowsky S, Skuce A, Green S, Nuccitelli D, Jacobs P, Richardson M, Winkler B, Painting R, Rice K (2016) Consensus on consensus: a synthesis of estimates on human-caused global warming. Environmental Research Letters doi: 10.1088/1748-9326/11/4/048002
Cook J, Nuccitelli D, Green SA, Richardson M, Winkler B, Painting R, Way R, Jacobs P, Skuce A (2013) Quantifying the consensus on anthropogenic global warming in the scientific literature. Environmental Research Letters 8 (2) doi: 10.1088/1748-9326/8/2/024024.
Nuccitelli D, Cowtan K, Jacobs P, Richardson M, Way R, Blackburn A-M, Stolpe MB, Cook J (2013) Comment on “Cosmic-Ray Driven Reaction and Greenhouse Effect of Halogenated Molecules: Culprits for Atmospheric Ozone Depletion and Global Climate Change” International Journal of Modern Physics B doi: 10.1142/S0217979214820037’.

Icepilot
July 24, 2016 1:51 pm

When Models & Data conflict => Change the Data … then hide, or better, destroy the original Data. This is much easier if you control your own Server.

July 24, 2016 2:07 pm

Why bother looking out your window? Just look at your Windows.
Talk about an alternate reality!

Svend Ferdinandsen
July 24, 2016 2:39 pm

“Because it isn’t possible to add more measurements from the past, the researchers instead set up the climate models to mimic the limited coverage in the historical records.”
And climate models are very poor at regional climate, so how can they tell us anything about the accuracy?
The Arctic is especially difficult, because the models operate with anomalies from a temperature you don’t know, and the Arctic is very sensitive to absolute temperatures around 0 °C.

marty
July 24, 2016 2:52 pm

I am waiting for the moment they adjust the 1870 global temperature to −3 K 🙂

Gerald Machnee
July 24, 2016 2:59 pm

I am not sure what Mosh and Stokes said, but I have a solution.
Close all the observation stations. Use only models. When the media need a temperature for a location, find it on the model curve and read it off. SUCCESS!! 100% accuracy.

Reply to  Gerald Machnee
July 24, 2016 4:07 pm

It is simple.
Suppose you have a model. It covers the whole Earth.
Suppose it shows no warming from 80° S to 80° N.
Suppose it shows 10 °C of warming at both poles.
You average everything and find the global average is 0.5 °C, because the poles are a small area.
But still, your average shows some warming.
Now you have observations. They only cover 60° S to 70° N.
Their average shows no warming.
So the authors are pointing out that the models show warming where we have no observations to compare.
But if we compared them where they both have data, they would look correct.
A smart skeptic would not compare apples and oranges.
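
[To make the arithmetic in this hypothetical concrete, here is a minimal Python sketch; the 10 °C polar warming and the coverage bands are taken from the comment above, everything else is illustrative, and the exact global mean depends on the grid. –ed.]

import numpy as np

# Mosher's hypothetical: no warming from 80S to 80N, 10 C at both poles.
lats = np.arange(-89.5, 90.0, 1.0)            # 1-degree latitude band centres
weights = np.cos(np.deg2rad(lats))            # area weight of each band
warming = np.where(np.abs(lats) > 80.0, 10.0, 0.0)

# Full-coverage (model) global mean: small but nonzero.
full_mean = np.average(warming, weights=weights)

# Observation-like coverage, 60S to 70N: the average is exactly zero here.
mask = (lats >= -60.0) & (lats <= 70.0)
obs_mean = np.average(warming[mask], weights=weights[mask])

print(f"model global mean:      {full_mean:+.3f} C")
print(f"masked to obs coverage: {obs_mean:+.3f} C")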

Not Oscar, just a grouch
Reply to  Steven Mosher
July 24, 2016 10:48 pm

Try this point of view.
Earth 200k YBP equals Earth 400k YBP (approximately)
Earth (with no anthropogenic greenhouse gases) does not equal Earth (with anthropogenic greenhouse gases).
Now, tell me the second statement is not true. You guys say the two are not equal, constantly, by telling us how much we have altered the climate system. You seem to have no reason to lie about that, so that statement must be true. Therefore, the only possible comparison is apples to oranges. I agree with you. It is simple.

Bartemis
Reply to  Steven Mosher
July 24, 2016 11:54 pm

“But if we compared them where they both have data they look correct.”
So what? I can fit a quadratic polynomial to an exponential curve, and show that they agree very closely over a particular domain. Outside of that domain, however, there is no guarantee that they will continue to track and, indeed, they will not for long.
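
[Bartemis’s point is easy to verify numerically. A small illustrative sketch; the fitting domain and test point are chosen arbitrarily, nothing here comes from the paper. –ed.]

import numpy as np

# Fit a quadratic to exp(x) on [0, 2]: the two curves agree closely there.
x_fit = np.linspace(0.0, 2.0, 50)
coeffs = np.polyfit(x_fit, np.exp(x_fit), deg=2)

in_domain = np.max(np.abs(np.polyval(coeffs, x_fit) - np.exp(x_fit)))
out_domain = abs(np.polyval(coeffs, 6.0) - np.exp(6.0))

print(f"max error on [0, 2]: {in_domain:.3f}")   # small: looks 'correct'
print(f"error at x = 6:      {out_domain:.1f}")  # large: agreement was only local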

Leonard Lane
Reply to  Gerald Machnee
July 26, 2016 11:32 pm

Better yet, vote for Trump and hope he eliminates all the global warming fraud in NASA, and throughout the bloated government.

Steve Oregon
July 24, 2016 3:01 pm

Is Mark Richardson a quirkologist? That’s nice.

whiten
July 24, 2016 3:07 pm

Funny.
If I have got this right, this seems to be another attempt to save global warming by adding 19% more warming via the Arctic.
So the claim is that there is some loss of global warming due to bad measurements in the Arctic, meaning that what has been lost is natural global warming. Natural, yes: any loss of warming in the Arctic is a loss of natural warming. Increasing the warming via the Arctic increases the natural warming quantity, whereas the anthropogenic kind, if it exists at all, should show up in the tropics, not the Arctic.
If nothing is missing in the tropics, then nothing anthropogenic is missing.
At the end of the day, their models are either wrong or do not support AGW at all.
Really funny.
The AGW signal is missing as a result of the missing hot spots in the tropics; did these guys forget that so soon?
No amount of “correction” can fabricate that.
But anyway, it would be strange to contemplate an easy end to science weirding.
Hopefully I have not got this backwards.
cheers

July 24, 2016 3:25 pm

And the ‘enlightened’ ones (the same ones whose collective stupidity was leveraged to pass Obamacare, as admitted by its architect, Jonathan Gruber) wonder why we want to check their work. This is just going to make me even more suspicious of climate ‘science.’
FWIW, I also noticed on the JPL kids’ climate page last year that they show a close correlation between Antarctic temperature and CO2 but make absolutely NO mention that the temperature rises first.

July 24, 2016 3:35 pm

Just throw in a volcano or two and watch the numbers jump.

July 24, 2016 3:58 pm

It would seem that soon NASA will replace observations with model simulation numbers. Why even correlate to erroneous real world data?

July 24, 2016 3:59 pm

When you have models that are total garbage, adjusting the real-world data to match becomes a massive, full-time job for a large staff.

Steve Oregon
July 24, 2016 4:10 pm

If they add 19% we’re all a heck of a lot closer to dying.
Imagine the climate refugees when the news gets out.
Floridians will be moving to an unprepared Fargo.
And stuff like that there.
Kidding aside: are we in an era of human degradation or what?
The combined ignorance and dishonesty that is commonplace and acceptable makes me wonder where all of this is heading.
If we can’t have accurate information and honesty guiding the human race what are we to expect?
It can’t be good.

Eamon Butler
July 24, 2016 4:17 pm

Still discovering that they got it wrong. I may have misunderstood this, but it seems they want to introduce warmer historical Arctic records that don’t exist. Isn’t that going against the grain, given that they previously made the past “cooler”? Surely a warmer past is not part of the narrative.
If cooling is imminent, they will need to suppress current temps. Logic and history tell us that it won’t keep warming forever. Even if the whole alarming thing were correct, something more powerful has kept the warming in check, and indeed brought it to a halt for the best part of twenty years. Not insignificant, I think.
I also don’t think that comparing today’s thermometer observations with a guess at what it may have been in a remote, inaccessible part of the Arctic (or anywhere else) 150/200/500… years ago is an apples-to-apples comparison.
Eamon.

Reply to  Eamon Butler
July 24, 2016 7:09 pm

“Still discovering that they got it wrong. I may have misunderstood this but, it seems they want to introduce warmer historical Arctic records that don’t exist. Isn’t that going against the grain where they previously made the past ”cooler”? Surely a warmer past is not part of the narrative.”
The records EXIST.
The problem is that
GISS cannot use them,
HADCRUT cannot use them.
WHY?
1. Because they both use ANOMALIES.
2. To calculate an anomaly, the series needs data in either the 1951–1980 period (GISS) or the 1961–1990 period (HADCRUT).
Let me make the point with a huge HYPERBOLE.
Imagine that tomorrow we instrumented every square inch of the Arctic and measured the temperature for 30 years.
They would still not be able to use this data, because they rely on an anomaly method that requires a base period.
BUT, if you work in absolute temperature, you can use this data. So for the Arctic (take Greenland) there is a good amount of data that neither HADCRUT nor GISS can use.
HADCRUT just leaves the Arctic blank.
GISS extrapolates.
Well, you don’t have to do either one of those if you just use the data that is available.
And there is more available than we use, so we can actually do out-of-sample testing,
which is fun, but nobody cares much.
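
[A toy illustration of the base-period constraint described above; the station data are invented, and the 1961–1990 base period is HADCRUT’s. –ed.]

import numpy as np

years = np.arange(1950, 2021)

def anomaly(series, base=(1961, 1990)):
    # Anomalies need a base-period mean; a station with no data in the
    # base period is unusable for this method.
    in_base = (years >= base[0]) & (years <= base[1]) & ~np.isnan(series)
    if not in_base.any():
        return None
    return series - series[in_base].mean()

old_station = 5.0 + 0.01 * (years - 1950)                  # full record
new_station = np.full(years.shape, np.nan)                 # starts in 1995
new_station[years >= 1995] = 4.0 + 0.01 * (years - 1950)

for name, s in [("old station", old_station), ("new station", new_station)]:
    print(name, "usable for anomalies:", anomaly(s) is not None)

# An absolute-temperature average can still use the late-starting station:
both = np.vstack([old_station, new_station])
print("absolute mean, 2000 on:", round(float(np.nanmean(both[:, years >= 2000])), 2))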

Hugs
Reply to  Steven Mosher
July 25, 2016 3:35 am

Thanks for explaining this.

Philip Schaeffer
Reply to  Steven Mosher
July 25, 2016 7:57 pm

Indeed. It’s amazing how someone who is accused by some here of distorting the temperature record to ensure his paycheck spends so much time explaining the science, with insights that are hard to come by if you don’t actively work in the field.

Sundance
July 24, 2016 4:26 pm

In other news climate scientists are introducing their version of the casino game of roulette. Their version consists of a wheel having only red numbers and a roulette table only allowing bets on black but they assure the public that the game is fair. 😉

Steve G
July 24, 2016 4:42 pm

Never mind what the real-world data says. Rely on our data, since we are the ones who get paid millions and want to continue to receive millions of dollars from the government. Because we know what’s best, because we get paid lots of money and want to continue to get paid lots of money. What a joke.

Mjw
July 24, 2016 5:01 pm

It would be a lot cheaper for all concerned if we closed all weather observation stations and just relied on computer models to tell us what the temperature is.

Bill Illis
July 24, 2016 5:13 pm

There are records that go back to the 1700s.
The Royal Society and the Hudson’s Bay Company teamed up in 1769 to produce one of the earliest temperature records, based on the accurate thermometer that had only recently been invented.
I think this was compiled by Tim Ball for his PhD thesis, based on Hudson’s Bay Company records.
There is no long-term warming trend in these earliest of all records for the quasi-Arctic.
http://www.john-daly.com/p-bears/hudson%20bay.gif
http://www.john-daly.com/p-bears/hudson%201769-2002.gif
http://www.john-daly.com/p-bears/hudson%201895-2002.gif
And then Greenland temperature records back to 1880.
http://appinsys.com/globalwarming/RS_Greenland_files/image023.gif
“NOT A CHANCE” that this study used ANY of these records.

Bill Illis
Reply to  Bill Illis
July 25, 2016 5:42 am

Yet somehow your Berkeley Earth regional summary produces 3.0 °C of warming for Nunavut and Greenland based on these records. Just as when I checked my own location and found 3.0 °C of warming where the quality-controlled figures show just 0.5 °C.
Your “break-point” algorithm is biased upward by more than 100%. The increase is more than double that of the quality-controlled station records in every case.

K
July 24, 2016 6:32 pm

Well, if you’re going to alter the data to make it warmer, what better time to release it than the third week of July, during a heat wave.

Leonard Lane
Reply to  K
July 26, 2016 11:41 pm

K, you know that was purely accidental, as was Hansen’s and Wirth’s adjusting of the thermostats higher at the Congressional hearing. All purely chance. Simply chance. NASA does not employ scoundrels; it only hires the best, the very best scientists in the world, for the space program. The nuts, knuckle-draggers, and other assorted undesirables NASA puts in its climate change studies programs.

Asp
July 24, 2016 6:33 pm

“The future is certain, it is only the past that needs to be managed!” Comment emanating from inside the Soviet bloc during the ‘Cold War’.

MikeH
July 24, 2016 6:34 pm

Could you imagine if I had a failing business (funded by government grants), but stated that my business model CLEARLY shows a successful business; all I need to do is magically increase my profit on paper by 19%. Shazam! A profitable business. Here, Mr. Taxpayer, is my inflated bill, which includes a generous compensation package for me and my board of directors.
Any company doing such a thing would rightfully be hauled in by the S.E.C. and the CEO would be in handcuffs. If a corporation would be labeled ‘evil’ for such a practice, why not a government entity?
Why not? You can’t blame them; they’re trying to save the planet.

July 24, 2016 6:44 pm

Let’s see if I can help.
One problem they have is that they use HADCRUT.
HADCRUT does not use all the data that is available for the poles.
Remember, HADCRUT uses anomalies, so it cannot use data that starts after 1990 (like CRN),
or some of the data that is available for the interior of Greenland.
Consequently HADCRUT misses some of the accelerated warming in the Arctic.
BUT, and here is what a smart skeptic would note:
EVEN IF you use a global series that does have better Arctic coverage, the models STILL warm the poles too fast.
http://static.berkeleyearth.org/graphics/figure52.pdf

catcracking
Reply to  Steven Mosher
July 25, 2016 3:23 am

Steven
Next you will be telling us that skeptics photoshopped this picture. [embedded image]

ilma630
Reply to  catcracking
July 25, 2016 3:31 am

There are other photos like this. You should add the date when this was taken though, as that’s important.

Bob Boder
Reply to  catcracking
July 25, 2016 11:27 am

Catcracking
Mosh and his ilk are trying as hard as they can to push the warming into a trend that matches the models. They can’t in any way have the warming in the ’30s and ’40s be as great as the warming now, nor allow the cooling in the ’50s, ’60s and ’70s, so they want everyone to believe that they keep finding things that prove the warming happened later, not earlier. Mosh will scream that the adjustments to the raw data actually make the trend cooler, not warmer. That is true, but they also flatten out the warming in the ’30s and ’40s. I used to think he was a credible source, but in the last year it has become more and more apparent that he is not, and that he is agenda-driven and fighting to keep his paycheck.

Bartemis
Reply to  Steven Mosher
July 25, 2016 2:59 pm

I think you guys missed the, for want of a better word or phrase, olive branch Steve partially extended:

“EVEN IF you use a global series that does have better arctic coverage, the models STILL warm the poles too fast.”

MikeH
July 24, 2016 6:52 pm

Early in the comments, Phillip Bratby asked why JPL is doing climate research. Dudey responded:
“JPL is a captive FFRDC (Federally Funded Research and Development Center) for NASA. See https://en.wikipedia.org/wiki/Federally_funded_research_and_development_centers
OK, but that still raises the question: why is NASA (the National Aeronautics and Space Administration) doing climate research at all? Shouldn’t all of this be NOAA, the National OCEANIC and ATMOSPHERIC Administration? NOAA should be contracting NASA to build, launch and maintain the satellites, but the research should be headed by NOAA.
I know the answer: everyone wants a piece of the pie. But I just wanted to point out the obvious.

Clyde Spencer
Reply to  MikeH
July 24, 2016 8:29 pm

MikeH,
Not unlike the US Geological Survey, which now has a large contingent of biologists on its staff, studying polar bears and fish among other things. Despite the name, it is being turned into the US BS.

Reply to  MikeH
July 25, 2016 10:43 am

MikeH, almost all of the federal alphabet soup of agencies duplicate, triplicate and quadruplicate work that should be done only once, by one agency. How else can you get to $19.3 trillion in debt?

Steve from Rockwood
July 24, 2016 7:31 pm

Thank God the models are wrong; otherwise they’d all be out of work.

Eugene WR Gallun
July 24, 2016 7:35 pm

EXPLAINING THE TRICK
(yes, oranges are being compared to apples)
The claim is made that the past temperature data has certain flaws that lower global warming predictions. Then it is said if those flaws are applied against the models then suddenly the predictions of the models lower to match the predictions of the past temperature data.
(The conclusion being that only if you deliberately apply flaws to the models do they predict wrongly.)
The trick is that the models have totally different flaws than the supposed flaws in the temperature data.
They are actually making AN ARBITRARY GLOBAL CORRECTION (even though they claim to find its value in past temperature data) to the models that in no sense corrects for THE ENTIRELY DIFFERENT FLAWS THAT MAKE THE MODELS WORTHLESS.
There is absolutely no proof here that the models are correct. There is plenty of proof that the people who wrote this paper are idiots.
And they are comparing oranges to apples. The flaws in the temperature data are oranges. The flaws in the models are apples. They can’t be used against one another.

Reply to  Eugene WR Gallun
July 25, 2016 10:34 am

Thank you, Eugene.

Eugene WR Gallun
Reply to  Eugene WR Gallun
July 25, 2016 2:04 pm

MOD — Would you replace what I wrote just above with this? What I wrote above, though not wrong, is a confused mess. Sometimes you can’t find the words. Below is a simpler explanation. Whether you can or can’t, thank you. EWRG
EXPLAINING THE TRICK
The claim is made that the past temperature data have certain flaws that lower global warming predictions. Then it is said that if those flaws are applied to the temperature data used in the models, the predictions of the models suddenly drop to match the predictions of the flawed past temperature data.
The conclusion drawn is that the models predict wrongly only if you deliberately apply flaws to them.
The first trick is that the models have totally different flaws than the supposed flaws in the temperature data.
The models are assumed to be perfect and their flaws and failure to mimic reality are not addressed.
The second trick is that when you blow off the bullshit, what the authors are really saying is that the temperature data they use in their models is correct — and the measured real-world temperature data is incorrect and must be altered to match the temperature data already being used in the models. In other words, real-world data must be altered so that when it is plugged into the models it gives the modelers the answers they want. The models are perfect; it is the real-world data that is wrong. (That, of course, is the main pillar of climate science: the real-world data is wrong.)
There is absolutely no proof here that the models are correct. There is plenty of proof that the people who wrote this paper are grifters.
Eugene WR Gallun

Louis
July 24, 2016 8:17 pm

“water warms less than air”
Don’t they mean that water warms more slowly than air? It does take longer to raise the temperature of water than of air. But the way it is worded implies that a container of cool water placed in a room set to a fixed warm temperature would never warm to the same temperature as the air in the room. Did this come from scientists?

Philip Schaeffer
July 24, 2016 8:36 pm

The headline is simply wrong.
“NASA: Global Warming Observations Need a Further 19% UPWARD Adjustment”
OK, somebody quote the bit in the paper or abstract where they talk about changing observations by 19%. It simply doesn’t exist in the paper or abstract.
Simple as that. Outright rubbish. There can’t have been much skeptical thinking behind that claim, and yet here it is. If it isn’t a deliberate misrepresentation, then at the very least it calls into question the author’s competence to understand the paper in question.
Either way, it is a shame to see such low-quality work published here.

Bindidon
Reply to  Philip Schaeffer
July 25, 2016 10:55 am

I agree.

July 24, 2016 9:42 pm

So with all the massive adjustments, we have a sensitivity of 1.66C per doubling?
That’s great news – they’ve completely discredited the fraudulent and absurd upper-end predictions of 4C per doubling. With no possibility of CO2 doubling that means we can expect no more than about 1C this century, right? That should take us close to the 2C that peer reviewed science ™ tells us is net positive for the world.

Reply to  Andrew
July 25, 2016 4:04 am

“they’ve completely discredited the fraudulent and absurd upper-end predictions of 4C per doubling.”
No, you have mixed up Transient Climate Response, their 1.66, with Equilibrium Climate Sensitivity.
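
[For readers unfamiliar with the distinction, a standard two-parameter energy-balance sketch (an editorial gloss, not from the paper), with \(F_{2\times}\) the forcing from doubled CO2, \(\lambda\) the climate feedback parameter and \(\kappa\) an ocean heat-uptake coefficient:

\[ \mathrm{ECS} = \frac{F_{2\times}}{\lambda}, \qquad \mathrm{TCR} \approx \frac{F_{2\times}}{\lambda + \kappa} \;<\; \mathrm{ECS} \]

so a transient response near 1.66 °C is consistent with a distinctly higher equilibrium sensitivity. –ed.]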

Bob Boder
Reply to  Nick Stokes
July 25, 2016 11:28 am

Nick Stokes
No, you are rewriting history; otherwise the C in CAGW would never have been there.

KO
July 24, 2016 10:53 pm

It seems NASA, once a bastion of hard science, has forgotten its raison d’être. Imagine running a Moon mission on this basis: as the astronauts passed Mars, outbound to Alpha Centauri, no doubt the technicians in the control room would be saying that the Solar System and the galaxy needed to be adjusted to fit the “computer modelling”… Talk about regression to flat-Earth thinking.

Ed Zuiderwijk
July 25, 2016 12:42 am

It’s almost funny. But it will get funnier when they discover that the models are seriously flawed. They are, because they have a much too high climate forcing by CO2 built in.
So, when they discover the error they will be faced with this dilemma: how to re-adjust the now-adjusted earlier adjustments. It will come straight out of a Monty Python script.

ilma630
July 25, 2016 1:38 am

It beggars belief that they say they can take new observational measurements IN THE FUTURE and correlate them with models NOW! – “With this modification, the models and observations largely agree on expected near-term global warming”. That’s astrology, *not* science!

Mjw
July 25, 2016 1:46 am

Does this mean that every single Climate Model generated up until this point is wrong?
(Rhetorical question)

Johann Wundersamer
July 25, 2016 2:41 am

Yes, Nick Stokes –
The study applied the quirks in the historical records to climate model output and then performed the same calculations on both the models and the observations to make the first true apples-to-apples comparison of warming rates. With this modification, the models and observations largely agree on expected near-term global warming.
________________________________________
short :
applied the quirks in the historical records to climate model output and then performed the same calculations on both the models and the observations.
With this modification, the models and observations largely agree on expected near-term global warming.
________________________________________
gives :
: performing calculations on historical records, climate models, climate model output,
: AND real-world observations
: finally does the trick.
________________________________________
You’re the one to do them all right!

Johann Wundersamer
July 25, 2016 2:47 am

[ can’t help, mod, in German that’s spelled ‘Arschkriecher’ (roughly: ‘brown-noser’) ]

Johann Wundersamer
July 25, 2016 2:52 am

http://www.google.at/search?site=&source=hp&ei=n-CVV7zSIIzPgAaSy6awDw&q=er+m%C3%B6ge+mich+im+arsche+k%C3%BCssen+&oq=er+m%C3%B6ge+mich+im+arsche+k%C3%BCssen+&gs_l=mobile-gws-hp.12…5133.40588.0.42962.30.30.0.7.7.0.1165.5878.0j27j4-1j0j1j1.30.0….0…1c.1.64.mobile-gws-hp..0.19.3495.3..0j41j0i3j0i22i30j0i19j0i13i30i19j0i22i30i19j30i10.q094h5RYtgg

tadchem
July 25, 2016 4:30 am

I would like to see a survey article compiling all the ‘adjustments’ that have been made to the raw data and the adjustments to the adjusted (and readjusted) data. I suspect that there is a continuing growth in the size of the adjustments.

ilma630
Reply to  tadchem
July 25, 2016 4:38 am

Someone like Steve Goddard (Tony Heller) is probably well suited for this. He’s been tracking these ‘adjustments’ and calling them out for a long time. It would be good to have a group of non-alarmist folk compile all of these with Paul into a blistering report to be sent to all major governments, including the new UK Secretary of State for Business, Energy & Industrial Strategy, Greg Clark.

ilma630
July 25, 2016 5:27 am

The key question to ask now, of course, is: “Do man’s CO2 emissions have *any* effect on temperature or climate?” All the real-world observational evidence seems to point to zero effect (or an effect too small to reliably measure, and so just noise).

Bindidon
July 25, 2016 6:18 am

Paul Homewood on July 24, 2016 at 8:24 am
Rather than commenting on how easy it is to misunderstand information about the relations between models and real data, I prefer to react to comments making wrong assumptions about the real data.
The Arctic is no warmer now than the 1930s
This, Paul, is a wrong assumption. One of its many origins is a plot shown by Climate4you:
http://www.climate4you.com/images/MAAT%2070-90N%20HadCRUT4%20Since1900.gif
The plot in itself is correct, as is the data it shows.
1. You will nevertheless obtain a better estimate of the temperatures there during the last century by processing NOAA’s unadjusted GHCN land station record and averaging the data produced between January 1880 and today by all stations located above 70° N.
Suddenly you discover that neither today’s temperatures nor even those of your golden 1930s were the highest! A descending sort of the warmest months (each line gives the year and the month number) looks like this for the first twenty:
1909 8
1905 8
1886 7
1896 7
1905 7
1915 7
1882 8
1894 8
1886 8
1888 7
1888 8
1904 8
1896 8
1890 7
1890 8
1901 7
1887 7
1911 7
1898 7
1882 7
2. But the main problem with Climate4you’s plot is elsewhere: it shows only data for the regions above 70° N, whereas the Arctic regions in fact start at 60° N (look, for example, at how the polar regions are defined in the satellite-based measurements of the troposphere, UAH and RSS).
The descending sort of the warmest months above 60° N looks like this for the first twenty:
1915 7
1894 7
1927 7
2010 7
1901 7
2005 7
2003 7
1896 7
1916 7
2007 7
2012 7
1911 7
1919 7
2013 7
1908 7
1913 7
1925 7
1920 7
2011 7
1899 7
And suddenly you discover that among these twenty warmest months, one of the first five, four of the first ten and seven of the twenty belong to years following 2000!!! And you will be disappointed: none from the 1930s era is present in the list.
3. This rather dramatic change in the top-20 ranking will certainly be interpreted by anybody as a hint that the latitude stripe 60° N – 70° N is much more subject to warming than the stripe above it. And indeed: restricting the averaging process to stations located therein gives you the following list:
2010 7
2003 7
2005 7
2012 7
1927 7
2007 7
1915 7
2013 7
2011 7
2014 7
2004 7
1894 7
2006 7
2008 7
1925 7
1994 7
2001 7
1936 7
1938 7
1901 7
A full podium for the years after 2000: eight of them within the ten warmest, and thirteen within the top 20. Only two years of the 1930s era appear, at positions 18 and 19.
4. A sort in ascending order gives you the twenty coldest months since January 1880 for the region 60° N – 90° N:
1982 1
1979 2
1966 1
1972 1
1969 1
1985 2
1971 1
1966 2
1968 1
1989 1
1974 1
1975 1
1970 1
1976 2
1979 1
1967 1
1951 1
1965 2
1980 1
1987 1
All coldest months between 1880 and 2016 lie in the period 1950-1990.
5. Warming rate for this Arctic region since January 1979: 0.83 ± 0.49 °C per decade.
6. Data source for the unadjusted GHCN record:
http://www1.ncdc.noaa.gov/pub/data/ghcn/v3/ghcnm.tavg.latest.qcu.tar.gz

Bindidon
Reply to  Bindidon
July 25, 2016 6:48 am

A “detail” I forgot to mention: this warming since 1979 (the satellite era) should not hide the fact that the region nevertheless shows a cooling trend when considered since 1880: −0.35 ± 0.07 °C.

Bob Boder
Reply to  Bindidon
July 25, 2016 11:33 am

As usual, the attempt is to eliminate the warming in the ’30s to try to flatten out the trend line. This is getting ridiculous.

Bindidon
Reply to  Bindidon
July 25, 2016 3:42 pm

Bob Boder on July 25, 2016 at 11:33 am
as usual the attempt to eliminate the warming in the 30s to try and flatten out the trend line, this is getting ridiculous.
What is really ridiculous here, Bob Boder, is that you suspect something but in fact aren’t able to prove anything wrong.
So get to work, Bob Boder, and
– download the unadjusted GHCN dataset from the URL
http://www1.ncdc.noaa.gov/pub/data/ghcn/v3/ghcnm.tavg.latest.qcu.tar.gz
– select records out of the data file, according to criteria you need (a latitude stripe, a country like e.g. USA or Greenland, etc)
– write a piece of software to average the monthly data in the selected subset
– verify and validate your software using Excel, or whatever else is beyond suspicion of errors
– produce the output according to your selection.
And if you happen to generate anything differing from what I did, come back here and we will clarify who was right. Until you reach that point, I propose you simply shut up.
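
[For anyone wanting to try this recipe, a minimal Python sketch, assuming the GHCN-M v3 fixed-width layout (11-character station ID; year; element; twelve value-plus-flag fields in hundredths of °C; −9999 = missing). The file names below are placeholders; check the exact names and column offsets against the README shipped in the archive. –ed.]

# Average unadjusted GHCN-M v3 TAVG across stations above 60 N, month by month.
def station_latitudes(inv_path):
    lats = {}
    with open(inv_path) as f:
        for line in f:
            lats[line[0:11]] = float(line[12:20])   # ID and latitude columns
    return lats

def monthly_means(dat_path, station_ids):
    sums, counts = {}, {}
    with open(dat_path) as f:
        for line in f:
            sid, elem = line[0:11], line[15:19]
            if elem != "TAVG" or sid not in station_ids:
                continue
            year = int(line[11:15])
            for m in range(12):
                raw = int(line[19 + 8 * m : 24 + 8 * m])
                if raw == -9999:                     # missing month
                    continue
                key = (year, m + 1)
                sums[key] = sums.get(key, 0.0) + raw / 100.0
                counts[key] = counts.get(key, 0) + 1
    return {k: sums[k] / counts[k] for k in sums}

lats = station_latitudes("ghcnm.tavg.v3.inv")            # placeholder name
arctic = {sid for sid, lat in lats.items() if lat >= 60.0}
means = monthly_means("ghcnm.tavg.v3.qcu.dat", arctic)   # placeholder name

# The twenty warmest months, mirroring the descending sort above:
for (year, month), t in sorted(means.items(), key=lambda kv: -kv[1])[:20]:
    print(year, month, round(t, 2))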

July 25, 2016 10:48 am

isn’t this just a very long way of saying that the HadCRUT4 index lacks sufficient information to be used as a proxy for global temperature?

RockyRoad
July 25, 2016 11:11 am

So they admit their models are off by 19%. That they think otherwise shows just how despicably evil and stupid they are.

Solomon Green
July 25, 2016 11:32 am

Nick Stokes,
“It’s true that the press release reflects a journalist’s shaky grasp of the paper. So when he says, for example, that warming was “missed by historical records”, that is a muddled rendition of the main issue in the paper, which is the limited coverage of the HADCRUT set.”
Is Mr. Stokes implying that the NASA scientists who produced the paper have no say in the press release issued by their employer? I cannot believe that any scientist with a shred of integrity would not insist on a correction if the press release relating to their work was not accurate and conveyed a false impression.

Bindidon
Reply to  Solomon Green
July 25, 2016 4:03 pm

You could for example measure the discrepancy between the title of
http://www.nasa.gov/feature/goddard/nasa-study-mass-gains-of-antarctic-ice-sheet-greater-than-losses
and its contents.
It’s amazing.

Joel Snider
July 25, 2016 12:47 pm

This reflects something I’ve tried to explain to people who trust in ‘institutions’ (and it’s even worse with ‘scientific’ institutions, which lay people too often regard as ‘objective’): they are made up of human beings and are only as reputable as the people currently occupying positions within them. And no organization is immune to human failings.
And, in many ways, scientific organizations are MORE susceptible. Consider that we live in a world where Catholic priests, supposedly the very standard of morality, have preyed on children. So why would a scientific body, which has jettisoned morality in favor of ethics, be beyond reproach?
There is the discipline of science, and then there are those who practice it; institutions like NASA are made up of these. And I daresay the current crop makes too many of these once-reputable bodies something less than they were.

ilma630
Reply to  Joel Snider
July 26, 2016 1:29 am

This article by Dr. Tim Ball I think expresses the fundamental problem with climate science, as a symptom of the wider problem in science and science education: https://wattsupwiththat.com/2016/07/24/credibility-loss-in-climate-science-is-part-of-a-wider-malaise-in-science/

Sean
July 25, 2016 12:54 pm

“If we make these adjustments to create fake data that agrees with our models, then our data agrees with our models, ergo we have proven our models.”
Circular logic from NASA’s corrupt group of cargo cult scientists.
How far NASA has fallen.
It’s shameful.

Leonard Lane
Reply to  Sean
July 26, 2016 11:56 pm

Sean, please don’t think for a second that the fall was accidental, due to incompetence, or any such thing; the fall of NASA was deliberate and issue-driven, to satisfy the rank politicians and thereby grab their bags full of taxpayer money.

RockyRoad
Reply to  Sean
July 28, 2016 5:34 am

I wonder if NASA is having problems recruiting astronauts. Who in their right mind would want to ride any of their rockets, considering how they view reality?
For example: “Oops–looks like our rocket will miss Mars by a million miles; I guess we’ll just have to move Mars a million miles so our mission isn’t a complete failure.”

Resourceguy
July 25, 2016 1:31 pm

Science is in full retreat at this point with agency militias taking full advantage of the institutional collapse and bullying the occupied community.

willhaas
July 25, 2016 1:42 pm

So Mother Nature is wrong and the models, whichever one chooses to use, are correct. Hence all measurement data should be thrown out altogether. The world could save a lot of money by no longer taking weather measurements and instead relying 100% on computer simulations. One may see that it is raining outside, but if the models say it is dry, then it is dry, no matter how wet one gets.

Phil
July 25, 2016 1:57 pm

I am confused by climate science – especially the “science of adjustments.” So, the arctic air temperatures are warming faster than the rest of the world. There is less ice cover in the arctic. Ice acts as an insulator that prevents the arctic ocean from transferring heat to the arctic air, thereby trapping the “hidden heat,” right? With less ice cover, wouldn’t that mean that the arctic air temperatures are rising because a lot more heat is being lost by the arctic ocean to the arctic air and then to outer space? Wouldn’t that imply that the arctic SYSTEM is cooling? Just really confused.

July 25, 2016 1:59 pm

I will publish here my e-mail to NASA. Note that I highlighted the terms “quirks” and “space” in my original e-mail.
Dear NASA,
In your recent press release, http://www.jpl.nasa.gov/news/news.php?feature=6576, Carol Rasmussen claims that “global warming that has occurred in the past 150 years has been missed by historical records due to quirks”. Carol claims that “quirks hide around 19 percent of global air-temperature warming since the 1860s”.
However, I see no reference to any scientific publication demonstrating the validity of these hypothetical quirks.
Did she use the “the vantage point of space”? Did she pull such conclusions from her ass? Also, did I miss a class in math about quirks? (Note that these are simple “Yes or no” questions).
P.S. Talking about “Earth’s interconnected natural systems with long-term data records”, how are the outer systems doing? I am kindly directing your attention to “the space”, “the stars”, “the space weather”, or if you prefer, “the space climate” (yes that’s sarcasm), subjects that seem to be much more in scope of National Aeronautics and Space Administration.
Kindly,
A concerned reader

Philip Schaeffer
Reply to  legionnetwork
July 25, 2016 7:10 pm

Should it be assumed that you actually read the paper in question to see if it did explain the answers to your questions?

July 25, 2016 2:19 pm

A new NASA-led study finds that almost one-fifth of the global warming that has occurred in the past 150 years has been missed by historical records

Hmmmm…..maybe they’re trying to save more than just the climate models.
Weren’t there claims that by now there should have been a whole, whole bunch of “climate refugees”, which hasn’t occurred?
Maybe the next step is to claim that past legal immigration to the US wasn’t due to things like the Irish potato famine but to “climate change”?

Dr. Strangelove
July 25, 2016 4:47 pm

Mark Richardson: According to my computer model, one-fifth of the global warming in the past 150 years has been missed by historical records
http://gimmegimmegames.com/wp-content/uploads/2013/09/Pokemon-MMORPG.jpg

Philip Schaeffer
July 25, 2016 8:09 pm

It’s rather depressing reading many of the comments here. We have a reasonable paper which is blatantly misrepresented by Mr Worrall, and when people who do have a clue what they’re talking about attempt to explain what the paper actually says, the cattle choir just moos louder about how everything is fraudulent and manipulated and we can’t know anything useful from the data anyway. Of course, none of those people noticed that the headline of this article is rubbish, as they would have if they had any idea what the paper really says.

Philip Schaeffer
July 26, 2016 5:16 am

Anthony, are you going to allow this blatantly false headline to stand on your site that has your name attached to it?
[Reply
1. You are in error. My name is not attached to it, Eric Worrall is the author.
2. The 19% number in the headline is accurate, and from the NASA press release.
3. Your interpretation of things here almost always differs, using some sort of guilt trip to persuade me to do something that is the responsibility of the author is misguided -Anthony]

JohnKnight
Reply to  Philip Schaeffer
July 26, 2016 4:46 pm

Yo cowboy,
“NASA” declares;
“A new NASA-led study finds that almost one-fifth of the global warming that has occurred in the past 150 years has been missed by historical records due to quirks in how global temperatures were recorded.”
If they had declared “…study finds that almost one-fifth of the global warming that MAY HAVE occurred in the past 150 years MAY HAVE been missed by historical records”, I might consider the headline Eric posted unwarranted. As it is, I see no latitude for denying that “NASA” is essentially calling for adjusting EFFECTIVE global warming observations. What else could they be implying?

Philip Schaeffer
July 26, 2016 5:19 am

I mean, it’s one thing to disagree with someone, but it’s another thing altogether to put words in their mouth that they didn’t utter.

4TimesAYear
July 26, 2016 1:14 pm

When the “data doesn’t matter” you get to fill in your own blanks?

July 26, 2016 2:57 pm

As W.M. Briggs says, “effects” in models are not effects; they are parameters, and subjectively set.

Reply to  Mark - Helsinki
July 26, 2016 2:58 pm

Essentially they create a false reality

Ken L.
July 28, 2016 12:24 am

It seems to me that they are continuing to operate under the same flawed assumption that has plagued this whole subject from the start – namely that all you have to do in order to accurately project climate in the future is retroactively tweak climate models to fit the past. While that might work in some more confined scientific disciplines, I do not see how it can achieve the accuracy claimed for a system as complex and chaotic as the Earth’s climate – about which we are still largely in the dark and figuratively grasping at straws. But then what do I know – I’m just a layman.