Guest essay by Eric Worrall
NASA researcher Mark Richardson has completed a study comparing historical observations with climate model output, and has concluded that the historical observations have to be adjusted to reconcile them with the climate models.
The JPL Press Release:
A new NASA-led study finds that almost one-fifth of the global warming that has occurred in the past 150 years has been missed by historical records due to quirks in how global temperatures were recorded. The study explains why projections of future climate based solely on historical records estimate lower rates of warming than predictions from climate models.
The study applied the quirks in the historical records to climate model output and then performed the same calculations on both the models and the observations to make the first true apples-to-apples comparison of warming rates. With this modification, the models and observations largely agree on expected near-term global warming. The results were published in the journal Nature Climate Change. Mark Richardson of NASA’s Jet Propulsion Laboratory, Pasadena, California, is the lead author.
The Arctic is warming faster than the rest of Earth, but there are fewer historic temperature readings from there than from lower latitudes because it is so inaccessible. A data set with fewer Arctic temperature measurements naturally shows less warming than a climate model that fully represents the Arctic.
Because it isn’t possible to add more measurements from the past, the researchers instead set up the climate models to mimic the limited coverage in the historical records.
The new study also accounted for two other issues. First, the historical data mix air and water temperatures, whereas model results refer to air temperatures only. This quirk also skews the historical record toward the cool side, because water warms less than air. The final issue is that there was considerably more Arctic sea ice when temperature records began in the 1860s, and early observers recorded air temperatures over nearby land areas for the sea-ice-covered regions. As the ice melted, later observers switched to water temperatures instead. That also pushed down the reported temperature change.
Scientists have known about these quirks for some time, but this is the first study to calculate their impact. “They’re quite small on their own, but they add up in the same direction,” Richardson said. “We were surprised that they added up to such a big effect.”
These quirks hide around 19 percent of global air-temperature warming since the 1860s. That’s enough that calculations generated from historical records alone were cooler than about 90 percent of the results from the climate models that the Intergovernmental Panel on Climate Change (IPCC) uses for its authoritative assessment reports. In the apples-to-apples comparison, the historical temperature calculation was close to the middle of the range of calculations from the IPCC’s suite of models.
Any research that compares modeled and observed long-term temperature records could suffer from the same problems, Richardson said. “Researchers should be clear about how they use temperature records, to make sure that comparisons are fair. It had seemed like real-world data hinted that future global warming would be a bit less than models said. This mostly disappears in a fair comparison.”
NASA uses the vantage point of space to increase our understanding of our home planet, improve lives and safeguard our future. NASA develops new ways to observe and study Earth’s interconnected natural systems with long-term data records. The agency freely shares this unique knowledge and works with institutions around the world to gain new insights into how our planet is changing.
For more information about NASA’s Earth science activities, visit:
Read more: http://www.jpl.nasa.gov/news/news.php?feature=6576
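For readers wondering what “set up the climate models to mimic the limited coverage” means in practice, here is a minimal sketch of the general idea: mask the model’s gridded output down to an observation-style coverage before taking the area-weighted global average. Everything in it (the grid, the warming pattern, the 70N coverage cut-off) is invented for illustration; it is not the study’s code or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 5-degree latitude/longitude grid of model warming (degC), with extra
# warming north of 50N standing in for Arctic amplification.
lats = np.arange(-87.5, 90.0, 5.0)
lons = np.arange(2.5, 360.0, 5.0)
lat2d = np.repeat(lats[:, None], lons.size, axis=1)

model_warming = (0.8
                 + 0.8 * np.clip((lat2d - 50.0) / 40.0, 0.0, 1.0)
                 + 0.05 * rng.standard_normal(lat2d.shape))

# Hypothetical observational coverage: assume the record simply has no cells
# poleward of 70N, the way the early historical record leaves the Arctic blank.
obs_coverage = lat2d < 70.0

# Area weights proportional to cos(latitude).
weights = np.cos(np.deg2rad(lat2d))

full_mean = np.average(model_warming, weights=weights)
masked_mean = np.average(model_warming[obs_coverage], weights=weights[obs_coverage])

print(f"model warming, full coverage:          {full_mean:.3f} C")
print(f"model warming, observation-style mask: {masked_mean:.3f} C")
```

Because the toy field warms fastest at high northern latitudes, dropping those cells pulls the masked average below the full-coverage average, which is the effect the press release describes.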
The abstract of the study:
Reconciled climate response estimates from climate models and the energy budget of Earth
Climate risks increase with mean global temperature, so knowledge about the amount of future global warming should better inform risk assessments for policymakers. Expected near-term warming is encapsulated by the transient climate response (TCR), formally defined as the warming following 70 years of 1% per year increases in atmospheric CO2 concentration, by which point atmospheric CO2 has doubled. Studies based on Earth’s historical energy budget have typically estimated lower values of TCR than climate models, suggesting that some models could overestimate future warming [2]. However, energy-budget estimates rely on historical temperature records that are geographically incomplete and blend air temperatures over land and sea ice with water temperatures over open oceans. We show that there is no evidence that climate models overestimate TCR when their output is processed in the same way as the HadCRUT4 observation-based temperature record [3, 4]. Models suggest that air-temperature warming is 24% greater than observed by HadCRUT4 over 1861–2009 because slower-warming regions are preferentially sampled and water warms less than air [5]. Correcting for these biases and accounting for wider uncertainties in radiative forcing based on recent evidence, we infer an observation-based best estimate for TCR of 1.66 °C, with a 5–95% range of 1.0–3.3 °C, consistent with the climate models considered in the IPCC 5th Assessment Report.
Read more (paywalled): http://www.nature.com/nclimate/journal/vaop/ncurrent/full/nclimate3066.html
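Two quick back-of-the-envelope checks on the abstract, using my own illustrative round numbers (only the 24% figure comes from the abstract itself): the 1% per year definition really does amount to a CO2 doubling after 70 years, and the usual energy-budget estimator of TCR scales one-for-one with the warming fed into it, so a record that under-samples the warming drags the TCR estimate down with it.

```python
# 1) "1% per year for 70 years" is indeed a CO2 doubling:
print(1.01 ** 70)                     # ~2.0

# 2) The usual energy-budget estimator of TCR is roughly TCR ~= F_2x * dT / dF,
#    so it scales one-for-one with the warming dT that goes into it.
F_2x = 3.7                            # W/m^2, approximate forcing from doubled CO2
dF = 2.0                              # W/m^2, illustrative historical forcing change (assumed)
dT_blended = 0.75                     # degC, illustrative warming from a masked/blended record (assumed)
dT_air = 1.24 * dT_blended            # abstract: modelled air warming ~24% larger than the HadCRUT4-style blend

print(F_2x * dT_blended / dF)         # TCR implied by the record "as measured"
print(F_2x * dT_air / dF)             # TCR implied once the coverage/blending bias is removed
```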
Frankly I don’t know why the NASA team persists in trying to justify their increasingly ridiculous adjustments to real-world observations – they seem to be receiving all the information they think they need from their computer models.
They have no shame.
It’s not obvious why “they”, the JPL, are involved in climate studies. Anybody know what the climate has to do with jet propulsion?
because SHUTUP look a squirrel…
Money, over $200b so far, available to any and all who will support global warming alarmism. Thank you Al Gore. Part of the permanent bureaucracy now.
JPL is a captive FFRDC (Federally Funded Research and Development Center) for NASA. See https://en.wikipedia.org/wiki/Federally_funded_research_and_development_centers
They provide technical services to NASA and exist for that purpose. That includes satellites and other items of interest to NASA.
Oh, technical services like providing excuses as to why their models don’t work? Nice. This may not work out quite the way they hoped.
I recently published an article on the spurious idea of “averaging” water and air temperatures on Climate Etc. I’m glad it has caused some reflection (even if they got the implications inside out).
https://judithcurry.com/2016/02/10/are-land-sea-temperature-averages-meaningful/
Now models are tuned primarily to reproduce the (apparently defective) water+air climate record of this period. So if the (air) warming was actually greater than their defective, mushy, non-scientific “averages”, then the tuning they did to reproduce it will be wrong. Their models will not be sensitive enough; they were tricked into being less sensitive by being tuned to a mistakenly small rate of warming.
So now they need to fiddle the fudge factors to accurately reproduce the air temperature record, not the mixed-up, unscientific land+sea “averages”. They can then run their MORE sensitive models. Project them forwards from 1990 and come back and tell us if it works any better than the current plan.
Sorry guys, you need to think this through. You can’t have it both ways. You can’t tune to land+sea and then start making excuses that we should only be looking at air temps. Apples and oranges as you say.
The other problem is that the worst discrepancy is not with the surface land+sea mish-mash but with the lower tropo satellite retrievals. And UAH TLT is air temperatures only.
Nice try fellas, but no cookie.
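For what it’s worth, here is a toy illustration of the air/water blending point that Greg and the press release are both leaning on; every number below is invented for illustration, and nothing comes from the paper or from HadCRUT4.

```python
ocean_fraction = 0.71                 # roughly 71% of Earth's surface is ocean
land_fraction = 1.0 - ocean_fraction

warming_air_over_land = 1.2           # degC, illustrative
warming_air_over_ocean = 0.9          # degC, illustrative
warming_sst = 0.8                     # degC, illustrative: the water warms a little less than the air above it

air_everywhere = land_fraction * warming_air_over_land + ocean_fraction * warming_air_over_ocean
blended = land_fraction * warming_air_over_land + ocean_fraction * warming_sst

print(f"air-temperature-everywhere warming:      {air_everywhere:.2f} C")
print(f"blended (air over land, SST over ocean): {blended:.2f} C")
```

No individual measurement is altered in this arithmetic; the blended figure simply answers a slightly different question than “how much did the air warm”.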
Some interesting points
Bitter&Twisted, July 24, 2016 at 8:01 am: “They have no shame.”
My thoughts exactly.
If the data doesn’t fit your conjecture, adjust the data.
Feynman was wrong.
“The data doesn’t matter. We’re not basing our recommendations on the data. We’re basing them on the climate models.”
~ Prof. Chris Folland ~ (Hadley Centre for Climate Prediction and Research)
And, NASA used taxpayer-owned data to adjust, then taxpayer-funded models to model, then taxpayer money to go back and adjust, and then taxpayer money to write us this tripe, and finally taxpayer money to publish it. This is an excellent example of the Ministry of Truth sucking up public money to change history and make it “truth”. How many thousands of other things has the Ministry of Truth made “true”? We will never know, but we have lost a huge amount of our history, scientific data, and scientific truth. Sadly, they do it in hidden ways so that the real data etc. can never be restored. The old truth is disappeared and the new “truth” lives on.
It really is awful.
“there are fewer historic temperature readings from there than from lower latitudes because it is so inaccessible… the researchers instead set up the climate models to mimic the limited coverage in the historical records.”
Mimic? Fabricate is the word. And how do these “researchers” validate their models? Here we go:
“With this modification, the models and observations largely agree on expected near-term global warming.”
What “observations”? The models now agree with the “researchers” expectations. Let’s not bandy words.
This is a travesty. That it was published in a journal that makes claims to be scientific is just horrible. Frighteningly manipulative. Blatantly biased.
Notice the study lacks double-blind controls. Of course they will find what they expected to find. There is even a name for this:
The observer-expectancy effect (also called the experimenter-expectancy effect, expectancy bias, observer effect, or experimenter effect) is a form of reactivity in which a researcher’s cognitive bias causes them to subconsciously influence the participants of an experiment. Confirmation bias can lead to the experimenter interpreting results incorrectly because of the tendency to look for information that conforms to their hypothesis, and overlook information that argues against it.[1] It is a significant threat to a study’s internal validity, and is therefore typically controlled using a double-blind experimental design.
https://en.wikipedia.org/wiki/Observer-expectancy_effect
To me, it makes no sense. Before this study, they tuned the models to past temperatures, as part of the process of refining the models. To do that, they could not have used more historical measurements from the Arctic than they use now, because there weren’t any more. So to “set up the climate models to mimic the limited coverage in the historical records” is to do things exactly the same way as before. I find it very difficult to disagree with Bartleby: “This is a travesty. That it was published in a journal that makes claims to be scientific is just horrible. Frighteningly manipulative. Blatantly biased.”. I applaud Bartleby for the restraint shown, and the temperate language used.
I think you are completely misreading the paper. As I understand it, the authors have taken the output from their global models and processed the data in the identical way that other researchers have processed the measured historical temperature data, and have found good agreement. Note that in this case there is no adjustment to the raw data and similarly no adjustments to the modelled data, but rather only in the method used to calculate an average temperature from a climate model.
Geronimo – if you are correct, then they got the conclusion precisely backwards. When the model results are limited to a form that can be checked by observations in an apples-to-apples comparison, as they put it, the model results are not as alarming as what the IPCC and others try to scare us with. They’ve simply discovered the reason why models overstate the actual observed temperature trend. What they do with this is pull a bait-and-switch. You’ll notice that the press release cleverly glosses over the implication that their new model output shows a lot less warming than the published IPCC models – that’s how they get their agreement with observations. Having established that the model output CAN be made to agree with real world observations by forcing the model to produce only the kind of output that can be verified by observations, they use that to buttress the IPCC’s “authoritative” assessment reports of impending calamity based on the assumptions of the model that can’t be verified by observations.
Take, for instance, the example given with respect to Arctic coverage. Here’s what they say: “The Arctic is warming faster than the rest of Earth, but there are fewer historic temperature readings from there than from lower latitudes because it is so inaccessible. A data set with fewer Arctic temperature measurements naturally shows less warming than a climate model that fully represents the Arctic.”
This is nonsense. It’s only true if the model shows more warming in the regions not represented by actual station data than the observed trend in the regions that do have station data. The true warming rate in remote regions having no temperature record is just as unknown today as it was yesterday. At the end of the day, all this “study” means is that the warming rate shown by the IPCC models relies on information that can’t be verified by observations, so we’re just going to have to take the IPCC models on faith, even though the historical data does not support a conclusion of an alarming warming rate.
It’s far worse than awful–it’s criminal. Criminals extort and lie to advance their cause, and this is exactly that.
“The Arctic is warming faster than the rest of Earth, but there are fewer historic temperature readings from there than from lower latitudes because it is so inaccessible. A data set with fewer Arctic temperature measurements naturally shows less warming than a climate model that fully represents the Arctic.”
Perhaps it’s just me that’s a trifle obtuse – but then, I’m not a climate “scientist”, I’m merely a somewhat ancient engineer.
That seems to assert that the more temperature measuring points there are, the higher the average temperature is going to be.
But they have loads of funding as they produce what their political masters want.
NASA Lies in climate data https://www.youtube.com/watch?v=Gh-DNNIUjKU
Well, observations of the system will only match simulations of the model when the model is an accurate model of the real system. It also is helpful; very helpful; excuse me, it’s absolutely necessary, that the observations of the system comprise a valid sample of a band-limited system.
G
If this group tried to replicate NASA’s achievement of putting a man on the moon, I believe their results would only be a lot of dead astronauts.
Or if the Earth had 2 moons, they would have landed on the wrong one!
My goodness! Aren’t NASA’s “new” concerns about “not enough temperature measurements in the Arctic” (and various other uninhabited regions) a justification for sending up weather satellites? Oops, we have two already, not to mention weather balloon data.
The satellite measurements presumably give equal weight to all these uninhabited areas.
Well, the really smart people at NASA are still hunting down the facts that show how Muslims have provided massive value to the growth of the U.S., as directed by B. Hussein Obama.
I’m sure it’s pure coincidence that the adjustments they make land “observations” in the middle of averaged model output.
“These quirks hide around 19 percent of global air-temperature warming since the 1860s. ”
Ummm, didn’t they “adjust” the old temperatures LOWER and RAISE the more recent temperatures a few years back ???
yep….
…and
“The final issue is that there was considerably more Arctic sea ice when temperature records began in the 1860s”….not
They can fix all of this with a new theory that the freezing and boiling points of water have been increasing with time. No one noticed because we’ve been ignorantly using them to calibrate our thermometers.
Yeah, sounds like another prediction that is then assumed to be true, similar to “no increase in outgoing radiation” or flooded Pacific islands.
Isn’t this sort of back asswards science? Fudge the data to fit a model?
Is this the sort of thing Bill Nye talks about?
Well not really, there is no evidence the models are wrong. Since none exists, they are right and the observations must be wrong, so the biases that make them wrong need to be found and removed.
Makes perfect sense if your funding depends on being right.
It actually seems that their funding is inversely correlated with the accuracy of their predictions.
Only a cynic would regard the almost incessant failure of the models to predict temperature as invalidating them.
I’ve realised where those predicted missing 50 million climate refugees are: they must be at the bottom of the ocean with the increased heat!
No, makes perfect sense if your funding depends on telling them what they want to hear.
Used to be—before the data was so uncooperative.
“95% of the models agree. Therefore the observations must be wrong.”, Roy Spenser (Tongue planted firmly in cheek.)
Actually it is 97% of the models that agree. This 97% modelling consensus proves that the historical records were deliberately mismeasured at the time to try and hide the horrors behind evil Man Made Global Warming. Thank God Mann et al. were too smart for them.
The 95% is part of a quote from Dr. Spenser. It was repeated verbatim.
Spenser’s words needed to be adjusted to fit the models. 😉
Ummmm…It’s Dr. Roy SpenCER. Try to get the name spelled right if you are going to quote him.
Thanks. If I could type, I might be dangerous.
Eh, it’d be backwards if that was what was going on, sure. But it’s not.
They’re saying that what we measured in real life are not exactly the same metrics as what had been reported from the models. We’d been partly comparing apples and oranges. For instance, the models can report their temperatures for the entire surface of the Earth, whereas in real life, our measurements in the Arctic have historically been a bit sparse. You may need to account for that, either by improving your measurements there, or by dropping that region out of what you report from the models.
So, what’s happening is just that they’re saying “hey, let’s make sure we’re doing a fair comparison”. And good comparisons are a good thing in science; you want to make sure that you’re being as true as possible to the data and what it represents.
You have to try pretty hard to skew this into bad science.
“You have to try pretty hard to skew this into bad science.”
No, you don’t. Fabricating data is not good science. It’s a far cry from saying the initial measurements are bad and there’s no way to correct that. That’s good science. Bad science involves saying we don’t have any data so we’ll make some up, and look, it fits our model! Surprise surprise! Aren’t we just brilliant? Who’d have thunk it!?
Then they should re-work the models to see if they match when only using the valid data sets. They aren’t doing that either. Just fabricating some data up north, and claiming the models match.
Windchaser —
They are being true to the data???? THEY ARE CHANGING THE DATA TO MAKE IT FIT THE MODELS!!!!!
Man is the measure of all things that be
Post Normal Science
Is funding compliance
So I got the data in me!
Do you think we are living in the Matrix — where the computer model creates the world we think is real??????? The computer model determines what reality is???
Windchaser you ought to change your name to Fartseeker. You obviously enjoy that which smells to high heaven.
Eugene WR Gallun
How can you drop a region from a “global” model and still call it a global model?
What they are saying is tantamount to: we got bad data for the world except NY. So we put in only NY data and adjusted the model (really, how do you adjust a model w/o making it a new model?) and our NEW model matches the observed non-high-warming temperatures for NY; therefore the OLD model, which uses the bad data for input and outputs a higher-than-observed trend for the world, is correct and the bad world temperature inputs should be adjusted to show more global warming.
Except, Bartleby, no one is fabricating data. They’re seeing what would happen if we applied the same mask to model temperatures that we have in the real-world data. And the result is that there’s about a 20% change, and much better agreement among the models and surface measurements.
See this bit in the press release: “Because it isn’t possible to add more measurements from the past, the researchers instead set up the climate models to mimic the limited coverage in the historical records.”
…did you actually read the paper?
Look, you’ve got a difference between how two different measurements are made – one in real life, one in the models. You can fix that from either side, by changing what you measure in real life so that it’s closer to the model outputs, or doing the same for the model outputs. Or some mix of both.
But the important thing is that you compare apples-to-apples. That it’s a fair comparison. That if some area is left out of one set of data, it’s left out of the other one as well. At least if you intend to compare them.
And it turns out that when you do do a fair comparison, when you adjust for these differences in how the measurements are performed, there’s about a 20% change in the model outputs. And then they match the surface measurements.
No data is fabricated in this process; read the paper and you can see that. They just took the model outputs for temperature that actually corresponded to what we have from real-world measurements. Is there some scientific problem with that, with doing a fair comparison?
let’s make sure we’re doing a fair comparison
=================
You can’t, because the records don’t exist. They are instead fabricating data to compare with the model; they even subconsciously admit their error:
“With this modification, the models and observations largely agree on expected near-term global warming.”
The observer-expectancy effect (also called the experimenter-expectancy effect, expectancy bias, observer effect, or experimenter effect) is a form of reactivity in which a researcher’s cognitive bias causes them to subconsciously influence the participants of an experiment. Confirmation bias can lead to the experimenter interpreting results incorrectly because of the tendency to look for information that conforms to their hypothesis, and overlook information that argues against it.[1] It is a significant threat to a study’s internal validity, and is therefore typically controlled using a double-blind experimental design.
https://en.wikipedia.org/wiki/Observer-expectancy_effect
No, they aren’t. Read the press release carefully. Where does it say that they’re changing the data?
It doesn’t. They only masked the model outputs, so that the models are “measuring” the same thing as our real-life measurements.
They literally tell you this. See this quote: “Because it isn’t possible to add more measurements from the past, the researchers instead set up the climate models to mimic the limited coverage in the historical records.“
Windchaser:
Yes, let’s no longer pay attention to the real data behind the curtain. Just listen to the booming voice of our computer models.
You say “no one is fabricating data.” Of course they are. Say a scientist honestly thinks there is a relationship between X nutrient and cancer prevention, and one day just writes down what he thinks might happen to each of 500 individuals given a daily dose of the nutrient and 500 control subjects, i.e. he just posits the theoretical results of his idea. Subject 1 gets no cancer, subject 2 gets cancer after 15 years, subject 3 gets cancer in 9 years, etc. where the end result is that the imaginary dosage group has a 25% less incidence of cancer than the imaginary control group.
Then, the scientist says “Hey, why should I go through the time and expense of an epidemiological study. I’ll just pretend that what I wrote down was an actual study, calculate out all the statistics, and submit it.” Certainly, everyone would agree that this is a textbook case of fabrication, even if the scientist was absolutely sincere in his belief that X inhibits the incidence of cancer.
But what if the scientist got a computer to do the dirty work for him? Does it cease to be fabrication at that point? Say the scientist has a physical theory as to how the body reacts to substance X so as to reduce the risk of cancer by 25% and just programs a computer to implement this physical process, then calibrates the results to known background cancer rates. The computer spits out “runs” for 500 virtual people who took X and 500 virtual people who didn’t. The scientist collates the results with all the relevant statistics and submits it to a journal for publication, being perfectly upfront about this being a computer simulation. Setting aside whether the scientist avoids the ethical consequences of fabrication by being upfront about the methodology – why would the first example be “fabricating data” but the second example not?
The IPCC computer runs do not produce “data.” They never have and they never will. Yet your garden-variety climate scientist treats it as data. They average the computer runs of a single model, despite the fact that this average has no real world significance at all (this average only telling you the expected result of another X computer runs of that model, as opposed to telling you anything about the real world). They average together the runs of different models (no meaning at all) so they can speculate on what the long-term consequences might be to the hitherto-unknown Western Pin-Striped Marmot that inhabits the remote rocky slopes of Mt. Rainier. They compute ridiculously meaningless “95% confidence intervals” etc. It’s all a joke. A functioning computer can only do precisely what it was told to do by its programmer – no more no less.
The study referenced in this post implicitly concedes that the results of the IPCC computer models, as presented by the IPCC, do not accurately reflect what the observations indicate. It also implicitly concedes that if the computer output of the IPCC models were modified to reflect only information that could be compared to actual observations, then like the observations, the computer models would ALSO show a warming rate much less than that predicted by the IPCC.
Any reasonable person would therefore conclude that there is no SCIENTIFIC evidence, i.e. evidence experimentally confirmed against observations, of dangerous warming. But instead the study, or at a minimum the press release accompanying the study, suggests the opposite conclusion – that because the component of the computer output that can be verified against actual observations matches the relatively low warming rate of the observations, that this somehow (illogically) quantitatively supports the higher warming trend of the non-adjusted “authoritative” IPCC model warming rates, even though this higher warming rate CAN’T be quantitatively verified against observations.
This is propaganda, masquerading as science.
Windchaser —
Here is what they are doing. They claim that past temperature data is flawed, failing to show enough warming. They specify what they think those flaws are. Then they take their models and apply the same flaws to the model output. Suddenly their model output matches what the measured temp data predicts.
They claim that their models only match the measured data if they DELIBERATELY FLAW THEIR MODELS!
They start out with the assumption that their models are FLAWLESS and claim to demonstrate that only if their models are DELIBERATELY flawed to match the flaws they claim exist in the temperature record will both make the same predictions.
In their twisted minds, it then follows that this proves their models truly are flawless because they only produce poor results if you deliberately flaw them.
But the models have never ever worked!! They are intrinsically flawed within themselves. The flaws in the models have nothing to do with the flaws they claim exist in past temperature data. THIS IS REALLY JUST SIMPLE MISDIRECTION. THEY MISDIRECT BY SAYING: LOOK AT THE FLAWS IN THE TEMPERATURE RECORD! DON’T YOU DARE LOOK AT THE DIFFERENT FLAWS IN OUR FLAWLESS MODELS!
DISGUSTING.
Eugene WR Gallun
Windchaser
Wow, that was quick; caught already. By the way, the new name is dumb, and as I said before, you have to change your writing style when you change your name.
Climatology is pseudoscience – or junk science. They adjust the data to fit the model.
or you could use satellite data instead, which treats the earth’s atmosphere reasonably consistently.
Do these guys have any idea of how the scientific process is supposed to take place?
That’s a rhetorical question, right?
I guess we skeptics are all “quirks” too. “They’re quite small on their own, but they add up in the same direction,” “We were surprised that they added up to such a big effect.”
The corruption of the left knows no boundaries irrespective of the field of endeavor one examines.
I know JPL are rocket scientists but that doesn’t necessarily make them of the left.
Assuming measurements are wrong so as the pet theory isn’t disproven is not a political decision.
It’s an ethical decision.
It’s an ethical decision to be unethical.
Yes, it is an ethical decision (to be unethical).
Sounds reasonable. If the modelling isn’t working, then change the physical historical data to the point needed to agree with the models.
Those NASA guys have a sense of humour. Release an April Fools’ Day article 3 months after April Fools’ Day.
Where in the press release does it say they’re changing the physical historical data?
It doesn’t. Read it carefully. You’re being misled.
What they did was mask the model outputs, so that the areas used in the average were the same areas that we have in real life. In other words, they’re just checking that we’re measuring the same thing in a model/reality comparison.
troll alert
I think Ministry of Truth trooper rather than troll.
Isn’t this quirk thing just like the claim that the skeptic scientists who are retired are senile and neither recall the past properly or comprehend progressive science?
Durnit… neither-nor.
The Arctic is no warmer now than the 1930s
And what about the Antarctic?
Well, according to RSS3.3 TLT it is on average quite stable:
http://data.remss.com/msu/graphics/TLT/plots/RSS_TS_channel_TLT_Southern%20Polar_Land_And_Sea_v03_3.png
It gets a bit cooler at some places (Halley station for example), but a bit warmer on the Peninsula and within West Antarctica.
What is the science behind this statement?
There is none.
BEST have the same bullsh*t idea.
Yes, Mosh posted here a few weeks ago that every estimate of Arctic warming underestimates Arctic warming, lol.
Artificial data is needed.
…to replace the real estimates.
– As ice area decreases (as it’s been doing), the surface absorbs more sunlight and it warms more. That’s a feedback.
– Cowtan and Way used satellite measurements to show that the Arctic is warming faster than the average.
– Actual ground measurements show more warming as you get closer to the Arctic, at least before they become too sparse to use.
roflmao..

Apart from the El Nino spike in Jan/Feb, there has been ZERO warming in the satellite record for the NoPol for some 20 years.
And before the 1998 El Nino, it was actually COOLING !!
The fact that the Arctic is warming faster than the rest of the planet is evidence that the warming is not caused by increased levels of CO2. Recent satellite images show CO2 does not vary by more than about 10 ppm from place to place at any given time. The greenhouse effect of CO2 is mostly masked by water vapor, and water vapor is scarce at the poles, so the theory predicts more warming at the poles. But UAH temp data show that most of the warming is at the North Pole, and virtually none of it at the South Pole. This weighs against CO2 being the primary cause of Arctic warming.
As for the apples to apples comment, it would be far more scientific to change the model output to fit the observed data set (average of ocean and land surface temps) rather than to use an untested model to infer that observations were flawed.
AndyG55 on July 24, 2016 at 1:34 pm
What about slowly but surely stopping to manipulate your readers, AndyG55?
You know perfectly well that the ENSO event El Niño is a part of the climate, and that you can’t extract it out of any temperature measurement. Why don’t you then extract La Niña or even volcano eruptions as well?
Stop splitting UAH’s temperature records all the time to cherry-pick what you need, and you then will have to accept that UAH 6.0beta5 TLT’s trend for the North Pole from January 1979 till June 2016 is 0.246 ± 0.023 °C per decade.
And that the trend for this region measured by RSS3.3 TLT for the same period is 0.339 ± 0.030 °C per decade:
http://data.remss.com/msu/graphics/TLT/plots/RSS_TS_channel_TLT_Northern%20Polar_Land_And_Sea_v03_3.png
That’s the truth, AndyG55; you can’t change it.
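As an aside, figures like “0.246 ± 0.023 °C per decade” are ordinary least-squares trends fitted to monthly anomaly series, with the ± being roughly the standard error of the slope. A minimal sketch with synthetic data, not the actual UAH or RSS files:

```python
import numpy as np

rng = np.random.default_rng(1)

n_months = 450                               # roughly Jan 1979 .. Jun 2016
years = 1979.0 + np.arange(n_months) / 12.0
true_trend = 0.25                            # degC per decade, illustrative
anomalies = (true_trend * (years - years[0]) / 10.0
             + 0.2 * rng.standard_normal(n_months))

# Fit anomaly = a + b * decades; b is the trend in degC per decade.
decades = (years - years.mean()) / 10.0
b, a = np.polyfit(decades, anomalies, 1)

# Naive standard error of the slope; published figures usually also correct
# for autocorrelation in the monthly residuals, which widens the range.
resid = anomalies - (a + b * decades)
se_b = np.sqrt((resid ** 2).sum() / (n_months - 2) / ((decades - decades.mean()) ** 2).sum())

print(f"trend: {b:.3f} +/- {se_b:.3f} degC per decade")
```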
That’s progressive science. You use phrases like “naturally shows” as an authority implying that anyone who questions your claim would have to be a whack.
Isn’t this a clear admission that the models are clearly wrong and not fit for predictive purpose? Changing data to verify a model used to be regarded as academic fraud. All hypotheses can be validated using such reasoning. Can everything conceived be true? GK
They prove that what they call “temperature anomalies” is not data. It is what they decide.
IIRC, there was some tortured soul in the old, and unlamented, Soviet Union who said of his government that they know the future with perfect clarity, but they cannot predict the past.
These “scientists” can’t be charged with anything? Howzabout outright fraud and corruption? I mean. This guy admits it right in the “paper”!! Has science really sunk so low? To steal a phrase from SDA, this isn’t your grandmother’s science.
RICO, anyone?
I’m going to be so very happy when Mr. T cuts all this funding and reads them the riot act.
This is all about to come to a crashing end and it couldn’t be soon enough.
well….at least they admitted they don’t have enough data to even do this
“A data set with fewer Arctic temperature measurements naturally shows less warming…”.
Not necessarily at all, since most stations in the Arctic are located in more accessible areas where ice has partly melted and/or is in the process of melting, especially during summer; extrapolating these areas to the entire Arctic means one may introduce an ice-free/ice-melting warming bias over areas that are permanently covered by ice.
This means, in effect, that having a dataset with more Arctic temperature measurements would actually show less warming, not more, since areas permanently covered by ice would show less warming than those that are partly or seasonally ice free. Naturally, those areas which become more ice-free during summer warm faster under a regional warming trend.
This major assumption undermines the entire article. In fact, this effect is a major reason that ice ages end so rapidly; the key is the amount of ice/snow that melts in summer. Extrapolating these areas to the whole Arctic is comparing apples to oranges, and therefore the article’s conclusions are invalid.
If they had no measurements at all the Arctic would be on fire!
“This means, in effect, that having a dataset with more Arctic temperature measurements would actually show less warming, not more”
And this is how it begins. First you accept the idiotic notion that we have no instrument readings for the Arctic, which isn’t true. Since 1979 we’ve had satellite records of the entire planet. The argument that there are insufficient instruments is completely false.
There are no reliable instrument records from anywhere that date back to before 1920.
Bartleby, that’s a very good point. We have nearly 40 years of the best possible observations. The satellite temperature records of the lower troposphere do show that the Arctic is warming much faster than the rest of the world. CO2 is well mixed in the troposphere. Why would CO2 cause the Arctic to warm dramatically but not the Antarctic to hardly warm at all? That I think is one of the best arguments against CO2 being the main driver of recent “global” warming. Global warming isn’t global at all. It’s mostly Northern Hemisphere and most of that is Arctic warming.
Thomas, surely you mean
“Why would CO2 cause the Arctic to warm dramatically but the Antarctic to hardly warm at all?”
or
“Why would CO2 cause the Arctic to warm dramatically but the Antarctic to not warm at all?”
“Hardly” is restrictive, so “not hardly” acts as a double negative.
“The Arctic is warming faster than the rest of Earth, but there are fewer historic temperature readings from there than from lower latitudes because it is so inaccessible.”
But if there are not many temperature readings, how can they be so sure that the Arctic is warming faster than the rest of Earth?
“a climate model that fully represents the Arctic.”
Oh. That’s how.
A very good rationale for cooking temperature records/sarc
Why would folks have exaggerated the high temps in the past? Would they not also exaggerate how cold it was in the winters? Why is homogenization of the pre-satellite era needed when the recorded satellite data shows normal interglacial warming?
If the models show there should have been more warming than the observations indicate, why then the only logical conclusion is that the actual values must be wrong! Flawless logic. NASA is officially dead.
I’m relieved there are still satellite and radiosonde global temp datasets available that aren’t being corrupted by government rent seekers…
NASA scoundrels have to now corrupt the raw data even more to offset the coming La Niña cooling.
What the pro-CAGW scoundrels haven’t done is explain why satellite lower troposphere global temps show so much less warming than corrupted surface temp datasets, especially since lower troposphere temps should be increasing 20~40% faster than surface temps, because that’s where all the CO2 induced downwelling LWIR is supposed to be originating.
Out of desperation, I’m sure GISTEMP and HADCRUT4 datasets will soon adopt these new “corrections” just like they did with the “pause-busting” KARL2015 paper….
Unscrupulous scoundrels don’t get to keep adjusting empirical evidence to match their hypothetical model projections… That’s not science, that’s called fraud…
I’m not sure you know anything about radiosonde temperature records! Because if you did, you wouldn’t have written that.
Here is a plot of several temperature series and their respective trends, all within the satellite era (1979 – today):
http://fs5.directupload.net/images/160725/6obxu8d5.pdf
The pdf is arbitrary scalable so you can see even smallest details.
UAH6.0beta5 TLT (blue) has a much lower trend than not only NASA GISS TEMP land+ocean (plum), but also than RSS 4.0 TTT (green).
And UAH’s trend is also much lower than those of RATPAC B radiosonde measurements at pressure levels of respectively 700 (yellow) and 500 hPa (red).
Here are the trends in °C / decade, for 1979-2016:
– UAH TLT: 0.122
– RATPAC B 700 hPa: 0.166
– RATPAC B 500 hPa: 0.167
– GISS l+o: 0.171
– RSS TTT: 0.177
– IGRA 500 hPa: 0.613.
Let us exclude the IGRA radiosonde dataset: it’s raw data out of which a huge number of outliers must be excluded. That process is called “homogenization”, and it originates with a group of scientists around Prof. Leopold Haimberger (Vienna University, Austria). This group is a leader in such tasks (see the papers concerning RAOBCORE and RICH), well acknowledged by e.g. John Christy / UAH.
If, as you claim, radiosondes are trustworthy, how is it then possible that UAH’s trend is only about 70% of that of the two RATPACs, even though UAH’s absolute temperatures (about 264 K) point to an altitude of about 3.5 km and hence to a pressure level of about 650 hPa?
UAH’s trend would in fact be comparable to that of radiosondes measuring temperatures at a pressure level around 300 hPa, i.e. about 6 km higher than expected.
And if radiosondes are so trustworthy, how is it then possible that their trends coincide with those of both the crazy GISS and the “karlized” RSS 4.0 TTT?
Any idea?
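The “264 K is roughly 3.5 km and roughly 650 hPa” step can be checked against the textbook standard atmosphere; the sketch below uses the ISA lapse-rate and pressure formulas with the 264 K figure quoted in the comment, so treat it as a ballpark check rather than anything about the actual weighting function of the UAH channel.

```python
# International Standard Atmosphere: T(z) = T0 - L*z, p(z) = p0 * (1 - L*z/T0)**5.2559
T0, p0, L = 288.15, 1013.25, 0.0065    # K, hPa, K per metre

T_channel = 264.0                      # K, the mean brightness temperature quoted above
z = (T0 - T_channel) / L               # metres
p = p0 * (1.0 - L * z / T0) ** 5.2559  # hPa

print(f"altitude ~ {z / 1000:.1f} km, pressure ~ {p:.0f} hPa")
# -> roughly 3.7 km and ~640 hPa, consistent with the ballpark in the comment.
```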
Oh what a tangled web they have woven.
Brilliant. Now if NASA can just build a computer model in which the space shuttles Columbia and Challenger don’t catastrophically fail through faulty design, causing the deaths of fourteen brave astronauts, we can rewrite that history as well. Or do we perhaps need to help those poor folks at NASA find their dusty and long-misplaced science rule book so they can have a quick peek at the first few pages?
No need, already done probably. By NASA’s models, perhaps only 1 in several billion shuttles fails, which proves that it is the small amount of observations plus really bad luck that makes us think that they are unsafe. If we kept flying them for millennia, we would see no further failure, as models cannot be wrong.
Yeah I know it sounds unbelievable and absurd, but this is what they are telling us about the temperatures. We couldn’t find the really terrifying increase only because we didn’t look enough. So observations have to be adjusted up to reflect the fact that we are very bad at finding heat. Go figure.
“quirks”? I must have missed that term while I was in science classes.
Perhaps it’s some kind of Pokémon.
Translation: “Our models don’t match the historical record. So we must adjust the historical record to save our models.”
No, despite what the headline says, there is no mention of adjusting data at all. In fact, they go the other way. They show that if you process model output temperatures in the same way that HADCRUT 4 averages global measured temperature, you get a similar result.
Since the majority of temperature data comes from land-based stations that haven’t been covered with ice during the instrumental period, I’d be interested in knowing how well the models match the observations if limited to just these areas. I noticed that all the “quirks” they mentioned related either to ocean measurements or to land with ice on it, but the conclusion glibly stated that the “global” trends matched when the model output was somehow forced to an apples-to-apples comparison. The suspicious part of me wonders whether this matching between the adjusted model data and the observations holds true for the regions of the globe where no “glitches” were present in the first place, or whether the results are due to the counterbalancing of mismatches in glitched areas vs. non-glitched areas, so that nothing really matches except the average.
Kurt,
The main discrepancy, which has been discussed for a while, is the use of SST in the indices vs air temp in the models. That affects about 2/3 of the area. It’s not easy to correct for, because the model has to be interpolated to infer temperatures at the right depth. But I expect that is what they have done.
The problem with the region-to-region comparison that you envisage is that those regions are where there are the fewest observations. HAD4 treats them by omission, and I expect that this study makes similar omissions from the GCM data. So it’s the wrong place to try to compare.
But without that comparison, you’re left to wonder whether the match they found was just an accident. And if the conversion of air temperature to sea-surface temperature requires any kind of inference, because the actual relationship is not known, then the fact that the re-interpreted model output matches the observational temperatures is because it was mathematically forced to. It’s not real.
“requires any kind of an inference, because the actual relationship is not known”
Anything that you quote from a model is an inference. But the relationships are known, within the model, and all that is asked for here is that the model makes its best estimate of SST (at measurement depth) rather than the air temperature above. That is just a matter of apples vs apples. Getting the right model estimate (interpolate) in no way forces agreement with observation.
“But the relationships are known, within the model, and all that is asked for here is that the model makes its best estimate of SST (at measurement depth) rather than the air temperature above.”
A computer can’t make any kind of “best estimate.” It can only implement instructions. When I referred to an unknown relationship I was referring to the quantified relationship, not just some kind of formula with a few parameters that need to be estimated to produce an actual result. This estimating is done by people, not the model, and as long as the people get to take a general relationship and play around with quantifying that relationship, they get to fit the output to match the data.
“A computer can’t make any kind of “best estimate.” “
I said the model makes the estimate. A GCM produces a gridded representation of a modelled continuous solution field. You can and should interpolate to estimate results at non-grid points. If you want to compare with GCM SST results, you should interpolate to give the best estimate at that location – not use a result at a different location.
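A minimal sketch of the interpolation being described: take the model’s values at its fixed grid levels and estimate the temperature at the observation depth, rather than grabbing the nearest level or the surface air temperature. The depths, temperatures and sampling depth below are all invented for illustration.

```python
import numpy as np

# Model ocean-temperature profile on fixed grid levels (values invented for illustration).
model_depths = np.array([0.0, 5.0, 10.0, 25.0, 50.0])    # metres
model_temps = np.array([18.4, 18.3, 18.1, 17.6, 16.8])   # degC

measurement_depth = 3.0   # metres; an assumed near-surface sampling depth

sst_at_depth = np.interp(measurement_depth, model_depths, model_temps)
print(f"interpolated model sea temperature at {measurement_depth} m: {sst_at_depth:.2f} C")
```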
I think we’re talking past each other here. I’m assuming that the observed historical sea surface temperatures were used to calibrate the models. That may or may not be correct, but given that the data is available I’d be surprised if the modelers didn’t do that. If the model is based on air temperature above the ocean, I’m therefore presuming that during this calibration step, the modelers presumed some relationship between the air temperature above the ocean and the water temperature at the surface, or maybe the temperature at some depth beneath it. The model is also going to have to be programmed to define a relationship between the surface temperature and temperature at depths beneath it, with a human-chosen curve for the fall-off – linear, logarithmic with a chosen weight, etc.
The same is true of the observed temperature data over land. For regions that have data, the model is going to use the historical data to calibrate the model. So when the models are limited to air temperatures for only those land areas that have historical temperature data, and are modified to compute sea surface temperature at measurement depths (which I’m also presuming were used to calibrate the historical air temperatures above the ocean), it shouldn’t come as a surprise that there is general agreement between the estimates of TCR. It’s also not that relevant to me, since the differences between the model and the observations are always going to be due, significantly, to the model’s assumptions about areas of the Earth for which no data is available.
A model is an abstraction – a mathematical construct. It can’t infer anything. All the inferences are pre-programmed (or built into it if you prefer).
” I’m assuming that the observed historical sea surface temperatures were used to calibrate the models.”
People keep imagining that GCMs are some kind of curve fitting. They aren’t. They are solutions of the discretised Navier-Stokes equations for fluid flow, augmented with solutions for radiative and other heat transfer, water transport etc. They aren’t calibrated from observed temperatures. There is some degree of parameter testing, or even fitting, but that is for a few tens of parameters at most. The models have millions of variables.
“All the inferences are pre-programmed (or built into it if you prefer).”
Models produce a huge array of output numbers, for every timestep and many thousands of points in space, on a grid. From that, you have to make inferences. In particular, you have to infer statistics that you can compare with statistics from observation.
The article at the following link says that the models are in fact calibrated (or curve-fitted) to match observed temperatures:
https://judithcurry.com/2013/07/09/climate-model-tuning/
Though it’s true that they don’t tune it to the fine details like ocean surface temperatures as I was assuming above, they do curve fit the models to make sure that they fit the observed global temperature average. In particular, the article says that no model that doesn’t show 20th century warming will get published.
It’s worth reading those extracts from Mauritsen more closely. Eg
“Yet, the span between the coldest and the warmest model is almost 3 K, distributed equally far above and below the best observational estimates, while the majority of models are cold-biased. “
…
“To parameterized processes that are non-linearly dependent on the absolute temperature it is a prerequisite that they be exposed to realistic temperatures for them to act as intended.”
…
“Formulating and prioritizing our goals is challenging. To us, a global mean temperature in close absolute agreement with observations is of highest priority because it sets the stage for temperature-dependent processes to act. For this, we target the 1850–1880 observed global mean temperature of about 13.7C.”
This is typical of what they tune for. As he says, the models track the main features of the temperature progression. But they have this disparity in absolute value, which messes up radiative depending on T^4 etc. So they tune for global average mean temperature. This doesn’t mean they try to improve coherence with the temperature graphs that you mention elsewhere – those in any case are anomaly averages, which have no place in a GCM. They tune to get close to a match over just one period. A single number for temperature.
Ah Ha! Another Ministry of Truth trooper joins the comments.