Climate Science Double-Speak

A Quick Note from Kip Hansen

 

A quick note for the amusement of the bored but curious.

While in search of something else, I ran across this enlightening page from the folks at UCAR/NCAR [The University Corporation for Atmospheric Research/The National Center for Atmospheric Research — see pdf here for more information]:

What is the average global temperature now?

We are first reminded that “Climate scientists prefer to combine short-term weather records into long-term periods (typically 30 years) when they analyze climate, including global averages.”  As we know, these 30-year periods are referred to as “base periods,” and the different climate groups producing data sets and graphics of Global Average Temperatures often use differing base periods, something that must be watched carefully when comparing results between groups.
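Because base periods matter so much, here is a minimal sketch (with invented yearly values, not real data) of how the very same absolute temperature yields different anomalies under different base periods:

```python
# Invented yearly global means (deg C) -- illustration only, not real data.
temps = {1951: 13.9, 1980: 14.1, 1981: 14.2, 2010: 14.4, 2015: 15.0}

def base_mean(start, end):
    """Mean of the available yearly values falling inside a base period."""
    vals = [t for yr, t in temps.items() if start <= yr <= end]
    return sum(vals) / len(vals)

base_51_80 = base_mean(1951, 1980)  # 14.0
base_81_10 = base_mean(1981, 2010)  # 14.3

# Same 2015 absolute temperature, two different anomalies:
print(round(15.0 - base_51_80, 2))  # 1.0
print(round(15.0 - base_81_10, 2))  # 0.7
```

The absolute temperature hasn’t changed; only the yardstick has, which is why the base period must be checked before comparing anomaly charts from different groups.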

Then things get more interesting, in that we get an actual number for Global Average Surface Temperature:

“Today’s global temperature is typically measured by how it compares to one of these past long-term periods. For example, the average annual temperature for the globe between 1951 and 1980 was around 57.2 degrees Fahrenheit (14 degrees Celsius). In 2015, the hottest year on record, the temperature was about 1.8 degrees F (1 degree C) warmer than the 1951–1980 base period.”

Quick minds see immediately that 1.8°F warmer than 57.2°F is 59°F [or 15°C], which they simply could have said.
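The arithmetic is easy to check with a throwaway snippet (standard unit conversions, nothing from UCAR):

```python
def f_to_c(f):
    """Convert a temperature in degrees Fahrenheit to Celsius."""
    return (f - 32) * 5 / 9

print(57.2 + 1.8)    # 59.0 (deg F)
print(f_to_c(59.0))  # 15.0 (deg C)
# Note: a 1.8 F *difference* converts as 1.8 * 5/9 = 1.0 C (no 32-degree offset)
```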

UCAR/NCAR goes on to “clarify”:

“Since there is no universally accepted definition for Earth’s average temperature, several different groups around the world use slightly different methods for tracking the global average over time, including:

    NASA Goddard Institute for Space Studies

    NOAA National Climatic Data Center

    UK Met Office Hadley Centre”

We are told, in plain language, that there is no accepted definition for Earth’s average temperature, but assured that it is scientifically tracked by the several groups listed.

It may seem odd to the scientifically minded that Global Average Temperature is measured and calculated to a claimed precision of hundredths of a degree Celsius without first having an agreed-upon definition of what is being measured.

When I went to school, we were taught that all data collection and subsequent calculation requires the prior establishment of [at least] an agreed-upon Operational Definition of the variables, terms, objects, conditions, measures, etc. involved.

A brief statement of the concept: “An operational definition, when applied to data collection, is a clear, concise detailed definition of a measure. The need for operational definitions is fundamental when collecting all types of data.  When collecting data, it is essential that everyone in the system has the same understanding and collects data in the same way. Operational definitions should therefore be made before the collection of data begins.”

Nonetheless, after having informed the world that there is no agreed upon definition for Global Average Temperature, UCAR assures us that:

“The important point is that the trends that emerge from year to year and decade to decade are remarkably similar—more so than the averages themselves. This is why global warming is usually described in terms of anomalies (variations above and below the average for a baseline set of years) rather than in absolute temperature.”

In fact, the annual anomalies themselves differ from one another by more than 0.49°C — an amount just slightly smaller than the whole reported temperature anomaly from 1987 to date (a 30-year climate period).  [The difference between GISS June 2017 and UAH June 2017.]

So, let’s summarize:

  1. We are told that 2015, the HOTTEST year ever, was …. what? ….. 59°F or 15° C – which is not hot except maybe in the opinion of the Inuit and other Arctic peoples — which may be a clue as to why they really talk in anomalies instead of absolute temperatures.
  2. Although a great deal of fuss is being made out of Global Average Temperature, there is no agreed upon definition of what Global Average Temperature actually means or how to calculate it.
  3. Despite the problems of #2 above, major scientific groups around the country and the world are happily calculating away on the as-yet undefined metric, each in a slightly different way.
  4. Luckily (literally, apparently) the important point is that although all the groups get different answers to the Global Average Surface Temperature question – we suppose it’s because of that lack of an agreed upon definition of what they are calculating — the trends they find are “remarkably similar”. [That choice of wording does not fill me with confidence in the scientific rigor of the findings — it sounds so much like my term, “luckily”].  Even less reassuring is being told that the trends are “more [remarkably similar] … than the averages themselves.”
  5. And finally: because there is no agreed upon definition of Global Average Temperature, the results for this undefined metric from the various groups are less [remarkably] similar than the trends, and even the calculated anomalies themselves from the different groups are as far apart from one another as the entire claimed temperature rise over the last 30-year climatic period.

# # # # #

 

Author’s Comment Policy:

Although some of this brief note is intended tongue-in-cheek, I found the UCAR page interesting enough to comment on.

Certainly a far cry from settled science — both parts by the way — not settled — and [some of it] not solid science.

I’m happy to read your comments and reply — but not to Climate Warriors.

# # # # #

 

 

 

206 Comments
John W. Garrett
August 16, 2017 12:22 pm

Now I’m really confused 😉
Thank you (as always), Kip.

Gamecock
August 16, 2017 12:29 pm

“I believe that climate scientists put decimal points in their forecasts to show they have a sense of humor.”
H/T William Gilmore Simms

Chris4692
August 16, 2017 12:34 pm

An average is not necessary. Think more in terms of an index, such as the Dow Jones or S&P 500 stock indexes. It does not matter what the number is when finding a trend; it only matters that the index is calculated consistently, the same way each time.

MarkW
Reply to  Chris4692
August 17, 2017 7:54 am

Chris, if the index you calculate doesn’t at least approximate the movement of the whole, then it is worthless.
That’s one reason why the DOW has fallen out of favor. 100 years ago, the top 50 companies represented the bulk of the total value in the market. Today the top 50 companies represent only a tiny fraction of total market value.
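What the index idea would look like in practice can be sketched in a few lines (station names borrowed from the thread below; the numbers are entirely invented). The level of the index is arbitrary, but because the recipe never changes, the year-to-year change is comparable:

```python
# Entirely invented yearly means (deg C) for a fixed "index" station list.
readings = {
    2015: {"Armagh": 9.8, "De Bilt": 10.5, "Darwin": 27.9},
    2016: {"Armagh": 10.0, "De Bilt": 10.7, "Darwin": 28.0},
}

def station_index(year):
    """Same fixed recipe every period: a simple mean over the same stations."""
    vals = readings[year].values()
    return sum(vals) / len(vals)

# The level (about 16 deg C here) is as arbitrary as the Dow's points,
# but the change is comparable because the recipe never varies.
trend = station_index(2016) - station_index(2015)
print(round(trend, 2))  # 0.17
```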

Chip
August 16, 2017 12:35 pm

I’ve been a science geek all my life with hobbies ranging from astronomy to ornithology, so initially I took the climate scientists at face value. Then I stumbled into Climate Audit and WUWT, and was blown away by the politics and deceit.
It’s very unfortunate that such a new and promising scientific field was hijacked by politicians and ideologues. Caution, skepticism and moderation are no match for fear and paranoia. But I think we’ve passed peak hysteria, even if the politicians and media will refuse to let go.

David Cage
August 16, 2017 12:42 pm

If the earth approximates to a black body or is even remotely close to it are all the photographs of it from space shown by NASA faked in a studio or photoshopped?

Reply to  David Cage
August 16, 2017 1:21 pm

David,
The Earth, as viewed from space, is not a black body, but the Moon is once you subtract the reflected energy, and the Earth would be too if not for its atmosphere. Relative to the emission behavior of something like a planet or moon, reflected light is irrelevant to the radiant balance, except indirectly by its absence. It might seem that the Moon is very bright, but its albedo is only about 0.12, where if its albedo were 1.0, it would be as bright as the Sun with a temperature of absolute 0 and no emissions in the LWIR!
If you look at Earth in the LWIR and were only concerned about the total average emissions, it would be indistinguishable from an ideal BB at about 255K. If you further examined the emitted spectrum, you would notice that the peak average emissions (color temperature per Wien’s displacement) for clear skies corresponds to the average temperature of the surface below and for cloudy skies corresponds to the temperature of the cloud tops when adjusted for non unit cloud emissivity, but in both cases, the spectrum has gaps arising from GHG absorption, reducing the total emitted energy to what an ideal BB at 255K would emit.
It’s important to point out that the emission temperature of Earth is dominated by the emission temperature of clouds covering about 2/3 of the planet, which for Earth clouds is about 262K, so the NET absorption band attenuation required by GHG’s for the emissions to be equivalent to BB at 255K is not a whole lot.

Lance Wallace
August 16, 2017 1:34 pm

The first (and only) time I ever looked at all 40 or so of the CMIP models, I made the “mistake” of calculating absolute temperatures rather than anomalies. I was astounded to note that the different models varied by about 3 degrees C in their baseline absolute temperature for their starting year (1880, I think). Now consider two models differing by 3 C. Each model will include some areas of the globe that are below the freezing point of water, but one will have a much higher area of ice than the other, affecting estimates of albedo, etc. So that alone would lead to major changes in how well each model matches reality. Since all models are tuned, each will adopt a different method of tuning in order to match historical records. So we would see some more or less arbitrary choices of aerosols, clouds, and other items of great uncertainty in order to make the fudge factors work.

Reply to  Lance Wallace
August 16, 2017 1:44 pm

Yes. See Mauritsen 2013 on the absolute temperature disparities in CMIP3 and 5 and the model tuning implications. Discussed in essay Models all the way Down.

Michael Jankowski
August 16, 2017 1:34 pm

Well at least they didn’t try to report it to 0.01.

hunter
August 16, 2017 1:48 pm

Well look to how religions in the past reconciled the contradictory, vague and deceptive aspects of their various scriptures and dogmas.
The climatocracy are doing much the same.

john harmsworth
Reply to  hunter
August 16, 2017 3:07 pm

Who is their God? Al Gore? Lol! He’s big enough.

u.k.(us)
August 16, 2017 2:02 pm

Who wants to get rich beyond their wildest dreams ?
Invent an a/c compressor that isn’t so loud/annoying that the cicadas compete with the noise.

August 16, 2017 2:12 pm

Independent of incomparable baselines, the global anomalies aren’t fit for purpose for a basic reason: inadequate coverage. This is true for land only, where large swaths of Africa, South America, and northern Eurasia either have no data or no long-term data. The same is true, except more so, for the oceans in the pre-float/Argo era. Best would be to create a Dow Jones-like global index of good, well-maintained stations with long records. For example Rutherglen Ag and Darwin in Australia, DeBilt Netherlands, Sulina Romania, Armagh Ireland, Hokkaido Japan, Lincoln (University station) Nebraska, Reykjavik Iceland, Durban South Africa. Note not all are GHCN. No homogenization. Perhaps coverage-area weighted. That way one has a land-record unbiased anomaly trend. Why has this not been done? I suspect because it would show little or no warming, just like each of the named candidates for the index.

August 16, 2017 2:23 pm

When will the next “base” period be defined and used?

Malrob
August 16, 2017 2:29 pm

Is global average temperature a meaningful concept? A bit like averaging all the numbers in the phone book – only one phone will answer.

Reply to  Malrob
August 16, 2017 2:44 pm

The global average in C is not particularly useful because there is too much latitudinal and regional variation. But a correctly computed global anomaly is (for climate trends) because it refers to change over time relative to each specific station independently. That change over time can meaningfully be averaged globally. The residual big problem is individual station quality. As said above, most of GHCN is not fit for purpose. And there are many fit-for-purpose stations not in GHCN. Rutherglen Australia, University of Nebraska at Lincoln, and University of Durban, South Africa are examples noted above.
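The per-station anomaly method described here can be sketched in a few lines (invented station values, for illustration only): each station is compared with its own base-period mean, and only those per-station changes are averaged.

```python
# Invented data: each station's own base-period mean and a 2015 reading (deg C).
stations = {
    "warm_station": {"base_mean": 25.0, "temp_2015": 25.5},
    "cool_station": {"base_mean": 10.0, "temp_2015": 10.8},
}

# Anomaly per station, relative to that station's OWN base-period mean...
anomalies = [s["temp_2015"] - s["base_mean"] for s in stations.values()]

# ...then averaged; the large absolute difference between stations drops out.
global_anomaly = sum(anomalies) / len(anomalies)
print(round(global_anomaly, 2))  # 0.65
```

This is why anomalies can be averaged across a 15 C station and a 25 C station without first agreeing on what the “average temperature” of the pair means.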

MarkW
Reply to  Malrob
August 17, 2017 7:59 am

A global average with absolute precision is impossible.
However you can get an average, it’s just that the error bars will depend on the number and distribution of your sensors.
The more sensors you have and the more complete the distribution, then the lower your error bars will be.
The error bars for the current climate network would have to be at least 5C, given the paucity of sensors and the extremely poor distribution. (Most are in N. America and W. Europe)
As you go back into the past, the quality, number, and distribution of the sensors all get worse.
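As a rough illustration of the sensor-count point (the 1/√n rule assumes independent, unbiased sensors, which a sparse and clustered network does not satisfy; the 0.5 C per-sensor figure is made up):

```python
import math

sigma = 0.5  # hypothetical per-sensor uncertainty, deg C

# Standard error of the mean shrinks as sigma / sqrt(n) for independent sensors.
for n in (10, 100, 1000):
    print(n, round(sigma / math.sqrt(n), 3))
# With correlated errors or uneven coverage, the real error bars shrink
# far more slowly than this best case.
```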

knr
August 16, 2017 3:10 pm

These 30-year periods have no scientific meaning or value. This period came about because it was hoped that, given this long, the failure of reality to match the models regarding the relationship between CO2 and temperature increases would be overcome by a change in reality.
It simply has no meaning, no value, no validity other than as a political tool. It could just as easily have been 40 or 35 years without making any difference at all.
It is indeed a classic example of an area where numbers are picked out of thin air and whose only value comes from their perceived impact in supporting ‘the cause’.

Gary Pearse
Reply to  knr
August 17, 2017 12:32 am

60 years would be a better base; then we have the other half of the sine wave on the main sub-century natural variability curve. This helped warming-disaster proponents over the first half of the wave, but now it has peaked and is going down again, much to their chagrin. You will see this base period changed before they endure the return of the Pause.

D. J. Hawkins
Reply to  knr
August 17, 2017 10:48 am

The concept of climate normals goes back to the 1930’s in the US. I suspect the interval was chosen partly due to limited coverage and more so due to the onerous task of doing the necessary calculations by hand.

Robber
August 16, 2017 3:27 pm

Great news. The world’s average temperature is 15 degrees, and that is the hottest the world has been since the industrial revolution began. Global warming? Still a bit chilly isn’t it? Where’s Josh with an appropriate cartoon?

August 16, 2017 5:27 pm

“In fact, the annual anomalies themselves differ one-from-another by > 0.49°C —”
Apologies for not being able to put up the plot now, but it would be good to see a moving SD for 120 months of differences. Places like Argentina (hardly a backwater in the early 20th century) had data only for Buenos Aires until 1960. Surely the different methods mean that the spread of differences decreases with time?

Reply to  Robert B
August 16, 2017 10:03 pm

[plot: moving SD of the difference between BEST and CRUTemp]
And it does, until mid-century, down to what is expected for monthly uncertainties of 0.1 C (times √2 for a difference). This is for the difference between BEST and CRUTemp.
Why does it get worse as third-world countries start taking temperatures seriously?

billw1984
Reply to  Robert B
August 17, 2017 5:38 am

Do both of these include the oceans, or do both leave them out? You need to compare two similar things. Also, I’m not sure you can really do a standard deviation with just two numbers. Comparison with 3–4 land-only data sets would be more informative. But I understand what you are getting at.

Reply to  Robert B
August 17, 2017 8:21 pm

They’re both land only, and the SD is of 60 values (the differences in 60 consecutive months); the 60 is an arbitrary choice, as an indicator of more precise measurements as more and better data come in. That they seem to correlate better where the 1940s blip needs to go, rather than over the past 30 years, is a concern.

Robert from oz
August 16, 2017 6:31 pm

How dare you guys send the hockey schtick Mann to Oz; can’t you keep him over there somehow? We have enough fake scientists here already.

Pamela Gray
August 16, 2017 7:01 pm

On geological time scales, what we are discussing in this post is an indiscernible rise in temperature at a time when we should be warm anyway. Carry on.

Gary Pearse
August 17, 2017 12:21 am

Kip, that’s just for starters. When they systematically, through an algorithm, keep changing the past data, both their global temperatures and the anomalies of previous base periods have a life of only one month. As Mark Steyn said at the Senate Committee, how can one consider what the temperature will be in 2100 when we still don’t know what it will be in 1950! This means even their models are tuned to something that doesn’t exist anymore.

J Streb
August 17, 2017 3:58 am

Standard day sea level definition = 59 deg F / 15 deg C, 1013 millibars / 14.7 psi.

Mark Rae
August 17, 2017 5:51 am

Thanks! Interesting article

tom0mason
August 17, 2017 7:05 am

“Although some of this brief note is intended tongue-in-cheek, I found the UCAR page interesting enough to comment on.
Certainly a far cry from settled science — both parts by the way — not settled — and [some of it] not solid science.”

But is it not the same with so much in climate science?
Just testing to see how long before this comment is deleted.

D. J. Hawkins
Reply to  Kip Hansen
August 17, 2017 10:51 am

It’s simple projection, Kip. It’s SOP for the warmunists, so they believe everyone must do it.

tom0mason
Reply to  Kip Hansen
August 17, 2017 11:33 am

I’ve had some problems with WordPress and/or Firefox lately. Although it appears my comments were being accepted they were not. Things seemed to have settled down after uninstalling/reinstalling the browser (Firefox).

tom0mason
Reply to  tom0mason
August 17, 2017 1:40 pm

@Forrest Gardener
Basically my comment above was a test, as my comments appeared to have been accepted and posted; however, after closing the Firefox browser and then restarting any browser, the comments had disappeared.
I finally realized it was probably Firefox, and remembered it had updated itself twice recently. I can only think something got screwed up in the update process.
It was not just this site but also other WordPress sites, and only with Firefox.
As I said, a complete uninstall/reinstall of Firefox today appears (I hope) to have cleared the issue.
P.S. I am on a Linux system which has been very stable for more than 5 years.

August 17, 2017 7:17 am

Worse than not knowing the present temperature: the pre-industrial temperature is even more uncertain. We are told by COP21 we should not exceed 2 C above the pre-industrial temperature. But the best temperature record has a range of 7 to 10 C. A 3-degree spread is greater than the 2-degree target. And no SST data. The uncertainty is unknown. They have no idea what absolute temperature they are aiming for.
http://blogs.nature.com/news/files/2012/07/berkeley.jpg

Reply to  Kip Hansen
August 17, 2017 7:16 pm

Here’s link. Slightly different 1750s range = 6.6 to 9.6 C
http://berkeleyearth.lbl.gov/regions/global-land

tadchem
August 17, 2017 1:24 pm

I was taught that an operational definition is a definition of a term in a manner so explicit that all persons applying the definition as a criterion for identifying something would come to exactly the same conclusion as to whether or not the definition applies in any particular instance of its attempted application.
In empirical science an operational definition of a quantity is a definition that references the complete, replicable process for quantifying the result of the operation. This, in principle, allows separate investigators to apply the same process to the determination of a quantity and to directly compare their results. For example, ‘temperature’ can be measured by a process that involves the comparison of voltages between two thermocouples, one of which is in thermal contact with the object of interest and the other is in contact with a specific medium of precisely known reference temperature.
Logically an operational definition identifies a well-characterized parent group to which a term belongs, along with necessary and sufficient criteria to distinguish it from all other members of the same group. For example, to define ‘sanguine’ as ‘the color of blood’ identifies the parent group (‘colors’) and provides a criterion (‘is your color the same color as blood?’) that clearly distinguishes it from other members of the parent group.
The important point of an operational definition is that it completely removes all individual variation among observers from the exercise.

August 18, 2017 8:42 am

Hansen:
I have previously criticized you for trying to be a ‘jack of all trades’ writer,
covering too many subjects to be an expert in all of them.
I particularly criticized your article on obesity where you claimed
calories didn’t matter — something fat people love to hear!
To demonstrate that I have nothing against you, and only judge
what you write:
I can’t tell you how disappointed I am after reading this article,
and finding it was better than a related post I made on my climate change blog:
http://elonionbloggle.blogspot.com/2017/08/total-confusion-on-absolute-mean-global.html
I congratulate you on a good article, and selecting a far too often forgotten subject:
What is the absolute mean global temperature?
A secondary question, ignored just as often, and perhaps a subject for your next article, is:
How can one number represent the ever changing climate on our planet?

Reply to  Kip Hansen
August 22, 2017 6:20 am

Hansen:
I started reading your “Law of Averages” series when they were published, but stopped reading during Part 2, not satisfied with your understanding of economics. (I’ve written a Finance & Economics newsletter since 1977 as a hobby, and have a Finance MBA).
But … I read your Part 3 yesterday, and it turned out to be the best of the three parts, by far.
I had typed a comment on your Averages Part 2 article right after I read it in June, but never posted it:
I decided to leave you alone after giving you so much grief about your obesity article.
I changed my mind today, because my comments on economic data quality / data adjustments might lead you to write a new article on climate data quality / data adjustments, assuming you haven’t already done that.
There are four types of economics “adjustments”:
1) Needed “adjustments” with good explanations,
2) Needed adjustments that are ignored,
3) Unnecessary “adjustments” with no logical explanation, and
4) “Adjustments” made long after the initial data release, hoping no one will notice.
Hansen wrote in Averages Part 2:
“I am not an economist …”
and then proved it.
Your “economics” did not include needed data adjustments in most of the charts
… and there is a better way to compare households.
I wrote an Income Inequality article in my January 2013 economics newsletter
that explained the many data adjustments needed.
I also found a better way to measure “inequality”:
“Spending is easy to measure, and there has been little or no change of Rich vs. Poor spending inequality in the past few decades. Income is hard to measure accurately. The increasing income inequality trend in the past few decades has been greatly exaggerated by “data mining”.”
Four examples of some of the many factors that distort
typical long-term household income analyses
… unless data adjustments are made:
(1) Household size and age has been changing:
Smaller and older.
More single person households = lower household income.
More households with only retired people = lower household income.
(2) 1980’s Adjusted Gross Income (AGI) definition changes:
The upper class shifted where they reported their business income after their personal tax rates suddenly became lower than the corporate tax rates they had been paying. The IRS even warned about that: “(AGI) Data for years 1987 and after are not comparable to pre-1987 data because of major changes in the definition of adjusted gross income.”
(3) People changing income quintiles during their lives:
The Top x% are not the same people / households every year,
especially the Top 1% and Top 5%.
Comparisons usually assume they are.
(4) Middle class income deferred until after retirement:
IRA and 401k retirement savings contributions “hide” middle class income
until withdrawals after retirement.
The maximum contribution limits allow high income households
to “hide” a much smaller percentage of their incomes
than middle-class households.

Reply to  Kip Hansen
August 25, 2017 10:01 am

TIME magazine is as useful and accurate as Wikipedia = hopeless.
.
Left-wing bias and pro-consensus on every subject.
I recall your other favorite “source” on obesity was the left-biased New York Times!
Your points on obesity were wrong, even if there was a 99.9% consensus in your favor,
which of course there was not.
Calorie intake and usage is simple physics that you, or anyone else, have not proven wrong.
And you also are as stubborn as a junk yard dog,
although I like that characteristic.
I simply pointed out that the economics used in YOUR Averages Part 2 post was biased
in favor of the consensus that income equality has been increasing at a
fast rate in recent decades.
I showed how easy it is to present data / charts that lead readers to a wrong conclusion
if you don’t thoroughly understand the data in the charts.
The same is true in climate science.
Understanding the data, and making adjustments needed for a fair comparison,
would show that the income inequality gap has not changed that much in the past 40 years
— and the claim of a rapidly growing income gap is grossly overstated.
Making needed data adjustments (or ignoring needed adjustments) can completely change the conclusion
from raw data.
In climate science I believe “adjustments” have too often been used to make the temperature actuals more closely match the CO2-controls-the-climate theory / models.