A Quick Note from Kip Hansen
A quick note for the amusement of the bored but curious.
While in search of something else, I ran across this enlightening page from the folks at UCAR/NCAR [The University Corporation for Atmospheric Research/The National Center for Atmospheric Research — see pdf here for more information]:
“What is the average global temperature now?”
We are first reminded that “Climate scientists prefer to combine short-term weather records into long-term periods (typically 30 years) when they analyze climate, including global averages.” As we know, these 30-year periods are referred to as “base periods” and different climate groups producing data sets and graphics of Global Average Temperatures often use differing base periods, something that has to be carefully watched for when comparing results between groups.
Then things get more interesting, in that we get an actual number for Global Average Surface Temperature:
“Today’s global temperature is typically measured by how it compares to one of these past long-term periods. For example, the average annual temperature for the globe between 1951 and 1980 was around 57.2 degrees Fahrenheit (14 degrees Celsius). In 2015, the hottest year on record, the temperature was about 1.8 degrees F (1 degree C) warmer than the 1951–1980 base period.”
Quick minds see immediately that 1.8°F warmer than 57.2°F is actually 59°F [or 15° C] which they simply could have said.
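For anyone who wants to check that arithmetic themselves, here is a minimal sketch (the 57.2°F baseline and 1.8°F anomaly are the figures quoted from the UCAR page above; nothing else is assumed):

```python
# Baseline and anomaly figures as quoted from the UCAR page
baseline_f = 57.2   # 1951-1980 global average, degrees Fahrenheit
anomaly_f = 1.8     # 2015 anomaly, degrees Fahrenheit

absolute_f = baseline_f + anomaly_f           # absolute temperature, F
absolute_c = (absolute_f - 32.0) * 5.0 / 9.0  # same value in Celsius

print(round(absolute_f, 1), round(absolute_c, 1))  # 59.0 15.0
```

As the text says, UCAR could simply have printed the 59°F / 15°C result directly.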
UCAR/NCAR goes on to “clarify”:
“Since there is no universally accepted definition for Earth’s average temperature, several different groups around the world use slightly different methods for tracking the global average over time, including:
NASA Goddard Institute for Space Studies
NOAA National Climatic Data Center
UK Met Office Hadley Centre”
We are told, in plain language, that there is no accepted definition for Earth’s average temperature, but assured that it is scientifically tracked by the several groups listed.
It may seem odd to the scientifically-minded that Global Average Temperature is measured and calculated to the claimed precision of hundredths of a degree Celsius without first having an agreed upon definition for what is being measured.
When I went to school, we were taught that all data collection and subsequent calculation requires the prior establishment of [at least] an agreed upon Operational Definition of the variables, terms, objects, conditions, measures, etc. involved.
A brief of the concept: “An operational definition, when applied to data collection, is a clear, concise detailed definition of a measure. The need for operational definitions is fundamental when collecting all types of data. When collecting data, it is essential that everyone in the system has the same understanding and collects data in the same way. Operational definitions should therefore be made before the collection of data begins.”
Nonetheless, after having informed the world that there is no agreed upon definition for Global Average Temperature, UCAR assures us that:
“The important point is that the trends that emerge from year to year and decade to decade are remarkably similar—more so than the averages themselves. This is why global warming is usually described in terms of anomalies (variations above and below the average for a baseline set of years) rather than in absolute temperature.”
In fact, the annual anomalies themselves differ from one another by more than 0.49°C — an amount only slightly smaller than the entire reported temperature anomaly from 1987 to date (a 30-year climate period). [The difference between GISS June 2017 and UAH June 2017.]
So, let’s summarize:
- We are told that 2015, the HOTTEST year ever, was … what? … 59°F or 15°C — which is not hot, except maybe in the opinion of the Inuit and other Arctic peoples — which may be a clue as to why they really talk in anomalies instead of absolute temperatures.
- Although a great deal of fuss is being made out of Global Average Temperature, there is no agreed upon definition of what Global Average Temperature actually means or how to calculate it.
- Despite the problems noted in the previous point, major scientific groups around the country and the world are happily calculating away on the as-yet undefined metric, each in a slightly different way.
- Luckily (literally, apparently) the important point is that although all the groups get different answers to the Global Average Surface Temperature question – we suppose it’s because of that lack of an agreed upon definition of what they are calculating — the trends they find are “remarkably similar”. [That choice of wording does not fill me with confidence in the scientific rigor of the findings — it so sounds like my term – “luckily”]. Even less reassuring is being told that the trends are “more [remarkably similar] … than the averages themselves.”
- And finally, because there is no agreed upon definition of Global Average Temperature, and the results for the undefined metric from the various groups are less [remarkably] similar than the trends, even the calculated anomalies themselves from the different groups are as far apart from one another as the entire claimed temperature rise over the last 30-year climatic period.
# # # # #
Author’s Comment Policy:
Although some of this brief note is intended tongue-in-cheek, I found the UCAR page interesting enough to comment on.
Certainly a far cry from settled science — both parts by the way — not settled — and [some of it] not solid science.
I’m happy to read your comments and reply — but not to Climate Warriors.
# # # # #
Now I’m really confused 😉
Thank you (as always), Kip.
“I believe that climate scientists put decimal points in their forecasts to show they have a sense of humor.”
H/T William Gilmore Simms
An average is not necessary. Think more in terms of an index such as the Dow Jones, or S&P500 stock indexes. It does not matter what the number is when finding a trend, it only matters that the index is consistently calculated the same way each time it is calculated.
Chris ==> Ah, if that were only the case we would be dealing with…..something like….Science.
The CAGW trick is to keep calculating, keep re-defining, keep adjusting (both the past and the present — they’d adjust the future if they could figure out how to do it). The only thing the CAGW promoters don’t do is change their overall narrative (kinda like the NY Times, which has a pre-established editorial narrative for all possible stories; journalists write to the narrative, not the actual news).
Chris, if the index you calculate doesn’t at least approximate the movement of the whole, then it is worthless.
That’s one reason why the DOW has fallen out of favor. 100 years ago, the 50 top companies represented the bulk of the total value in the market. Today the fraction of total wealth represented by the top 50 companies is only a tiny fraction of total market value.
Chris & MarkW ==> The index idea is perfectly valid for some sorts of data — the trick is knowing what the Index really represents and what movements of the index mean in the real world.
In the stock market, an index can give a good general idea of investor confidence (well, these days, computer-trading programmatic confidence, maybe). A consumer price index can inform us about prices consumers pay for common items. A Grocery Basket Index tells us if prices of basic foods are rising or falling, and by how much.
An Index of Land & Sea Surface Temperatures can tell us only what it is really counting. It probably cannot tell us if the Earth is warming, and certainly not why.
It is the SIGNIFICANCE — the implied meaning — of these indexes that is false. Indexes are great propaganda tools, because they produce scientific-looking numbers to which can be attached all sorts of meanings that seem reasonable at first glance.
I’ve been a science geek all my life with hobbies ranging from astronomy to ornithology, so initially I took the climate scientists at face value. Then I stumbled into Climate Audit and WUWT, and was blown away by the politics and deceit.
It’s very unfortunate that such a new and promising scientific field was hijacked by politicians and ideologues. Caution, skepticism and moderation are no match for fear and paranoia. But I think we’ve passed peak hysteria, even if the politicians and media will refuse to let go.
If the earth approximates to a black body or is even remotely close to it are all the photographs of it from space shown by NASA faked in a studio or photoshopped?
Cage ==> You can probably be sure that ALL photos on the internet have been Photoshopped — to one degree or another. I actually Photoshop even the simple graphs I use in my essays — to increase contrast, to take out background greys, to improve readability (especially for borrowed images of graphs). I also Photoshop images (photos) to improve their appearance on the web — years ago one had to do this or everything was ghastly — not so much today — but I do it as a matter of course.
David,
The Earth, as viewed from space, is not a black body, but the Moon is, once you subtract the reflected energy — and the Earth would be too, if not for its atmosphere. Relative to the emission behavior of something like a planet or moon, reflected light is irrelevant to the radiant balance, except indirectly by its absence. It might seem that the Moon is very bright, but its albedo is only about 0.12; if its albedo were 1.0, it would be as bright as the Sun while having a temperature of absolute zero and no emissions in the LWIR!
If you look at Earth in the LWIR and were only concerned with the total average emissions, it would be indistinguishable from an ideal BB at about 255K. If you further examined the emitted spectrum, you would notice that the peak average emissions (color temperature per Wien’s displacement law) for clear skies correspond to the average temperature of the surface below, and for cloudy skies correspond to the temperature of the cloud tops when adjusted for non-unit cloud emissivity; but in both cases, the spectrum has gaps arising from GHG absorption, reducing the total emitted energy to what an ideal BB at 255K would emit.
It’s important to point out that the emission temperature of Earth is dominated by the emission temperature of clouds covering about 2/3 of the planet, which for Earth clouds is about 262K, so the NET absorption band attenuation required by GHG’s for the emissions to be equivalent to BB at 255K is not a whole lot.
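For context, the ~255K figure discussed in these comments is the standard effective-emission-temperature calculation. Here is a minimal sketch, assuming the usual textbook round numbers (solar constant of about 1361 W/m² and a Bond albedo of about 0.30; both are assumptions, not measurements from this discussion):

```python
# Sketch of the standard effective-emission-temperature calculation
# behind the ~255 K figure mentioned above.
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1361.0               # solar constant at Earth, W/m^2 (assumed)
ALBEDO = 0.30            # Earth's Bond albedo (assumed)

absorbed = S * (1.0 - ALBEDO) / 4.0   # absorbed flux averaged over the sphere
t_eff = (absorbed / SIGMA) ** 0.25    # effective emission temperature, K

print(round(t_eff, 1))  # roughly 255 K
```

The result depends directly on the assumed albedo, which is one reason the cloud-cover point above matters so much.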
The first (and only) time I ever looked at all 40 or so of the CMIP models, I made the “mistake” of calculating absolute temperatures rather than anomalies. I was astounded to note that the different models varied by about 3 degrees C in their baseline absolute temperature for their starting year (1880, I think). Now consider two models differing by 3 C. Each model will include some areas of the globe that are below the freezing point of water, but one will have a much higher area of ice than the other, affecting estimates of albedo, etc. So that alone would lead to major changes in how well each model matches reality. Since all models are tuned, each will adopt a different method of tuning in order to match historical records. So we would see some more or less arbitrary choices of aerosols, clouds, and other items of great uncertainty in order to make the fudge factors work.
Yes. See Mauritsen 2013 on the absolute temperature disparities in CMIP3 and 5 and the model tuning implications. Discussed in essay Models all the way Down.
Well at least they didn’t try to report it to 0.01.
Well look to how religions in the past reconciled the contradictory, vague and deceptive aspects of their various scriptures and dogmas.
The climatocracy are doing much the same.
Who is their God? Al Gore? Lol! He’s big enough.
Who wants to get rich beyond their wildest dreams ?
Invent an a/c compressor that isn’t so loud/annoying that the cicadas compete with the noise.
Independent of incomparable baselines, the global anomalies aren’t fit for purpose for a basic reason: inadequate coverage. This is true for land only, where large swaths of Africa, South America, and northern Eurasia either have no data or no long-term data. The same is true, except more so, for the oceans in the pre-float/Argo era. Best would be to create a Dow Jones-like global index of good, well-maintained stations with long records. For example: Rutherglen Ag and Darwin in Australia, De Bilt Netherlands, Sulina Romania, Armagh Ireland, Hokkaido Japan, Lincoln (University station) Nebraska, Reykjavik Iceland, Durban South Africa. Note not all are GHCN. No homogenization. Perhaps coverage-area weighted. That way one has an unbiased land-record anomaly trend. Why has this not been done? I suspect because it would show little or no warming, just like each of the named candidates for the index.
Rud ==> Gads, hate to be serious on this thread….the satellite products are problematic as well, as they are derived metrics — they do not measure temperature itself but ‘something’ else, which is then translated into what is believed to be the equivalent of ‘surface temperature’.
That said, they may, in the long run, prove to be useful for climate studies.
When will the next “base” period be defined and used?
Matthew W ==> There are some groups using the 1981–2010 base period.
PS: Try not to ask serious questions when we are fooling around and making fun……Kip 🙂
Is global average temperature a meaningful concept? A bit like averaging all the numbers in the phone book – only one phone will answer.
The global average in °C is not particularly useful because there is too much latitudinal and regional variation. But a correctly computed global anomaly is (for climate trends), because it refers to change over time relative to each specific station independently. That change over time can meaningfully be averaged globally. The residual big problem is individual station quality. As said above, most of GHCN is not fit for purpose. And there are many fit-for-purpose stations not in GHCN. Rutherglen Australia, the University of Nebraska at Lincoln, and the University of Durban, South Africa are examples noted above.
Rud ==> Some thought has to be given to the idea that anomalies “refer … to change over time relative to each specific station independently” and thus “can meaningfully be averaged globally”.
Obviously one can do it mathematically, and maybe even work out a way to make it seemingly properly weighted for area, etc., but it will not, and cannot, tell us anything about the quantity of extra solar energy being retained by the Earth due to GHGs. It will only tell us something (mostly political) about how temperatures are generally changing — and even that is only maybe meaningful.
Forrest ==> Well, there are lots of ways to retain the useful and necessary information from weather station data. Aggregating is not always a good or even useful idea — visualizations that show multiple data sets on the same graph, for instance, can give a better idea of regional trends or boundaries.
See my recent series on Averages.
Forrest ==> ah….aggregating onto a single visual….yes, often a very good choice. There are a lot of interesting ways to show data visually, each making its own contribution to our understanding.
In CliSci, the continuous insistence on showing a single global average metric for climate phenomena has been obscuring and hiding most of the important information for decades.
A global average with absolute precision is impossible.
However you can get an average, it’s just that the error bars will depend on the number and distribution of your sensors.
The more sensors you have and the more complete the distribution, then the lower your error bars will be.
The error bars for the current climate network would have to be at least 5C, given the paucity of sensors and the extremely poor distribution. (Most are in N. America and W. Europe)
As you go back into the past, both the quality, number and distribution of the sensors gets worse.
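MarkW’s point about sensor count and distribution can be illustrated with a toy simulation; the 15°C “true” mean and the 5°C per-sensor scatter below are illustrative assumptions only, not real network figures:

```python
# Toy illustration (not a real station network): the standard error of
# an average falls as 1/sqrt(N), so a sparse, poorly distributed sensor
# network necessarily means wide error bars.
import math
import random

random.seed(0)
TRUE_MEAN = 15.0   # hypothetical "true" global mean, deg C (assumed)
SPREAD = 5.0       # hypothetical per-sensor scatter, deg C (assumed)

results = []
for n in (10, 100, 1000):
    readings = [random.gauss(TRUE_MEAN, SPREAD) for _ in range(n)]
    mean = sum(readings) / n
    stderr = SPREAD / math.sqrt(n)   # theoretical standard error of the mean
    results.append((n, round(mean, 2), round(stderr, 2)))
    print(n, round(mean, 2), round(stderr, 2))
```

Note this toy assumes independent, identically distributed sensors; real stations are spatially correlated and unevenly sited, which makes the honest error bars wider still.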
MarkW ==> Yes, the GAST error bars, if based on reality, would be larger/wider than the rise in temperature over the industrial era.
That said, Mosher is probably right: at least we know it is warmer now than in the depths of the Little Ice Age.
these 30-year periods have no scientific meaning or value. This period came about because it was hoped that, given this long, the failure of reality to match the models as regards the relationship between CO2 and temperature increases would be overcome by a change in reality.
It simply has no meaning, no value, no validity other than as a political tool. It could just as easily have been 40 or 35 years without making any difference at all.
It is indeed a classic example of an area where numbers are picked out of thin air and whose only value comes from their perceived impact in supporting ‘the cause’.
knr ==> Yes, that is correct. There is nothing particularly scientific or even magical about the 30 in 30-year-climatic-average. It is just a convention in today’s climate science (might change tomorrow).
60 years would be a better base; then we would have the other half of the sine wave on the main sub-century natural variability curve. This helped the warming-disaster proponents over the first half of the wave, but now it has peaked and is going down again, much to their chagrin. You will see this base period changed before they endure the return of the Pause.
The concept of climate normals goes back to the 1930’s in the US. I suspect the interval was chosen partly due to limited coverage and more so due to the onerous task of doing the necessary calculations by hand.
Great news. The world’s average temperature is 15 degrees, and that is the hottest the world has been since the industrial revolution began. Global warming? Still a bit chilly isn’t it? Where’s Josh with an appropriate cartoon?
Robber ==> Mother to child at breakfast table: “OK, Jimmy, your lunch is in your backpack…the school bus will be here soon ..let me check the weather..Oh, Hottest Day ever again, 59 degrees, better wear your sweater”….
“In fact, the annual anomalies themselves differ one-from-another by > 0.49°C —”
Apologies for not being able to put up the plot now, but it would be good to see a moving SD for 120 months of differences. Places like Argentina (hardly a backwater in the early 20th century) have data only for Buenos Aires until 1960. Surely the different methods mean that the spread of differences decreases with time?
And it does, until mid-century, to what is expected for monthly uncertainties of 0.1°C (or ×√2 for the difference). This is for the difference between BEST and CRUTEM.
Why does it get worse as third-world countries start taking temperatures seriously?
Do both of these either have or leave out oceans? Need to compare two similar things. Also, not sure you can really do standard deviation with just two numbers. Comparison with 3-4 land only data sets would be more informative. But, I understand what you are getting at.
They’re both land only, and it is the SD of 60 values (the differences in 60 consecutive months); the 60 is an arbitrary choice, as an indicator of more precise measurements as more and better data come in. That they seem to correlate better where the 1940s blip ‘needs to go’, rather than over the past 30 years, is a concern.
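For readers wanting to reproduce this kind of check, here is a sketch of a 60-month moving standard deviation of the difference between two anomaly series. Synthetic random series (0.1°C monthly noise each) stand in for the real BEST and CRUTEM data, which are not reproduced here:

```python
# Sketch of the moving-SD comparison described above: the standard
# deviation of the monthly difference between two anomaly series,
# computed over a 60-month sliding window.
import math
import random

random.seed(1)
months = 240
series_a = [random.gauss(0.0, 0.1) for _ in range(months)]  # stand-in for
series_b = [random.gauss(0.0, 0.1) for _ in range(months)]  # e.g. BEST, CRUTEM

diff = [a - b for a, b in zip(series_a, series_b)]

def moving_sd(xs, window=60):
    """Sample standard deviation over each full sliding window."""
    out = []
    for i in range(len(xs) - window + 1):
        w = xs[i:i + window]
        m = sum(w) / window
        out.append(math.sqrt(sum((x - m) ** 2 for x in w) / (window - 1)))
    return out

sds = moving_sd(diff)
print(len(sds), round(min(sds), 3), round(max(sds), 3))
```

With two independent series of 0.1°C monthly noise, the difference should hover near 0.1 × √2 ≈ 0.14°C, which is the “√2 for the difference” expectation mentioned above; real data would show the mid-century structure being discussed.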
How dare you guys sending the hockey schtick Mann to oz , can’t you keep him over there somehow , we have enough fake scientists here already .
In geological time scales we are discussing in this post an indiscernible rise in temperature at a time we should be warm anyway. Carry on.
Kip, that’s for starters. When they systematically, through an algorithm, keep changing the past data, both their global temperatures and the anomalies of previous base periods have a life of only one month. As Mark Steyn said at the Senate Committee, how can one consider what the temperature will be in 2100 when we still don’t know what it will be in 1950! This means even their models are tuned to something that doesn’t exist anymore.
Standard day sea level definition = 59 deg F / 15 deg C, 1013 millibars / 14.2 psi.
Thanks! Interesting article
But is it not the same with so much in climate science.
Just testing to see how long before this comment is deleted.
tom0mason ==> Why would you think that anyone would delete your comment?
Most comments here do not even go through moderation — most are simply passed through (after the usual forbidden words automagic-moderation).
Your comment would only be deleted if it was in gross violation of WUWT Policy — it has to be pretty bad to get deleted altogether. The Moderator or the post author (in this case, myself) might “snip” out some bit of egregious offensiveness (death threats, nasty name calling) etc.
Your comment above doesn’t violate any WUWT policy — in fact, doesn’t actually say anything other than you think it might be deleted.
It’s simple projection, Kip. It’s SOP for the warmunists, so they believe everyone must do it.
I’ve had some problems with WordPress and/or Firefox lately. Although it appears my comments were being accepted they were not. Things seemed to have settled down after uninstalling/reinstalling the browser (Firefox).
tom ==> Good, glad you got it sorted out. Neither the management nor the authors here want anyone to feel their input is not welcome. I’ll admit that I will occasionally get a comment that “can’t be posted” … and have to re-write it even though there are no obvious “bad” words etc.
Tom and Forrest ==> you can always ask the Moderator. Including MODERATOR in your comment, either on a post or on the Test page, automagically calls the comment to his/her/its attention. The Mod can check to see if your comment is struck somewhere. The author of an individual post can usually do the same thing.
@Forrest Gardener
Basically my comment above was a test, as my comments appeared to have been accepted and posted; however, after closing the Firefox browser and then restarting any browser, the comments had disappeared.
I finally realized it was probably Firefox, and remembered it had updated itself twice recently. I can only think something screwed up in the update process.
It was not just this site but also on other WordPress sites and only with Firefox.
As I said a complete uninstalling/reinstall of Firefox today and this appears (I hope) to have cleared the issue.
P.S. I am on a Linux system which has been very stable for more than 5 years.
Worse than not knowing the present temperature, the pre-industrial temperature is even more uncertain. We are told by COP21 that we should not exceed 2°C above the pre-industrial temperature. But the best temperature record has a range of 7 to 10°C. A 3-degree spread is greater than the 2-degree target. And there is no SST data. The uncertainty is unknown. They have no idea what absolute temperature they are aiming for.
http://blogs.nature.com/news/files/2012/07/berkeley.jpg
Strangelove ==> Hmmmm….. The one good thing about the BEST graphs is that they include some sort of error bars or CIs, although not wide enough — the global averages before the world wars are really just vague guesses and would have error bars just as wide as those shown for 1750 (no instrumental record goes back that far — and it is too close to the present to use paleo methods, in my opinion).
Where is this graph from — can you give a link?
Here’s link. Slightly different 1750s range = 6.6 to 9.6 C
http://berkeleyearth.lbl.gov/regions/global-land
Lensman ==> Yeah — different. Thanks for the link.
The image you provided is from the original BEST project Results paper in 2012:
static.berkeleyearth.org/papers/Results-Paper-Berkeley-Earth.pdf
which does not appear on their current site, and most links to it returned by Google are broken. This one returns the .pdf file.
I do not trust the BEST data, methods, or motivation — particularly their attempts at the attribution problem.
I was taught that an operational definition is a definition of a term in a manner so explicit that all persons applying the definition as a criterion for identifying something would come to exactly the same conclusion as to whether or not the definition applies in any particular instance of its attempted application.
In empirical science an operational definition of a quantity is a definition that references the complete, replicable process for quantifying the result of the operation. This, in principle, allows separate investigators to apply the same process to the determination of a quantity and to directly compare their results. For example, ‘temperature’ can be measured by a process that involves the comparison of voltages between two thermocouples, one of which is in thermal contact with the object of interest and the other is in contact with a specific medium of precisely known reference temperature.
Logically an operational definition identifies a well-characterized parent group to which a term belongs, along with necessary and sufficient criteria to distinguish it from all other members of the same group. For example, to define ‘sanguine’ as ‘the color of blood’ identifies the parent group (‘colors’) and provides a criterion (‘is your color the same color as blood?’) that clearly distinguishes it from other members of the parent group.
The important point of an operational definition is that it completely removes all individual variation among observers from the exercise.
tadchem ==> And for CliSci, lacking an Operational Definition for one of its most commonly referenced metrics…..? Maybe the same for sea surface temperature (there are at least two different terms in use), and sea level … worse than biology, which lacks an agreed upon definition of what constitutes a “species”.
Hansen:
I have previously criticized you for trying to be a ‘jack of all trades’ writer,
covering too many subjects to be an expert in all of them.
I particularly criticized your article on obesity where you claimed
calories didn’t matter — something fat people love to hear!
To demonstrate that I have nothing against you, and only judge
what you write:
I can’t tell you how disappointed I am after reading this article,
and finding it was better than a related post I made on my climate change blog:
http://elonionbloggle.blogspot.com/2017/08/total-confusion-on-absolute-mean-global.html
I congratulate you on a good article, and selecting a far too often forgotten subject:
What is the absolute mean global temperature?
A secondary question, ignored just as often, and perhaps a subject for your next article, is:
How can one number represent the ever changing climate on our planet?
Richard ==> Thank you for your kind comment — it is a sign of real intellectual maturity to step out of the all-too-ubiquitous trend of personalization — making everything personal or about a person — rather than discussing ideas, concepts, understandings. This is much appreciated.
Your suggestion that I deal with this question “How can one number represent the ever changing climate on our planet?” is very good — I have touched on the topic several times and refer to it as the Single_Number Fallacy. My recent series on The Laws of Averages covers a lot of this same ground — Part 1, Part 2 and Part 3.
In addition to the question of “Is it possible to reduce [XXXXX problem or phenomenon] to a single meaningful number?” is the parallel question, possibly more important, “What information is lost or hidden when we attempt to do so?”
I will give this issue some more thought and see if there is a potential stand-alone essay on your specific suggestion. Thank you.
Hansen:
I started reading your “Law of Averages” series when they were published, but stopped reading during Part 2, not satisfied with your understanding of economics. (I’ve written a Finance & Economics newsletter since 1977 as a hobby, and have a Finance MBA).
But … I read your Part 3 yesterday, and it turned out to be the best of the three parts, by far.
I had typed a comment on your Averages Part 2 article right after I read it in June, but never posted it:
I decided to leave you alone after giving you so much grief about your obesity article.
I changed my mind today, because my comments on economic data quality / data adjustments might lead you to write a new article on climate data quality / data adjustments, assuming you haven’t already done that.
There are four types of economics “adjustments”:
1) Needed “adjustments” with good explanations,
2) Needed adjustments that are ignored,
3) Unnecessary “adjustments” with no logical explanation, and
4) “Adjustments” made long after the initial data release, hoping no one will notice.
Hansen wrote in Averages Part 2:
“I am not an economist …”
and then proved it.
Your “economics” did not include needed data adjustments in most of the charts
… and there is a better way to compare households.
I wrote an Income Inequality article in my January 2013 economics newsletter
that explained the many data adjustments needed.
I also found a better way to measure “inequality”:
“Spending is easy to measure, and there has been little or no change of Rich vs. Poor spending inequality in the past few decades. Income is hard to measure accurately. The increasing income inequality trend in the past few decades has been greatly exaggerated by “data mining”.”
Four examples of some of the many factors that distort
typical long-term household income analyses
… unless data adjustments are made:
(1) Household size and age has been changing:
Smaller and older.
More single person households = lower household income.
More households with only retired people = lower household income.
(2) 1980’s Adjusted Gross Income (AGI) definition changes:
The upper class shifted where they reported their business income after their personal tax rates suddenly became lower than the corporate tax rates they had been paying. The IRS even warned about that: “(AGI) Data for years 1987 and after are not comparable to pre-1987 data because of major changes in the definition of adjusted gross income.”
(3) People changing income quintiles during their lives:
The Top x% are not the same people / households every year,
especially the Top 1% and Top 5%.
Comparisons usually assume they are.
(4) Middle class income deferred until after retirement:
IRA and 401k retirement savings contributions “hide” middle class income
until withdrawals after retirement.
The maximum contribution limits allow high income households
to “hide” a much smaller percentage of their incomes
than middle-class households.
Richard ==> Interesting — of course, I wasn’t writing about economics — I was writing about the ins-and-outs of using and misusing averages.
Richard ==> On the Obesity Issue — see Time Magazine’s recent take at The Weight Loss Trap — http://time.com/magazine/us/4793878/june-5th-2017-vol-189-no-21-u-s/ — it echoes my points in Modern Scientific Controversies Part 4: The Obesity Epidemic.
TIME magazine is as useful and accurate as Wikipedia = hopeless.
.
Left-wing bias and pro-consensus on every subject
I recall your other favorite “source” on obesity was the left-biased New York Times !
Your points on obesity were wrong, even if there was a 99.9% consensus in your favor,
which of course there was not.
Calorie intake and usage is simple physics that you, or any one else, have not proven wrong.
And you also are as stubborn as a junk yard dog,
although I like that characteristic.
I simply pointed out that the economics used in YOUR Averages Part 2 post was biased
in favor of the consensus that income equality has been increasing at a
fast rate in recent decades.
I showed how easy it is to present data / charts that lead readers to a wrong conclusion
if you don’t thoroughly understand the data in the charts.
The same is true in climate science
Understanding the data, and making adjustments needed for a fair comparison,
would show that the income inequality gap has not changed that much in the past 40 years
— and the claim of a rapidly growing income gap is grossly overstated.
Making needed data adjustments (or ignoring needed adjustments) can completely change the conclusion
from raw data.
In climate science I believe “adjustments” have too often been used to make the temperature actuals more closely match the CO2 controls the climate theory / models.
Richard ==> You seem to miss the point that neither I nor Time Magazine are the original researchers — the source of the findings — whose work we write about — both Time and I give lots of references to the original work, that you can read and decide for yourself whether you find their findings compelling.
The obesity findings of the last decade are as much a surprise to the researchers as they are “unbelievable” to you. That’s what makes it interesting — and why Obesity Science is involved in a Scientific Controversy. There are many, like yourself, who simply refuse to give up their previous beliefs about the causes and cures of obesity — despite the solid science showing that the old view is simply wrong.
The Obesity example is important because it points up the fact that long-standing, widely-held “truths” in a field may be simply a consensus born of field-wide bias — and, when deeply entrenched, very hard to root out when better-designed long-term studies find that the earlier views are wrong.
The same is true in Climate Science — the entrenched consensus is based on bias which produced lots of confirming “evidence” — and any and all contrary findings, instead of being embraced as clarifying new data, are denigrated, condemned, and suppressed.
I am not interested, really, in differing views of economics — my examples were only about differences in data presented to the public resulting from differing misapplications of averages.