Kevin Trenberth Defends the Climate Community “Scientific Method”

Guest essay by Eric Worrall

In the wake of the science committee testimony, Climate Scientist Kevin Trenberth has insisted that Climate Science does follow the scientific method. But Trenberth himself may have strayed outside accepted scientific methodology.

Yes, we can do ‘sound’ climate science even though it’s projecting the future

Nobody can observe events in the future, so to study climate change scientists build detailed models and use powerful supercomputers to simulate conditions, such as the global water vapor levels seen here, and to understand how rising greenhouse gas levels will change Earth’s systems. NCAR/UCAR, CC BY-NC-ND

April 6, 2017 4.01am AEST

Authors

Kevin Trenberth

Distinguished Senior Scientist, National Center for Atmospheric Research

Reto Knutti

Professor, Eidgenössische Technische Hochschule (ETH) Zürich

Increasingly in the current U.S. administration and Congress, questions have been raised about the use of proper scientific methods and accusations have been made about using flawed approaches.

This is especially the case with regard to climate science, as evidenced by the hearing of the House Committee on Science, Space and Technology, chaired by Lamar Smith, on March 29, 2017.

Chairman Smith accused climate scientists of straying “outside the principles of the scientific method.” Smith repeated his oft-stated assertion that scientific method hinges on “reproducibility,” which he defined as “a repeated validation of the results.” He also asserted that the demands of scientific verification altogether preclude long-range prediction, saying, “Alarmist predictions amount to nothing more than wild guesses. The ability to predict far into the future is impossible. Anyone stating what the climate will be in 500 years or even at the end of the century is not credible.”

Why climate scientists use models

The wonderful thing about science is that it is not simply a matter of opinion but that it is based upon evidence and physical principles, often pulled together in some form of “model.”

In the case of climate science, there is a great deal of data because of the millions of daily observations made mostly for the purposes of weather forecasting. Climate scientists assemble all of the observations, including those made from satellites. They often make adjustments to accommodate known deficiencies and discontinuities, such as those arising from shifts in locations of observing stations or changes in instrumentation, and then analyze the data in various ways.

Projections, not predictions

With climate models as tools, we can carry out “what-if” experiments. What if the carbon dioxide in the atmosphere had not increased due to human activities? What if we keep burning fossil fuels and putting more CO2 into the atmosphere? If the climate changes as projected, then what would the impacts be on agriculture and society? If those things happened, then what strategies might there be for coping with the changes?

These are all very legitimate questions for scientists to ask and address. The first set involves the physical climate system. The others involve biological and ecological scientists, and social scientists, and they may involve economists, as happens in a full Intergovernmental Panel on Climate Change (IPCC) assessment. All of this work is published and subject to peer review – that is, evaluation by other scientists in the field.

The question here is whether our models are similar enough in relevant ways to the real world that we can learn from the models and draw conclusions about the real world. The job of scientists is to find out where this is the case and where it isn’t, and to quantify the uncertainties. For that reason, statements about future climate in IPCC always have a likelihood attached, and numbers have uncertainty ranges.

The models are not perfect and involve approximations. But because of their complexity and sophistication, they are so much better than any “back-of-the-envelope” guesses, and the shortcomings and limitations are known.

Read more: https://theconversation.com/yes-we-can-do-sound-climate-science-even-though-its-projecting-the-future-75763
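
To make the “what-if” idea above concrete, here is a minimal toy sketch (a zero-dimensional energy-balance model with assumed round-number parameters, nothing like an actual GCM) run once with and once without an added CO2 forcing:

```python
# A toy zero-dimensional energy-balance "what-if" comparison.
# All parameter values are assumed round numbers for illustration only.

SIGMA = 5.67e-8        # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0            # solar constant, W m^-2
ALBEDO = 0.30          # planetary albedo (assumed)
EMISSIVITY = 0.61      # effective emissivity, tuned to give roughly 288 K (assumed)
HEAT_CAPACITY = 4.0e8  # effective heat capacity, J m^-2 K^-1 (assumed)
SECONDS_PER_YEAR = 365.25 * 24 * 3600.0

def run(years, extra_forcing_wm2):
    """Step a crude global-mean temperature forward one year at a time."""
    temperature = 288.0  # starting temperature, K
    history = []
    for _ in range(years):
        absorbed = S0 / 4.0 * (1.0 - ALBEDO) + extra_forcing_wm2
        emitted = EMISSIVITY * SIGMA * temperature ** 4
        temperature += (absorbed - emitted) * SECONDS_PER_YEAR / HEAT_CAPACITY
        history.append(temperature)
    return history

baseline = run(200, extra_forcing_wm2=0.0)  # "what if CO2 had not increased"
forced = run(200, extra_forcing_wm2=3.7)    # assumed forcing for doubled CO2

print(f"Difference after 200 years: {forced[-1] - baseline[-1]:.2f} K")
```

Real GCMs solve the fluid dynamics of the atmosphere and ocean on a three-dimensional grid; the point of the sketch is only the structure of the comparison, the same model run twice with the forcing as the sole difference.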

Trenberth has a lot of faith in his models – so much so that, a few years ago, he demanded that the “null hypothesis” be reversed. If accepted, this would have meant a reversal of the burden of proof regarding the assumption of human influence on global climate.

“Humans are changing our climate. There is no doubt whatsoever,” said Trenberth. “Questions remain as to the extent of our collective contribution, but it is clear that the effects are not small and have emerged from the noise of natural variability. So why does the science community continue to do attribution studies and assume that humans have no influence as a null hypothesis?”

To show precedent for his position Trenberth cites the 2007 report by the Intergovernmental Panel on Climate Change which states that global warming is “unequivocal”, and is “very likely” due to human activities.

Read more: https://wattsupwiththat.com/2011/11/03/trenberth-null-and-void/

Trenberth’s demands for a reversal of the burden of proof with regard to climate were rejected by the scientific community. Even climate advocate Myles Allen, head of University of Oxford’s Atmospheric, Oceanic and Planetary Physics Department, thought Trenberth’s demands for a reversal of the burden of proof were wrong.

“The proponents of reversing the null hypothesis should be careful of what they wish for,” concluded Judith Curry. “One consequence may be that the scientific focus, and therefore funding, would also reverse to attempting to disprove dangerous anthropogenic climate change, which has been a position of many sceptics.”

“I doubt Trenberth’s suggestion will find much support in the scientific community,” said Professor Myles Allen from Oxford University, “but Curry’s counter proposal to abandon hypothesis tests is worse. We still have plenty of interesting hypotheses to test: did human influence on climate increase the risk of this event at all? Did it increase it by more than a factor of two?”

###

All three papers are free online:

Trenberth, K., “Attribution of climate variations and trends to human influences and natural variability”: http://doi.wiley.com/10.1002/wcc.142

Curry, J., “Nullifying the climate null hypothesis”: http://doi.wiley.com/10.1002/wcc.141

Allen, M., “In defense of the traditional null hypothesis: remarks on the Trenberth and Curry opinion articles”: http://doi.wiley.com/10.1002/wcc.145

Read more: Same link as above

The problem with climate science is that there is no way to test the core prediction, that the Earth will heat substantially in response to anthropogenic CO2 emissions, other than to wait and see.

Important secondary predictions which should be observable by now, such as the tropospheric hotspot or a projected acceleration in sea level rise, have not manifested.

Even more embarrassing, mainstream models cannot even tell us what climate sensitivity to CO2 actually is.

Is equilibrium climate sensitivity a 1.5°C temperature increase per doubling of CO2? Or is it 4.5°C per doubling? The IPCC Fifth Assessment Summary for Policymakers cannot give you that answer.

… The equilibrium climate sensitivity quantifies the response of the climate system to constant radiative forcing on multi-century time scales. It is defined as the change in global mean surface temperature at equilibrium that is caused by a doubling of the atmospheric CO2 concentration. Equilibrium climate sensitivity is likely in the range 1.5°C to 4.5°C (high confidence), extremely unlikely less than 1°C (high confidence), and very unlikely greater than 6°C (medium confidence). The lower temperature limit of the assessed likely range is thus less than the 2°C in the AR4, but the upper limit is the same. This assessment reflects improved understanding, the extended temperature record in the atmosphere and ocean, and new estimates of radiative forcing. {TS TFE.6, Figure 1; Box 12.2} …

Read more: IPCC Fifth Assessment WG1 Summary for Policy Makers (page 14)
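
To see what that range means in practice, here is a minimal sketch assuming the standard approximation that equilibrium warming scales with the base-2 logarithm of the CO2 concentration ratio (the ppm figures are assumed round numbers):

```python
# Equilibrium warming implied by the ends of the "likely" sensitivity range,
# assuming the standard approximation dT = ECS * log2(C_end / C_start).
import math

def equilibrium_warming(ecs_per_doubling, c_start_ppm, c_end_ppm):
    return ecs_per_doubling * math.log2(c_end_ppm / c_start_ppm)

for ecs in (1.5, 4.5):               # low and high ends of the likely range, C per doubling
    for c_end in (410.0, 560.0):     # roughly today's level, and doubled pre-industrial
        dt = equilibrium_warming(ecs, 280.0, c_end)
        print(f"ECS {ecs:.1f} C: 280 -> {c_end:.0f} ppm gives {dt:.2f} C at equilibrium")
```

The same concentration change yields answers a factor of three apart, which is the spread at issue here.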

Why is this range of possible climate sensitivities embarrassing? Consider the Charney Report from 1979:

… We believe, therefore, that the equilibrium surface global warming due to doubled CO2 will be in the range 1.5°C to 4.5°C, with the most probable value near 3°C …

Read more: http://www.ecd.bnl.gov/steve/charney_report1979.pdf (page 16)

As theories are refined, key physical quantities should be resolved with greater accuracy. For example, the first measurement of the speed of light, made in 1676, was about 26% off – a remarkable estimate for that period of history, but still wide of the mark. Further research, with better quality measurements and calculations, resolved the original uncertainty; the speed of light is now known to a high degree of accuracy.
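
As a quick check of that figure, assuming the commonly quoted value of roughly 220,000 km/s derived from Rømer’s 1676 timings of Io’s eclipses:

```python
# Relative error of the 1676 estimate against the modern value.
ROEMER_ESTIMATE_KM_S = 220_000.0   # commonly quoted historical figure (assumed)
MODERN_VALUE_KM_S = 299_792.458    # modern defined value

error = (MODERN_VALUE_KM_S - ROEMER_ESTIMATE_KM_S) / MODERN_VALUE_KM_S
print(f"Relative error: {error:.1%}")   # about 26.6 percent
```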

This failure of climate science to follow the normal scientific progression to more accurate estimates should be a serious concern. This lack of convergence on a central climate sensitivity estimate, after decades of research effort, strongly suggests something is missing from the climate models.

Whatever the missing or mishandled factor is, it has a big influence on global climate. The evidence for this is the embarrassingly broad range of estimates for climate sensitivity to a doubling of CO2, and the failure of those estimates to converge.

If climate models were capable of producing accurate predictions, if they showed any sign of converging on a reasonable climate sensitivity estimate, if predicted secondary phenomena such as the tropospheric hotspot and sea level rise acceleration were readily observable, there would be a lot less resistance to Trenberth’s apparent demand that climate model projections be accepted as somehow equivalent to empirical observations.

It should be obvious to anyone that there are far too many loose ends to come anywhere close to such acceptance.

According to the Oxford English Dictionary, the scientific method is “a method of procedure that has characterized natural science since the 17th century, consisting in systematic observation, measurement, and experiment, and the formulation, testing, and modification of hypotheses.”

Any suggestion that model projections should be accepted as a substitute for systematic observation and experiment, any suggestion that model output from models which have failed several key tests can be relied upon, any suggestion that defective model output constitutes proof of human influence on global climate, in my opinion utterly violates any reasonable understanding of what the scientific method should be.

463 Comments
rocketscientist
April 12, 2017 3:21 pm

If the climate community indeed followed the scientific method then we merely have to wait 100 years or so to determine if their hypothesis is correct.

george e. smith
Reply to  rocketscientist
April 12, 2017 3:36 pm

And our climate models have the benefit of consistency.

Although we have a total of 57 / 97 of these models, they all consistently fail to predict or even to project what the experimentalists have already observed has happened.

So we ARE following the “scientific method”. In the trade we call it the “shotgun approach”. Excuse me I meant the Shogun approach, as in ” I’m the boss, so we do it MY way ! ”

g

Pop Piasa
Reply to  george e. smith
April 12, 2017 6:31 pm

George, I just wonder what the quals are for “Distinguished Senior Scientist”.

Hivemind
Reply to  george e. smith
April 12, 2017 11:07 pm

It’s basically the same as in any religion. Eg, Reverend, Very Reverend, Most Reverend. No, I didn’t just make that up.

Kalifornia Kook
Reply to  george e. smith
April 13, 2017 6:01 am

Damn, Hivemind, for a second there I thought I had learned something!

george e. smith
Reply to  george e. smith
April 13, 2017 11:21 am

Well Pop, I don’t quite know what either the quals or prereqs are.

I do know that for the millennium year, my AM saw fit to hang the D word on my shingle. The 5th one given to that point. The S word was already there from my graduating year; over a half Century ago.

In that instance, it was peers who made the recommendation, so I feel gratified that my efforts made some sort of contributions to the embetterment of us all.

But I would happily transfer that gong to a lifelong friend, who is far more deserving than I.

G

whiten
Reply to  george e. smith
April 13, 2017 11:52 am

george e. smith
April 12, 2017 at 3:36 pm

george, I do not mean to upset you, but Trenberth is a very clever guy.

For example:
—-
“And our climate models have the benefit of consistency.”
—-

is technically just as correct as:

“Although we have a total of 57 / 97 of these models, they all consistently fail to predict or even to project what the experimentalists have already observed has happened.”
——

So, practically, he and you could be equally wrong or equally right; it is your choice which to pick.

But while at it, consider a matter that may carry weight: terminology. You ignore the difference between projections and predictions in the context of GCMs, and Trenberth doggedly uses the term “climate models” as equivalent to the experiment.

“Climate models” is a very loose term for the contemplated experiment in question. Climate models running on laptops or iPhones or tablets or iPads or whatever do not constitute “the experiment”. The only ones that do, and that are actually used as a basis for climate policy, are the GCMs, the beasts that literally are not, and cannot be, climate models.

While projections may be considered a kind of prediction, or may even lead to some predictions, in the context of GCMs and climate the GCMs cannot predict the climate and its change, yet at the same time they can “safely” be regarded as having considerable value as projections: telling us, for example, whether the lag of ppm behind temperature still shows up in the simulation, and what value the maximum thermal expansion of the atmosphere takes in relation to a given ppm increase, but not actually giving the actual pattern of climatic variation.

Now, just for the sake of this argument: if we contemplate that Mann, at his Congressional testimony, has just “thrown himself under the bus”, then Trenberth, who has not testified anywhere, is trying to give the last push and kick to another one who has already testified before a UK parliamentary committee and explained, justified and ended up strongly supporting why GCMs are not, and should not be, allowed to go for a “dry” run, aka a proper validation.
So according to that UK parliamentary testimony, GCMs never were, at least not officially, allowed to do a dry run, aka a scenario with no artificial CO2 emissions.

So if Trenberth addresses the experiment in the context of artificial human CO2 emissions, he is simply taking the position of what I might call an “honest and sincere ‘prostitute’” in all this.

According to Trenberth:

“With climate models as tools, we can carry out “what-if” experiments. What if the carbon dioxide in the atmosphere had not increased due to human activities”

And as far as that goes, according to a climatologist modeller testifying before a UK parliamentary committee, that is simply not true, unless the “prostitution” of terminology is taken into account, and very seriously at that.

The actual experiment that carries any weight is not any climate computer model, but the GCMs, which are not climate models, even in principle of the naming.

Anyway, this may all seem a bit confusing, but let me tell you what sometimes confuses me: the inability of the “sceptics” here to realise that the real villains are not the GCMs or the experiment, but the ones with authority over the experiment, the “masters” in charge of the GCMs, who literally pervert and confuse the meaning of the “experiment” to serve their own interests or those of the cabal they belong to.

GCMs very much contradict the AGW case and the zealot cabal of the AGWers. No matter how strange that may seem, to me it is a very obvious truth.
GCMs and their simulations, the results and most of what surrounds that experiment, are the best ally against the AGW madness.

Looked at closely, the GCMs may prove that the testimony of that “modeller” climatologist before the UK parliamentary committee was a plain deception of his own peers, and perjury, since the testimony was given in support of future national policy in the context of climate science, AGW and GCMs, according to his expertise, when it can be shown that the guy knew the truth to be otherwise than claimed.
The reason given in that testimony for blocking the proper validation of GCMs by a dry run, aka a scenario with no artificial CO2 emissions, was a pure, intentional deception.

So my main point is: it would be totally unwise to follow the line of further and further pollution in the contemplation of the “experiment”, especially when a lot more of such further polluting will be offered and provoked by the very villains who have “overlorded” the experiment up to now, aka the GCMs.

Again, the national and global climate policies are actually driven and supported by the official orthodox AGW “science” and the GCMs, and are very much consolidated by the numbers and projections given by such GCM simulations, and by a lot of intentional misinterpretation and abuse of those numbers and projections.

Other climate models per se, whether laptop ones or tablet ones or whatever, have only one meaning and purpose: that of very shiny, deceptive and confusing toys, employed in such cases only to divert attention and increase confusion, in support of a deceiver trying to reach his desired con.

Maybe this is already too long for now, and please forgive me for any harshness there may be on my part; I am only speaking my mind freely.

cheers

skorrent1
Reply to  george e. smith
April 13, 2017 11:52 am

Dozens of models, hundreds of runs. One or two of those runs actually come close to matching observations so far. Why haven’t these “scientists” focused on these runs/models/assumptions instead of giving equal weight to runs that are so wildly inaccurate? An “average of runs” is just as useless as an “annual global average mean temperature”.
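
(A toy illustration of that point, with made-up numbers rather than real model output: the ensemble mean can sit far from observations even though a handful of individual runs happen to land near them.)

```python
# Made-up "model run" trends versus an assumed observed trend, to illustrate
# why an ensemble mean says little about any individual run.
import numpy as np

rng = np.random.default_rng(0)
observed_trend = 0.12                          # assumed observed trend, C per decade
run_trends = rng.normal(0.25, 0.10, size=100)  # assumed spread of 100 model-run trends

close = int(np.sum(np.abs(run_trends - observed_trend) < 0.02))
print(f"Ensemble mean trend: {run_trends.mean():.2f} C/decade")
print(f"Runs within 0.02 C/decade of the observed trend: {close} of {len(run_trends)}")
```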

george e. smith
Reply to  george e. smith
April 13, 2017 1:09 pm

So whiten !

you far two slo.

g is NOT G

gesundtheit !

mike
Reply to  rocketscientist
April 12, 2017 6:45 pm

Demonstration of a cooling period would show how wrong their predictions have been. “Hide the decline” II. Waiting for 2020 toward achieving 20/20.

MRW
Reply to  rocketscientist
April 12, 2017 6:52 pm

If the climate community indeed followed the scientific method then we merely have to wait 100 years or so to determine if their hypothesis is correct.

As Lord Monckton has pointed out countless times before, it was Ibn al-Haytham, the 11th century Islamic scholar, who invented the scientific method. He defined it. He demanded mathematical rigor, and the agreement of experiment and observation with hypothesis, or the hypothesis was worthless.
http://harvardmagazine.com/2003/09/ibn-al-haytham-html
http://www.1001inventions.com/ibnalhaytham (popular)
http://www.realclearscience.com/blog/2014/03/the_muslim_scientist_who_birthed_the_scientific_method.html (popular)
http://lostislamichistory.com/ibn-al-haytham-the-first-scientist/ (popular).

Trenberth trifles with that rigor.

Duster
Reply to  MRW
April 12, 2017 7:34 pm

One of the sad facts of human society is that over half the population persists in preferring Bronze Age myth to working at learning something. Even worse is that many of those who DO work at learning regard their reasoning and efforts as sacrosanct and not to be doubted or otherwise interfered with. The Middle East, the Middle Ages, the Spanish Inquisition, witch burnings, book burnings, and Holocausts are the outcome of certainty.

RACookPE1978
Editor
Reply to  Duster
April 13, 2017 1:23 am

Duster

Even worse is that many of those who DO work at learning regard their reasoning and efforts as sacrosanct and not to be doubted or otherwise interfered with. The Middle East, the Middle Ages, the Spanish Inquisition, witch burnings, book burnings, and Holocausts are the outcome of certainty.

Add to that list of “certainty”: those who believe in the climastrology of CAGW, and all the death, ills to society and loss of freedom that their chosen response to their religion of climastrology requires, are more beholden to their faith than any believers in the “bronze age myth” you so clearly despise.

JohnKnight
Reply to  MRW
April 12, 2017 8:40 pm

Duster,

“One of the sad facts of human society is that over half the population persists in preferring Bronze Age myth to working at learning something.”

What bronze age myth are you referring to? (and how can you be sure? ; )

Reply to  MRW
April 12, 2017 9:48 pm

Duster, +1.

John Knight, perhaps Duster was speaking metaphorically. Perhaps he meant to say that projecting lower brain stem superstition onto statistical noise flat earth style is not science.

Every mathematical equation is a model. There is nothing fundamentally wrong with models, you just have to get them right.

Had email conversations with Kevin. I respect some of his early work. He is correct that models can be useful. He is blinded to the failure of the current generation of models by true belief.

Samuel C Cogar
Reply to  MRW
April 13, 2017 5:19 am

What bronze age myth are you referring to?

JohnKnight,

Me thinks Duster was referring to all of the dozens n’ dozens of different Gods, Goddesses and other Sky Pixies with unlimited “magical” powers ….. that humans have been conjuring up ever since their “hunter-gatherer days” to believe in and worship due to their being afraid of living …… and their fear of dying.

MarkW
Reply to  MRW
April 13, 2017 6:18 am

It really is sad the way people like Duster can only find justification in their lives by ridiculing the beliefs of others, while pretending that their own beliefs are unquestionable.

MarkW
Reply to  MRW
April 13, 2017 6:19 am

Samuel, since you are so convinced that they are myths, prove it.

Gloateus
Reply to  MRW
April 13, 2017 10:57 am

Mark,

It’s easy to show that various creation myths never actually happened. Same goes for the worldwide flood myth from Bronze Age Mesopotamia. To mention but a few.

Bryan A
Reply to  MRW
April 13, 2017 12:17 pm

Careful Gloateus,
Are you proposing to prove the Myth(s) is(are) false or only that the interpretation of the anecdote is incorrect?
2 completely different cans of worms

JohnKnight
Reply to  MRW
April 13, 2017 2:14 pm

Samuel C Cogar,

Due to their being afraid to live? . . You actually believe that? . . Doesn’t that seem kinda arbitrary, convenient, and self-serving as an explanation for why essentially all people that managed to form a complex society which left any significant trace apparently believed in supernatural entities?

Not even a hint of skepticism crops up, eh? You’re ready to start mocking and belittling people based on that level of explanation? . . Well, I suppose one could imagine this explains why atheists never managed to make any sort of stable civilization . . ; )

JohnKnight
Reply to  MRW
April 13, 2017 2:21 pm

PS~ And why they murdered about a quarter of a billion people in just the last century . . before their recent attempts crumbled into smoldering ruins . .

MarkW
Reply to  MRW
April 13, 2017 2:26 pm

Disproving one of the stories does not disprove the entire edifice. Especially since few of the believers hold that the stories are to be taken literally.

Gloateus
Reply to  MRW
April 13, 2017 5:17 pm

Bryan A April 13, 2017 at 12:17 pm

Neither Mark nor Duster specifies which myth he or they have in mind.

But IMO there are no creation or worldwide flood myths which in any way conform to objective reality, as demonstrated by science. I could be wrong, but I’ve studied comparative religion of all the major belief systems of which anthropology has knowledge.

Please suggest one which you think can be shown in some sense “true”. All with which I am familiar are easily shown objectively false, whatever their metaphorical value might be.

Five basic types of creation myth have been recognized. None bears any resemblance to reality. The first creation myth in Genesis, ie the Six Day story, has been classified as of the ex nihilo type, but that’s not strictly true. Neither is the second, totally contradictory myth in Genesis, ie the Adam and Eve story. But both are closer to the “out of nothing” type than to the other four types.

A modern person must be deranged to imagine that any such ancient or tribal myths be literally true. Yet there are people who try to defend such a preposterous, anti-factual position.

JohnKnight
Reply to  MRW
April 13, 2017 8:30 pm

Gloateus,

“But IMO there is no creation or worldwide flood myths which in any way conforms to objective reality, as demonstrated by science.”

Demonstrated? How the hell could that be? All that can mean is speculated, ’cause we can’t (last I heard ; ) travel back through time . . Could be some fine speculating, and be the truth, but it can never be demonstrated (without time travel).

Science is actually a rather limited form of philosophy, even though lately it often masquerades as a magician or conjurer of some sort, as I see this stuff. And that’s why many bite on something like the CAGW so easily, it seems to me; people don’t realize how limited observational science really is, and believe that scientists are virtual gods, that do things like see into the past . .

JohnKnight
Reply to  MRW
April 13, 2017 9:04 pm

Oh, PS,

“Please suggest one which you think can be shown in some sense “true” ”

Shown is not possible . . without time travel (we only see in the present moment). But, any creation story that involves an all powerful Creator can, by definition, be true, in the logical sense, if given the truth of the Creator. Just as we can create whatever sort of artificial/synthetic world we wish in a computer projection . . Things can be placed somewhere in the synthetic space, for instance, without having any history at all within the space.

Why people insist on limiting a hypothetical Creator God to only long drawn out creative processes, I have no idea . . we aren’t, and can make an old house in our created world, that was never new . . We could make light in the sky of our created world, from stars that don’t exist . . etc, etc…Yet many act like it’s ridiculous to think a God could do what we can do . . now, in our humble little creations. Weird, from a logical standpoint . .

Samuel C Cogar
Reply to  MRW
April 14, 2017 5:41 am

MarkW – April 13, 2017 at 6:19 am

Samuel, since you are so convinced that they are myths. Prove it.

MarkW, before I can possibly provide you with any said “proofs” of your Religious beliefs being myths …… or the Religious beliefs that you were referring to being myths, ….. it will be necessary for you to first tell me, ……. which one (1) of the several hundred different religions and/or spiritual traditions that are listed on the following cited website, …… that you are an avid believer and worshipper in/of?

List of religions and spiritual traditions
https://en.wikipedia.org/wiki/List_of_religions_and_spiritual_traditions

So, MarkW, …… tell me which one (1) of the “listed” Religions that you chose …… and then tell me why you believe it’s the “true Religion of a Godly Creation” and all the other hundreds are simply false beliefs in/of a false God.

MarkW
Reply to  MRW
April 14, 2017 7:26 am

Samuel, are you honestly going to argue that the fact that there is more than one creation myth proves that they are all false?
Should we add big bang to the myths that are automatically invalidated by this method?

Samuel C Cogar
Reply to  MRW
April 14, 2017 8:44 am

JohnKnight – April 13, 2017 at 2:14 pm

Samuel C Cogar,

Due to their being afraid to live? . . You actually believe that? . .

JohnKnight, your ignorance in/of Religious and/or Biblical history is quite utterly amazing.

Utterly amazing given the fact that you are an avid supporter in/of a Religion without having “a clue” about its origin.

Well now, here is your 1st clue, John K, …… every “group Religion” that is based in/on a “Creator God”, is in actuality, ……. based in/on “fear and ignorance”.

And also saidith: JohnKnight

Doesn’t seem kinda arbitrary, convenient, and self serving as an explanation for why essentially all people that managed to form a complex society which left any significant trace, apparently believed in supernatural entities?

Again, ….. your blatant ignorance of world histories is not a reasonable excuse for your silly commentary. So, get another clue, John K, ….. the 500 years of The Dark Ages in western Europe ….. was the direct result of the Church of Rome and its “ruling class” of popes, priests, bishops, etc. forcing the populace to believe in and worship a “supernatural entity”, ……. The God of the Bible.

Samuel C Cogar
Reply to  MRW
April 14, 2017 9:16 am

MarkW – April 14, 2017 at 7:26 am

Samuel, are you honestly going to argue that the fact that there is more than one creation myth proves that they are all false?

Well now, MarkW, ……. “silly is as silly does”, … and it don’t get any sillier than your above stated asinine comment.

It matters not a twit ….. iffen its one (1) creation “myth” or one thousand (1,000) creation “myths”, …….. a “myth” is a “myth” ….. and all “myths” are false.
To wit: http://www.dictionary.com/browse/myth

drednicolson
Reply to  MRW
April 14, 2017 9:58 am

Incorrect. The Dark Ages were the aftereffect of the fall of the Roman Empire and the fallout thereof. For much of western Europe in the 5th century AD, society as they had known it went belly-up. Not unlike the currently popular concept of the zombie apocalypse, just replace fictional zombies with real barbarians. Something like that doesn’t get fixed in a week, you know. The Catholic Church was actually integral in preserving much of the legacy of classical civilization, along with the Byzantines and Muslims. (Web search for “How the Irish Saved Civilization” to see but one example, as if you would bother.)

Seriously, you sound like you get your history of world religion straight from Richard Dawkins.

MarkW
Reply to  MRW
April 14, 2017 1:18 pm

Samuel, I love it when ignorant atheists proceed to tell Christians what it is we believe, and then proceed to get everything wrong.
When you get over your hatred perhaps you should spend a few years educating yourself.
Though I doubt you ever will. Your need to feel superior to someone is just too great for you to risk it.

JohnKnight
Reply to  MRW
April 14, 2017 7:05 pm

“Well now, here is your 1st clue, John K, …… every “group Religion” that is based in/on a “Creator God”, is in actuality, ……. based in/on “fear and ignorance”. ”

Oh . .

From the Wiki;

“Overall, Christians have won a total of 78.3% of all the Nobel Prizes in Peace, 72.5% in Chemistry, 65.3% in Physics, 62% in Medicine….”

Geez, I guess if they weren’t so frightened and ignorant, they might have won them all, eh?

Samuel C Cogar
Reply to  MRW
April 15, 2017 5:59 am

drednicolson – April 14, 2017 at 9:58 am

Incorrect. The Dark Ages were the aftereffect of the fall of the Roman Empire and the fallout thereof.

Shur nuff, …… drednicolson, …… and I suppose you will next be telling me that …….. “the WPA, Income Taxes and Social Security was the aftereffect of the Great Depression of the 1930’s and the recovery of the US economy and WWII”, Brilliant thinking, Dred, brilliant thinking.

Of course The Dark Ages were the aftereffect of the fall of the Roman Empire …… simply because Emperor Constantine had mandated Christianity as the “official state religion” ……. and during the next 600+- years following the fall of the Roman Empire the Church of Rome took control over all facets of the lives of the people in western Europe and their socio-economic status “went to hell in a handbasket” ….. and remained there until the Renaissance.

For much of western Europe in the 5th century AD, society as they had known it went belly-up.

Well “DUH”, of course it started going belly-up in the 5th Century AD …. simply because the Church of Rome wanted the people to remain “barefoot, hungry and ignorant” thus insuring they were 100% loyal to the demands of the Pope. And the worst years of the Dark Ages for the people of western Europe was between 900 and 1600 AD. And ps, during that time it was an automatic “death sentence” if a peasant was caught with a copy of the Bible, even if he/she was incapable of reading it.

The Catholic Church was actually integral in preserving much of the legacy of classical civilization, along with the Byzantines and Muslims.

“DUH” it was not known as the Catholic Church until after the Protestant Reformation in the 16th-century ……. when the protestors (Protestants) got fed-up with the Church’s dictatorship, graft, corruption, etc., that kept the people poor, hungry and destitute.

And the Church of Rome was complicit in destroying more of the legacy of classical civilization than you can possibly give it credit for preserving. As a matter of historical fact, it was responsible for destroying the last remaining portion of the Library of Alexandria.

Even the modern Catholic Church has kept the “contents” of the Dead Sea Scrolls “under wraps” for the past 40+ years.

Seriously, you sound like you get your history of world religion straight from Richard Dawkins.

Well now, iffen I were as Learning Disabled as you are concerning Biblical Histories, …… then I would surely just keep my thoughts to myself so that others wouldn’t realize that I was “dumb as a box of rocks” on/about that specific subject.

JohnKnight
Reply to  MRW
April 15, 2017 5:06 pm

Samuel,

“Of course The Dark Ages were the aftereffect of the fall of the Roman Empire …… simply because Emperor Constantine had mandated Christianity as the “official state religion” ……”

That wasn’t till well into the fourth century . . things were not going well long before that . .

” Growing social divisions

The new supreme rulers disposed of the legal fiction of the early Empire (seeing the emperor as but the first among equals); emperors from Aurelian (reigned 270–275) onwards openly styled themselves as dominus et deus, “lord and god”, titles appropriate for a master-slave relationship.[24] An elaborate court ceremonial developed, and obsequious flattery became the order of the day. Under Diocletian, the flow of direct requests to the emperor rapidly reduced and soon ceased altogether. No other form of direct access replaced them, and the emperor received only information filtered through his courtiers.[25]

Official cruelty, supporting extortion and corruption, may also have become more commonplace.[26] While the scale, complexity, and violence of government were unmatched,[27] the emperors lost control over their whole realm insofar as that control came increasingly to be wielded by anyone who paid for it.[28] Meanwhile, the richest senatorial families, immune from most taxation, engrossed more and more of the available wealth and income,[29] while also becoming divorced from any tradition of military excellence.[30] One scholar identifies a great increase in the purchasing power of gold, two and a half fold from 274 to the later fourth century, which may be an index of growing economic inequality between a gold-rich elite and a cash-poor peasantry.[31] ”

( https://en.wikipedia.org/wiki/Fall_of_the_Western_Roman_Empire )

It seems to me that you have a sort of fanciful view of what Christianity is, or should be if the Book is Legit . . including the notion that people are somehow prohibited by God from being led astray or behaving badly, or flat out lying about being Believers, and pretty much the opposite is what is spoken of in the Book.

If you bring the idea that a sort of heaven on earth not breaking out, once Christianity was no longer persecuted, or became “main-stream” in the Roman world, somehow demonstrates it’s evil or pointless or whatever, there’s likely to be the sort of narrow view you seem to have about what played out, it seems to me. Kind of an SJW style reaction, wherein some idyllic fantasy world inhabited by (saintly) victims of the Church’s oppression was stolen from humanity. And you can see it right there, plain as day . . in your imagination . .

Gloateus
Reply to  MRW
April 15, 2017 5:34 pm

JohnKnight April 13, 2017 at 8:30 pm
Of course you don’t have to travel back in time. All you have to do is look at the crust of the earth. There is zero evidence of a global flood some 6500 years ago, and all the evidence in the world against it. The history of life on earth shows that both creation myths in Genesis are as wrong as wrong can be, if you make the ridiculous attempt to imagine them as literally true.
Believing, against all the evidence, that the biblical creation and flood myths are literally true is blasphemy of the lowest order. The sin of bibliolatry is the same as worshiping idols. The Word of God isn’t His but that of men trying to imagine Him. The Work of God, OTOH, the observed universe, is real. Where the alleged Word differs from the observed Work, the latter must always rule.
Otherwise you allege that God is deceitful, deceptive, cruel and incompetent.

Gloateus
Reply to  MRW
April 15, 2017 6:08 pm

I wonder why creationists can post anti-scientific fantasies here without moderation, while my comments on science are moderated.

I guess the CACA spewers are correct that skeptics also d@ny the fact of evolution.

Color me gone from this creationist site.

JohnKnight
Reply to  MRW
April 15, 2017 6:54 pm

“Fact of evolution” just means things gradually change, Glutenous, not that billions of molecules self organize in a mud puddle, and a cell (complete with reproductive capabilities) comes into existence.
I really don’t care about your well crafted double-talk, you’re not going to show us that anything remotely like that has been observed, which means it cannot be a scientific fact, period. It’s that clown-ass authority worshiping BS which swung open the door for the CAGW scam, I’m rather sure, and the reason the scammers thought they could just bully through based on “The experts in the field say so” type Siants . . (sounds like science ; )

Samuel C Cogar
Reply to  MRW
April 16, 2017 6:06 am

JohnKnight – April 15, 2017 at 5:06 pm

It seems to me that you have a sort of fanciful view of what Christianity is, or should be if the Book is Legit . . including the notion that people are somehow prohibited by God from being led astray or behaving badly, or flat out lying about being Believers, and pretty much the opposite is what is spoken of in the Book.

John. ….. John. ….. John. ….. iffen the Book you speak of in your above comment is in reference to the Christian Bible, …… then “Yes”, ….. it is the legitimate source document that defines the Christian orthodoxy for all Churches ……. but there is very little of said Bible’s contents or context that can be defined or declared as being “legitimate”.

In fact, the Bible’s contents/context is almost as “legitimate” as the contents/context of a Stephen King novel.

John K., me thinks you need to educate yourself as to “who, when & where” …. the very 1st Christian Bible (or Book) was composed, edited and published ….. and the best place to start your education is read and study about the convening of the First Council of Nicaea in 325 AD ….. by “clicking” this url link, to wit: https://en.wikipedia.org/wiki/First_Council_of_Nicaea

And given the fact that today is “Easter Sunday 2017” ….. you might be surprised to learn the “why & how” that the First Council of Nicaea was mandated to decide which Sunday during the Spring season would be declared Easter Sunday.

And John K, ….. give it up, …… I got more Biblical history “answers” than you got questions.

Samuel C Cogar
Reply to  MRW
April 16, 2017 7:18 am

JohnKnight – April 15, 2017 at 6:54 pm

“Fact of evolution” just means things gradually change, —— not that billions of molecules self organize in a mud puddle,

John. ….. John. ….. John. ….. me thinks you are employing your “street corner thuggery” here on this Forum by “shouting” your silly comments at any passer-by you encounter while slouching on your favorite “street corner”.

Your ignorance of the natural world around you is utterly amazing and actually verges on being asinine and idiotic. So “GETTA CLUE”, …… John, …. your above stated “Fact of evolution” relative to your following comment about “billions of molecules not being capable of self-organizing” ….. is FUBAR.

I really don’t care about your well crafted double-talk, you’re not going show us that anything remotely like that has been observed, which means it cannot be a scientific fact, period.

John, you probably won’t look, ….. but iffen you did, I could show you TRILLIONS of actual, factual examples whereby …… “billions of molecules kinda quickly self-organize within the confines of a cocoon” ……. each and every year via a process called metamorphosis ……. where by creepy, crawly, ugly caterpillars “morph” themselves into being beautifully decorated butterflies and moths that can fly through the air with the greatest of ease. And there are dozens of other species of insects that undergo said “metamorphosis”, some requiring the use of a cocoon and some not, ….. with the most pesky and dangerous one being the mosquito.

T’is a shame that “street corner thugs” can not engage in intelligent conversations.

Chimp
Reply to  MRW
April 16, 2017 3:30 pm

JohnKnight April 15, 2017 at 6:54 pm

As creationists frequently do, you’re confusing the fact of evolution with the process of abiogenesis. Abiogenesis is how life got going. Evolution happens after there are reproducing organisms.

You also make the common mistake of imagining that all evolution is gradual. It isn’t. In fact more species have evolved in a single generation than gradually. The evolution of higher classifications such as families, orders and classes however is generally gradual.

Evolution is a scientific fact because it’s an observation, seen over and over again in the lab and in nature. Many speciation events observed in the wild can be recreated in the lab.

MarkW April 13, 2017 at 2:26 pm

It seems to me that the burden of proof is on those who maintain that there are some creation myths which reflect reality. I don’t know of any. But if you do, I’d be all ears to learn about them.

If by the “whole edifice” you mean the Bible, then, yes, there are parts of both testaments that have some historical and archaeological support. But the mythical and legendary stories from before the quasi-historical record begins c. 800 BC have little to no such support. That’s quite apart from whatever metaphorical, allegorical or symbolic meaning and value the mythical and legendary stories have.

Chimp
Reply to  MRW
April 16, 2017 3:43 pm

Samuel C Cogar April 16, 2017 at 6:06 am
Easter is as good a day as any to point out that nowhere in the Bible does it claim to be the word of God or in any way scientifically valid, which it obviously is not. There is a passage attributed to Paul, but clearly a forgery, yet accepted into the NT canon, which says what the Bible is good for.
2 Timothy 3:16 reads, “All scripture which is inspired by God is useful for teaching, rebuking, correcting and training in righteousness”. No mention of science, which in 1st Century koine was the study of nature, ie “φύση” (“phuse”), hence “physics” (upsilon is badly transliterated as “y”). By “scripture” the passage means the OT as it was known to Greek-speaking Christians, ie the Septuagint (“Apostles’ Bible”), which differs markedly from later translations. The NT didn’t yet exist at the time 2 Timothy was faked.
The point of Christianity is to walk in the paths of righteousness, not to worship a book for which there is no universally accepted version. There are of course precious few real Christians. Who actually loves his neighbor as himself or his family?

Samuel C Cogar
Reply to  MRW
April 17, 2017 4:45 am

Chimp – April 16, 2017 at 3:43 pm

Samuel C Cogar April 16, 2017 at 6:06 am
Easter is as good a day as any to point out that nowhere in the Bible does it claim to be the word of God or in any way scientifically valid, which it obviously is not.

Chimp, I thank you for responding ……. and you are correct about Easter …… and Christmas should also be included.

The reason I specifically noted Easter Sunday was for John Knight’s benefit …. and the fact that when the Biblical patriarchs decided to create a “new” Church holiday by choosing a specific day of a specific month, so that all Christian churches would know exactly when they were mandated to celebrate or worship the Resurrection or Easter Sunday, …… it turned out to be a “big mistake” because their calendar was FUBAR and they didn’t have a clue why.

So, to solve their problem, said Biblical patriarchs changed their mandate for when Easter was to be celebrated ……. and stipulated it be held on …….. “the 1st Sunday ….. after the 1st Full Moon …. after the Spring Equinox”.

Chimp
Reply to  MRW
April 17, 2017 10:13 am

The method for calculating the date of Easter might be the first known algorithm.

Bede says that Easter comes from the pagan Anglo-Saxon goddess after whom the month corresponding to April was named. The name for Easter used in most other languages is closer to Pascha, from the Aramaic via Greek, for the Jewish holiday called Passover in English.

Terry Oldberg
Reply to  rocketscientist
April 12, 2017 8:46 pm

rocketscientist:
Strictly speaking, a climate model does not make a “hypothesis,” for it does not make “predictions.” Instead, this model makes “projections.” Unlike predictions, projections are not falsifiable. For a climatologist with backing from the political establishment, to avoid making predictions has the beauty of providing him/her with job security regardless of performance!

Steve Jones
Reply to  Terry Oldberg
April 13, 2017 1:14 am

“Holocausts are the outcome of certainty”…
[snip]

[And we shall stop that line of argument and discussion at this point. .mod]

MarkW
Reply to  Terry Oldberg
April 13, 2017 6:20 am

Mod, if you aren’t going to allow refutation, could at least snip that nonsense completely?

Brook HURD
Reply to  Terry Oldberg
April 13, 2017 10:54 am

Rocketscientist,

That pretty much defines most public sector employees.

Bryan A
Reply to  Terry Oldberg
April 13, 2017 12:18 pm

MarkW

April 13, 2017 at 6:20 am

Mod, if you aren’t going to allow refutation, could at least snip that nonsense completely?

I second that emotion

AP
Reply to  rocketscientist
April 13, 2017 3:47 am

Economists use similar models, to model systems of similar complexity, but do not claim their models are “science”. Why do climate “scientists” reckon they’re special?

Reply to  AP
April 14, 2017 12:36 pm

Because climate science is hijacked by politics. Models were convenient for the extremist left-greens because GCMs allowed them to project a very wide range of changes. The last review of climate models I was given a link to, by an alarmist, projected, “with 95% probability”, a CO2 climate sensitivity from 1C to 6.4C. They crave long tail projections (near 6.4C) because it lets them say humanity is despoiling the planet so we must rein people in. We must stop people using energy. [ This is, essentially, Monckton’s recent argument. ]

son of mulder
Reply to  rocketscientist
April 13, 2017 8:54 am

We have 100 years of CO2 and temperature data, else how could we possibly say that temperature has risen by 0.6 deg C based on a rise in CO2? I bet if they ran the models from 1917 or earlier then their projection/prediction for 2017 would be way off. So what’s the point of waiting another 100 years?

Reply to  son of mulder
April 13, 2017 2:37 pm

Having recently re-read Anthony’s Surface Stations paper, I reject your assertion that “We have 100 years of CO2 and temperature data”.

This crappy temperature fairy tale should make a mockery of any ‘climate scientist’ who claims to be able to make projections/predictions of any length of time.

chris moffatt
Reply to  rocketscientist
April 13, 2017 2:07 pm

Indeed – the past forty years not counting because their bad data doesn’t agree with the models. It is no mystery why the models don’t produce correct results even when fed correct data, if such a thing is itself possible. They are only computer programs after all; anyone who knows anything about computer programs knows that if there is a single incorrect parameter or value, a single incorrect algorithm, a single omitted process or value, or a single error of precision, then the results must be incorrect. The models have all these flaws; they can never be correct as currently constituted. Irritatingly, even if a model gave a correct result we’d never know if it was for the correct reason or a mere fortuitous agglomeration of errors. Models such as these are a useful research tool, nothing more, and relying on their output as being correct or even credible is not employing the “scientific method”.

gnomish
April 12, 2017 3:25 pm

ha! now it’s time for shifting the skeptical narrative in this good-scientist/bad-scientist dialectic.
the scientific method! now with green bleaching crystals!
dishonesty is just an alternate form of science, right?

Pop Piasa
Reply to  gnomish
April 12, 2017 6:47 pm

Do you mean sciencey stuff like the “Ajax white knight”, or “New Fab With (Lemon) Borax”? Don’t forget Calgon, the “Ancient Chinese Secret”.

Sheri
Reply to  gnomish
April 13, 2017 10:54 am

dishonesty is just an alternate form of science, right?

Yes, just like your name is representative of a real creature.

Latitude
April 12, 2017 3:32 pm

What if?……what if you got just one prediction right

Pop Piasa
Reply to  Latitude
April 12, 2017 6:52 pm

“what if you got just one prediction right”
Then I’d collect on the winner of the next triple crown of horse racing.
Hopefully against incredible odds.

Duster
Reply to  Latitude
April 12, 2017 7:47 pm

The importance of a scientific “prediction” is repeatability. One successful prediction is the epistemological equivalent of a lucky guess. More importantly, independent verification, which removes the ability to define “success” from the hands of a single operator or group of operators, is essential. There needs to be an agreement among researchers of multiple camps on the nature of a series of “successful predictions.” Anything less is louse picking off the backs of your troop mates. Repeated successful predictions are what we are looking for, or even simply sufficient knowledge to make such predictions. Useful models with adequate “art” should converge. If they don’t, as the OP pointed out, there’s “something missing.” The divergences make it quite clear that the “something missing” is critical. More importantly, if the failure of convergence is attributable to the complexity and sensitivity of the system to initial conditions, that system may be inherently unpredictable. If it is, then mathematical models and supercomputers and grants are a waste of time, money and equipment.

John Harmsworth
Reply to  Duster
April 12, 2017 9:33 pm

It’s all “close but no cigar”! For gravitation, for instance, the phenomenon was well known even by pre-human apes and the law was accepted even before there was language: “Stuff go up, stuff come down”. That sufficed for several million years, until Isaac Newton got interested in understanding God’s universe. He made a detailed study of the matter but, we should remember, he was only able to describe its relationship with mass, not why there is a relationship. That sufficed for a long time, though, and cannon balls landed closer to their targets. Then Einstein came along, and described why mass affected gravitation (sort of).
The question now is, are the climate scientists smarter than Einstein and the science is settled? Or are they smarter than Newton and just describing an interesting thing without actually understanding it? Or thirdly, have they found an apple in their lunchbox and imagined that apple trees grew in there overnight and were bound to destroy the lunchbox if measures aren’t taken?
What real scientist ever said, “The science is settled”?

Gloateus
Reply to  Duster
April 13, 2017 9:35 am

The great autodidact Oliver Heaviside discussed the possibility of gravitational waves in 1893.

Heaviside, O., “A gravitational and electromagnetic analogy”, Electromagnetic Theory, 1893, vol. 1, pp. 455-466, Appendix B

Like Faraday, another genius without degrees or even formal scientific education.

Whether Einstein knew of his speculation or not, I don’t know, but probably not. He did however probably know about Poincare’s 1905 proposal that gravitational waves emanate from a body and propagate at the speed of light, as required by the Lorentz transformations. (Heaviside also foreshadowed Lorentz’ work.)

http://www.academie-sciences.fr/pdf/dossiers/Poincare/Poincare_pdf/Poincare_CR1905.pdf

Until gravitation is unified with the other “forces” (if that’s what they are rather than distortions of space-time), it can’t be considered “understood”, IMO.

Terry Oldberg
Reply to  Latitude
April 12, 2017 8:54 pm

Latitude:
Today’s climate models DO NOT make predictions!

Sun Spot
Reply to  Terry Oldberg
April 13, 2017 10:00 am

Terry you are correct sir, they only express a hypothesis (and a failed one at that).

Willis Eschenbach
Editor
April 12, 2017 3:33 pm

Thanks for that, Eric. Let me also recommend my open letter to Kevin Trenberth on the same topic of the reversal of the null hypothesis here.

Regards,

w.

Gary Pearse
Reply to  Willis Eschenbach
April 12, 2017 6:03 pm

Willis, a good reread – I remembered it once I got into it. I detect that you had at least a small effect on Dr T. He at least is spouting off magnanimously about uncertainties, but the rest of your suggested list he doesn’t seem to have budged on.

lee
Reply to  Willis Eschenbach
April 12, 2017 8:13 pm

Can people get to near the pinnacle of climate science merely by being a useful idiot?

Duster
Reply to  Willis Eschenbach
April 12, 2017 8:17 pm

Probably neither. Researchers become wedded to their ideas, their funding, and their audience of admirers. One of the distinctions of climate “Science” is that it climbed to international and political prominence on the back of a reified generalization about weather – that is all that the idea of “climate” really is. The practitioners like Trenberth then used another shell game tactic to divert important effort and funds from attempting to further develop a science of meteorology to the backwards approach of trying to understand weather via climate. Trenberth and many others dove head first down a rabbit hole of dubious logic and utility and dragged politicians needing platforms and policies, and droves of missionaries and zealots needing a cause that would differentiate them from the rest of hoi polloi and let them be “specialer” than that JW, or those two neatly dressed young men with their bicycles ringing your doorbell on a Saturday morning.

Reply to  Willis Eschenbach
April 12, 2017 9:15 pm

Evil genius or useful idiot? One can gain insights into this issue by coming up to speed on the equivocation fallacy.

Phillip Bratby
Reply to  Willis Eschenbach
April 12, 2017 10:53 pm

Based on his Energy Budget diagram, he is a useful idiot.

AndyE
Reply to  Willis Eschenbach
April 12, 2017 11:26 pm

How does one prove a negative, namely that CO2 will not increase global temperatures greatly?? I propose that would take us 1000 years of reliable, scientific observations. We have made a start – so just relax for the next, say, 800 years.

hunter
Reply to  AndyE
April 13, 2017 5:14 am

Since the assertion that CO2 will cause a climate crisis is untrue, why don’t we focus on that instead?

MarkW
Reply to  AndyE
April 13, 2017 6:23 am

We can also review climate history to see if there is any correlation between temperature and CO2 levels.

Reply to  Willis Eschenbach
April 12, 2017 11:27 pm

Phillip Bratby April 12, 2017 at 10:53 pm

Based on his Energy Budget diagram, he is a useful idiot.

Always wondered why we are installing solar panels instead of backradiation panels.
Those could deliver twice the amount of power the solar ones do, and 24/7 as well…..

JohnKnight
Reply to  Willis Eschenbach
April 13, 2017 12:31 am

Forest,

“The only thing which bothers me about your pea and thimble explanation for Trenberth’s words is whether you are in fact assigning to malice that which can adequately be explained by stupidity.

There are people who deliberately craft words to mislead. How do we know that Trenberth actually thought it through? Just look at the article now under discussion. That is not the work of a master persuader.”

It seems you are assuming he was acting alone, such that he actually persuaded the whole IPCC/CAGW clan . . That does seem very unlikely, to me ; )

richardscourtney
Reply to  Willis Eschenbach
April 13, 2017 2:58 am

Willis Eschenbach :

In your very fine article at your link you say:

The “null hypothesis” in science is the condition that would result if what you are trying to establish is not true. For example, if your hypothesis is that air pressure affects plant growth rates, the null hypothesis is that air pressure has no effect on plant growth rates. Once you have both hypotheses, then you can see which hypothesis is supported by the evidence.

In climate science, the AGW hypothesis states that human GHG emissions significantly affect the climate. As such, the null hypothesis is that human GHG emissions do not significantly affect the climate, that the climate variations are the result of natural processes. This null hypothesis is what Doctor T wants to reverse.

Actually, the scientific null hypothesis is more general than you state although the reversal desired by Trenberth does amount to what you say.

The matter has some pertinence because it is never possible to obtain evidence that something (e.g. an effect on plant growth) does not exist: it is only possible to show that available methods fail to indicate that something exists.

In all science the Null Hypothesis says it must be assumed a system has not experienced a change unless there is evidence of a change.

The Null Hypothesis is a fundamental scientific principle and forms the basis of all scientific understanding, investigation and interpretation. Indeed, it is the basic principle of experimental procedure where an input to a system is altered to discern a change: if the system is not observed to respond to the alteration then it has to be assumed the system did not respond to the alteration.
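
As a rough illustration of that procedure (an editorial sketch, not part of the original comment; the measurements are invented), a before/after comparison that only abandons the no-change assumption when the data force it might look like this:

```python
# Minimal sketch (invented measurements): assume "no change" -- the null --
# unless the altered input produces a statistically detectable response.
from scipy import stats

before = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2]   # system output before the input was altered
after  = [10.2, 10.0, 10.4, 9.9, 10.1, 10.3]  # system output after the input was altered

t_stat, p_value = stats.ttest_ind(before, after)

if p_value < 0.05:
    print(f"Null rejected (p = {p_value:.3f}): a change in the system was detected.")
else:
    print(f"Failed to reject the null (p = {p_value:.3f}): assume the system did not respond.")
```

With numbers like these the difference is well within the noise, so the test fails to reject the null and the default no-change assumption stands.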

In the case of climate science there is a hypothesis that increased greenhouse gases (GHGs, notably CO2) in the air will increase global temperature. There are good reasons to suppose this hypothesis may be true, but the Null Hypothesis says it must be assumed the GHG changes have no effect unless and until increased GHGs are observed to increase global temperature. That is what the scientific method decrees. It does not matter how certain some people may be that the hypothesis is right because observation of reality (i.e. empiricism) trumps all opinions.

Please note that the Null Hypothesis is a hypothesis which exists to be refuted by empirical observation. It is a rejection of the scientific method to assert that one can “choose” any subjective Null Hypothesis one likes. There is only one Null Hypothesis: i.e. it has to be assumed a system has not changed unless it is observed that the system has changed. Hence, Trenberth’s desire to reverse the Null Hypothesis is a rejection of the scientific method.

However, deciding a method which would discern a change may require a detailed statistical specification.

In the case of global climate in the Holocene, no recent climate behaviours are observed to be unprecedented so the Null Hypothesis decrees that the climate system has not changed: i.e. there is no reason to suppose that climate changes now happening have different cause(s) to those of similar climate changes in the past.

Importantly, an effect may be real but not overcome the Null Hypothesis because it is too trivial for the effect to be observable. Human activities have some effect on global temperature for several reasons. An example of an anthropogenic effect on global temperature is the urban heat island (UHI). Cities are warmer than the land around them, so cities cause some warming. But the temperature rise from cities is too small to be detected when averaged over the entire surface of the planet, although this global warming from cities can be estimated by measuring the warming of all cities and their areas.
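
For a sense of scale, the estimate described (city warming weighted by the fraction of the surface cities cover) amounts to a one-line calculation. This is an editorial illustration; the figures below are assumptions, not measured values:

```python
# Illustrative assumptions only: cities average ~2 C of urban heat island warming,
# cover ~1% of the land, and land is ~29% of the Earth's surface.
uhi_in_cities_c    = 2.0
city_land_fraction = 0.01
land_fraction      = 0.29

global_mean_uhi = uhi_in_cities_c * city_land_fraction * land_fraction
print(f"Global-mean warming attributable to cities: ~{global_mean_uhi:.4f} C")
```

A few thousandths of a degree under these assumptions, which is the point: real, but far too small to detect in a global average.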

Clearly, the Null Hypothesis decrees that UHI is not affecting global temperature although there are good reasons to think UHI has some effect. Similarly, it is very probable that AGW from GHG emissions are too trivial to have observable effects.

Empirical evidence indicates that net feedbacks in the climate system are negative and, therefore, any effect of increased CO2 will probably be too small to discern because natural climate variability is much, much larger. This concurs with the empirically determined values of low climate sensitivity.

Empirical – n.b. not model-derived – determinations indicate climate sensitivity is less than 1.0°C for a doubling of atmospheric CO2 equivalent. This is indicated by the studies of
Idso from surface measurements
http://www.warwickhughes.com/papers/Idso_CR_1998.pdf
and Lindzen & Choi from ERBE satellite data
http://www.drroyspencer.com/Lindzen-and-Choi-GRL-2009.pdf
and Gregory from balloon radiosonde data
http://www.friendsofscience.org/assets/documents/OLR&NGF_June2011.pdf

Indeed, because climate sensitivity is observed to be less than 1.0°C for a doubling of CO2 equivalent, it is physically impossible for the man-made global warming to be large enough to be detected (just as the global warming from UHI is too small to be detected). If something exists but is too small to be detected then it only has an abstract existence; it does not have a discernible existence that has effects (observation of the effects would be its detection).
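
The arithmetic behind statements like “less than 1.0°C for a doubling” can be sketched with the standard logarithmic rule of thumb. This is an editorial illustration; the sensitivities and CO2 concentrations below are assumed for the example, not taken from the papers linked above:

```python
import math

def warming_c(sensitivity_per_doubling_c, co2_start_ppm, co2_end_ppm):
    """Rule of thumb: warming scales with the number of CO2 doublings."""
    return sensitivity_per_doubling_c * math.log2(co2_end_ppm / co2_start_ppm)

# Assumed values for illustration: ~280 ppm pre-industrial, ~400 ppm today,
# and two candidate sensitivities (deg C per doubling of CO2).
for s in (1.0, 3.0):
    print(f"sensitivity {s:.1f} C/doubling -> {warming_c(s, 280.0, 400.0):.2f} C for 280 -> 400 ppm")
```

Under the ~1°C-per-doubling assumption the 280 to 400 ppm rise corresponds to roughly half a degree; under a 3°C assumption, roughly a degree and a half.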

To date there are no discernible effects of AGW. Hence, the Null Hypothesis decrees that AGW does not affect global climate to a discernible degree. That is the ONLY scientific conclusion possible at present.

Richard

Gloateus
Reply to  richardscourtney
April 13, 2017 9:39 am

Well stated.

The CACA hypothesis requires thought as muddy as your statement was clear.

Reply to  richardscourtney
April 14, 2017 7:42 am

It’s worse than that. Consider the hypothesis: “We can produce a computer model and run it so as to predict usefully the future climate for the whole planet for the next 100 years. It will predict usefully patterns of temperature, precipitation, wind, drought, cloud, air pressure, jetstream, polar ice, glaciation and sea level so that mankind can prioritise mitigation so as to maximise the benefits from mankind’s labours.” The null hypothesis is “No, we can’t.” I only ever hear of predicted damaging climate events (weather) a few days before, and then they are often wrong. What, when and where will be the first climate catastrophe caused by anthropogenic CO2?

richardscourtney
Reply to  richardscourtney
April 14, 2017 9:30 am

son of mulder:

Yes, I agree.

No model’s predictions should be trusted unless the model has demonstrated forecasting skill. None of the climate models has existed in its present form for 20, 50 or 100 years so it is not possible to assess their predictive capability on the basis of their demonstrated forecasting skill; i.e. they have no demonstrated forecasting skill and, therefore, their predictions are unreliable.

Put bluntly, predictions of the future provided by existing climate models have the same degree of demonstrated reliability as has the casting of chicken bones for predicting the future.

Richard

markl
Reply to  richardscourtney
April 14, 2017 10:38 am

“…predictions of the future provided by existing climate models have the same degree of demonstrated reliability as has the casting of chicken bones for predicting the future.”

Not really. Chicken bones have a proven record of being right sometimes.

drednicolson
Reply to  richardscourtney
April 14, 2017 10:26 am

I’d take the chicken bones. Saves all that time, money, and hardware.

Graemethecat
Reply to  richardscourtney
April 14, 2017 9:59 pm

Thank you for a very clear and cogent explanation of the Null Hypothesis. I would dearly like to read Trenberth’s response as his assertions are in flagrant contradiction with this principle.

richardscourtney
Reply to  richardscourtney
April 15, 2017 2:34 am

Graemethecat:

Trenberth made his assertion that the Null Hypothesis should be reversed in January 2011 and many scientists expressed their shock. He sensibly said nothing in response to the outrage at his suggestion.

However, in this thread Nick Stokes has attempted to justify Trenberth’s assertion by pretending the scientific method does not have a null hypothesis!

In statistics a null hypothesis is anything one wants to choose, and Stokes claims the version of the null hypothesis used in statistics applies in the scientific method. He knows that is not true because I corrected him when he tried to promote that falsehood once before. But in this thread Stokes’ lie has been supported by an anonymous troll posting as dikranmarsupial, and someone else hiding behind the alias ‘and then there’s physics’, together with the ludicrous Terry Oldberg.

The falsehood promoted by Stokes is ridiculous for several reasons. Not least is the fact that Trenberth could not have suggested a reversal of the scientific null hypothesis if there were not a unique scientific null hypothesis.

Richard

dikranmarsupial
Reply to  richardscourtney
April 16, 2017 12:51 am

richardscourtney writes:

However, in this thread Nick Stokes has attempted to justify Trenberth’s assertion by pretending the scientific method does not have a null hypothesis!

There is no “null hypothesis” in scientific method other than the statistical usage; it is entirely a product of Richard’s imagination. I checked: the OED only gives the statistical definition, and the earliest example is RA Fisher’s book on experimental design (Fisher is the statistician who coined the phrase), and Google Books has an n-gram viewer that shows no usage of the phrase “null hypothesis” before 1930 (around the time the statistical usage was coined). I repeatedly challenged Richard for evidence of a non-statistical usage of “null hypothesis” in scientific method, and all I got was bluster and ad hominems (see above) but no answer to the challenge.

Nick was right, richard’s unsupported assertion of a “null hypothesis” in scientific method that is not just the usual statistical usage is “twaddle” (as Nick puts it), and the fact that Richard repeatedly ducks the challenge to provide a verifiable reference for such a usage strongly suggests Nick is right.

richardscourtney
Reply to  richardscourtney
April 16, 2017 2:03 am

troll hiding behind the alias dikranmarsupial:

Stop pretending to be stupid.

I yet again point out
The falsehood promoted by Stokes is ridiculous for several reasons. Not least is the fact that Trenberth could not have suggested a reversal of the scientific null hypothesis if there were not a unique scientific null hypothesis.

I am reminded of when it was pointed out that the tropospheric ‘hot spot’ was missing. Warmunist trolls proclaimed that said nothing about man-made global warming because – they claimed – the ‘hot spot’ was an effect of warming from any cause. They merely parroted their claim when it was pointed out that if their claim were true then the absence of the ‘hot spot’ indicates there has been no warming from any cause including human activity.

Richard

dikranmarsupial
Reply to  richardscourtney
April 16, 2017 2:37 am

richardscourtney yet again ducks the challenge to provide a verifiable reference for his claim that there is a usage of “null hypothesis” in scientific method other than the statistical usage.

If someone makes an unsupported assertion, then asking for a verifiable reference to support that assertion is not “trolling”, it is a normal part of scientific discussion. In scientific discussion, the normal practice is to provide the reference, not engage in ad-hominems.

I predict that richard will continue to duck the challenge, and will continue with the ad-hominems, but that of course will just confirm that Richard knows that his definition of “null hypothesis” is of his own imagining, but is too obstinate to admit his error.

BTW I do post pseudonymously, but have revealed my identity here and elsewhere on multiple occasions, so precisely nothing is hidden behind my username. I don’t know (or care) whether richardscourtney is your real name; what matters is the validity of your arguments.

richardscourtney
Reply to  richardscourtney
April 17, 2017 1:13 am

troll hiding behind the alias dikranmarsupial:

Everyone who knows anything about the scientific method knows I have written nothing controversial and you are spouting nonsense. I cite the following example which makes the matter clear for non-scientists.

The Michelson-Morley Experiment (MME) was conducted in 1887, which was before statistics formulated its own version of the null hypothesis. The MME failed to detect movement of the luminiferous ether.

For the reason I explain in my above account of the scientific Null Hypothesis, the failure of the MME to detect movement of the luminiferous ether required adoption of the scientific Null Hypothesis that there is no movement of the luminiferous ether. And that conclusion that there is no movement of the ether led to most modern physics and our modern electronic communications.

However, there is now some evidence that the MME was inadequate to detect movement of the luminiferous ether that may exist.

So, please now crawl back under your bridge hopefully to permanently stay there.

Richard

dikranmarsupial
Reply to  richardscourtney
April 17, 2017 2:03 am

I wrote

I predict that richard will continue to duck the challenge, and will continue with the ad-hominems, but that of course will just confirm that Richard knows that his definition of “null hypothesis” is of his own imagining, but is too obstinate to admit his error.

Richard wrote

troll hiding behind the alias dikranmarsupial:

How predictable. Note also that I had pointed out that asking for a verifiable reference is not trolling, yet richard continues with that ad-hominem, and that nothing is hidden behind my pseudonym as I have identified myself here and elsewhere quite openly, and yet richard repeats that error as well.

“The Michelson-Morley Experiment (MME) was conducted in 1887 which was before statistics formulated its own version of null hypotheses. The MME failed to detect movement of the luminiferous ether.”

Did Michelson & Morley explicitly use the phrase “null hypothesis”? If not, it is obviously not a verifiable example of a non-statistical usage of the phrase “null hypothesis” in scientific method, and hence is not an answer to the challenge. If they did, give a page reference to the document in which they explicitly used that phrase; failure to do so would confirm that you know they didn’t use that phrase and were just bluffing.

Reply to  dikranmarsupial
April 17, 2017 7:54 am

dikranmarsupial:
Recorded responses by Mr. Courtney to the challenge of presenting a counter-argument suggest the presence in Courtney’s mind of an algorithm to which this mind automatically switches when Courtney is challenged by an opponent to make a counter-argument. This algorithm tells Courtney to “characterize your opponent as a miscreant.”

richardscourtney
Reply to  richardscourtney
April 17, 2017 2:37 am

troll posting as dikranmarsupial:

You have rejected my request for you to crawl back under your bridge.

I have provided sufficient evidence and argument to completely refute your nonsense, and I am confident that everybody can see that. Your idiocy is clear to all when you reject evidence of what was done and why, and you resort to requiring specific words instead (warmunists used that daft requirement to generate the ‘97%’ fallacy).

So, I am confident that anybody can now see through your trolling and I will ignore any more of it.

Richard

dikranmarsupial
Reply to  richardscourtney
April 17, 2017 3:02 am

richardscourtney wrote “I have provided sufficient evidence and argument to completely refute your nonsense”

You have produced no evidence whatsoever for the existence of a scientific usage of “null hypothesis” that predates the statistical usage of that phrase coined by RA Fisher. I on the other hand have produced verifiable evidence that no such usage exists:

(i) the entry in the OED only gives the statistical usage,

(ii) Google books n-gram viewer gives no examples of its use prior to 1930, about the time the phrase was coined by Fisher

and a new one:

(iii) Google scholar finds no papers using the phrase prior to 1930 (the hits it gives were clearly written after 1930 and the algorithm Google uses has obviously got the date wrong). It can however find papers prior to 1930 for less common terms, such as “inverse probability”.

So again, you have to resort to bluster and insult, whereas I actually took the trouble to find out whether there was some truth in your claim.

and you resort to requiring specific words

Evasion – those were the specific words used in your challenge

“I repeat, Perhaps you self-appointed geniuses seeking to redefine the scientific method will explain how the scientific method had its null hypothesis centuries before statistics existed?”

Also, when Nick wrote

A null hypothesis is just an alternative plausible hypothesis adopted in statistical testing to see if it too could explain the observed.

you replied

NO! You know that is a lie because I corrected it when you tried to promulgate it on WUWT on a previous occasion.

If the exact phrase “null hypothesis” has one definition, the statistical one, what Nick wrote is entirely correct and you owe him an apology for calling it a lie. So it is your accusation of dishonesty, and to defend it, you do have to justify your interpretation of Nick’s exact words.

dikranmarsupial
Reply to  richardscourtney
April 17, 2017 3:07 am

HTML tags got a bit confused, I’ll try again:

richardscourtney wrote

“I have provided sufficient evidence and argument to completely refute your nonsense”

You have produced no evidence whatsoever for the existence of a scientific usage of “null hypothesis” that predates the statistical usage of that phrase coined by RA Fisher, just assertion. I on the other hand have produced verifiable evidence that no such usage exists:

(i) the entry in the OED only gives the statistical usage,

(ii) Google books n-gram viewer gives no examples of its use prior to 1930, about the time the phrase was coined by Fisher

and a new one:

(iii) Google scholar finds no papers using the phrase prior to 1930 (the hits it gives were clearly written after 1930 and the algorithm Google uses has obviously got the date wrong). It can however find papers prior to 1930 for less common terms, such as “inverse probability”.

So again, you have to resort to bluster and insult, whereas I actually took the trouble to find out whether there was some truth in your claim.

richard writes:

and you resort to requiring specific words

Evasion – those were the specific words used in your challenge:

I repeat, Perhaps you self-appointed geniuses seeking to redefine the scientific method will explain how the scientific method had its null hypothesis centuries before statistics existed?

If we are talking about the usages of specific terms, like “null hypothesis”, then of course we require specific words!

Also, when Nick wrote

A null hypothesis is just an alternative plausible hypothesis adopted in statistical testing to see if it too could explain the observed.

you replied

NO! You know that is a lie because I corrected it when you tried to promulgate it on WUWT on a previous occasion.

If the exact phrase “null hypothesis” has one definition, the statistical one, what Nick wrote is entirely correct and you owe him an apology for calling it a lie. So it is your accusation of dishonesty, and to defend it, you do have to justify your interpretation of Nick’s exact words.

richardscourtney
Reply to  richardscourtney
April 17, 2017 6:44 am

troll:

You wrote

If the exact phrase “null hypothesis” has one definition, the statistical one

In the unlikely event that you believe that lie then try telling it to Trenberth, not me.

Richard

dikranmarsupial
Reply to  richardscourtney
April 17, 2017 8:06 am

richardscourney wrote

“troll:

You wrote

If the exact phrase “null hypothesis” has one definition, the statistical one

In the unlikely event that you believe that lie then try telling it to Trenberth, not me.

Richard

It isn’t a lie: there is no other definition of “null hypothesis”, as demonstrated by your inability to produce a verifiable reference for one, despite having been repeatedly challenged to substantiate your claim. Every time you double down like this, you are just confirming that you are too obstinate to admit you are wrong, and I am right on this issue (at least I can, and have, cited verifiable sources to back up my position); not a great advert for “climate skepticism”!

richardscourtney
Reply to  richardscourtney
April 17, 2017 9:36 am

troll:

It is no surprise that you won’t tell Trenberth he was wrong when he made an assertion about the scientific null hypothesis.

YOU KNOW YOUR CLAIM THERE IS NOT ONE IS A LIE.

Richard

dikranmarsupial
Reply to  richardscourtney
April 17, 2017 9:53 am

Richardscourtney wrote

troll:

It is no surprise that you won’t tell Trenberth he was wrong when he made an assertion about the scientific null hypothesis.

YOU KNOW YOUR CLAIM THERE IS NOT ONE IS A LIE.

Richard

There is no scientific “null hypothesis” other than the usual statistical usage, you may think there is, but the fact you can’t find a single verifiable reference to support that should be enough to make you question whether maybe, just maybe, you are wrong. Unfortunately, after being so insulting you have backed yourself into a corner where you can no longer admit you were wrong without making yourself look utterly ridiculous. Being wrong is no big deal, it happens to us all every now and again, not being able to admit you are wrong is another matter entirely.

BTW I did say Trenberth is wrong in wanting to reverse the null hypothesis, just not because of your imaginary definition of a scientific null hypothesis for which you are unable to find a single verifiable reference! ;o)

Chimp
Reply to  richardscourtney
April 17, 2017 10:00 am

Dik,

There most certainly is a scientific null hypothesis, quite apart from its statistical use:

https://explorable.com/null-hypothesis

Significance tests can be applied to experiments testing the null, but ideally aren’t needed.

Chimp
Reply to  richardscourtney
April 17, 2017 10:06 am

For a more detailed description of using the null hypothesis in the scientific method:

http://www.livescience.com/21490-what-is-a-scientific-hypothesis-definition-of-hypothesis.html

dikranmarsupial
Reply to  richardscourtney
April 17, 2017 10:09 am

chimp – they are just using it in the statistical sense (note if you go up a level to “Research Designs” there are lots of statistical topics); however, that site directly contradicts Richard anyway, as he claims the null hypothesis is always “no change”, whereas the reference you give says:

“The simplistic definition of the null is as the opposite of the alternative hypothesis, H1, although the principle is a little more complex than that.”

I.e., if you are arguing (H1) that there is no change [warming], then your null hypothesis (H0) should be that there is a change [warming].

dikranmarsupial
Reply to  richardscourtney
April 17, 2017 10:14 am

Chimp, the second web pages is also talking about hypothesis testing in the statistical sense:

During a test, the scientist may try to prove or disprove just the null hypothesis or test both the null and the alternative hypothesis. If a hypothesis specifies a certain direction, it is called one-tailed hypothesis. This means that the scientist believes that the outcome will be either with effect or without effect. When a hypothesis is created with no prediction to the outcome, it is called a two-tailed hypothesis because there are two possible outcomes.

the “one-tailed” and “two-tailed” refers to the tails of a statistical distribution and makes no sense whatsoever in a non-statistical setting.

Chimp
Reply to  richardscourtney
April 17, 2017 10:19 am

Dikran,

In the case of dangerous man-made global warming, the alternative hypothesis is that there is such a thing. The null hypothesis is that there isn’t, ie that nothing unusual is happening in the climate system that needs a special explanation.

Null: No departure from natural variation has been observed in earth’s climate during the past century.

Alternative: Human activity has caused earth’s climate to warm outside the bounds of normal variation and this effect is dangerous.

The null cannot be rejected, as no evidence supports the assertion of a departure from the norm, let alone a dangerous one.

Chimp
Reply to  richardscourtney
April 17, 2017 10:23 am

Dikran,

As noted, sometimes the results of experiments, whether because of poor design or unavoidably by the nature of the question, need to be tested for statistical confidence. This, however, is not always necessary, and ideally is not.

No doubt you’re familiar with the quotation attributed to Ernest Rutherford, “If your experiment needs statistics, you ought to have done a better experiment”.

dikranmarsupial
Reply to  richardscourtney
April 17, 2017 10:32 am

chimp none of which has any bearing on whether there is a usage of “null hypothesis” in scientific method other than the usual statistical sense.

You also appear to have missed the fact that I said Trenberth is wrong to want to reverse the null hypothesis, precisely because he is arguing that there is climate change, so his null hypothesis should be that there is no climate change. Climate skeptics on the other hand, who wish to claim there has been a hiatus, are arguing there has been no climate change, so their null hypothesis should be that warming has continued at the same rate (and the apparent hiatus is due to “weather noise”). Of course climate skeptics never do that and instead go on about “no warming since [insert cherry-picked start date here]”, which is reversing the null hypothesis in exactly the way Trenberth suggests! Sauce for the goose, and all that.

I am well aware of the Rutherford quote; of course, science has moved on a bit since then. I doubt you will find many experimental journal papers from CERN that don’t use statistics.

Reply to  Willis Eschenbach
April 13, 2017 7:05 am

He is quoted a lot in the media.
He is a reliable go-to guy for the media when they need to be reassured about global warming.
He may be honest, but he has to play politics to maintain his good standing with the leftists, who will not accept the slightest deviation from the party line. Case in point: when have you heard Dr. Hansen quoted lately? Hansen is a new kind of climate denier. This is pure leftist politics*. You know Trenberth is not telling the full truth, and that is a great way of lying.

*If you are puzzled by this, read Commies by Ronald Radosh to understand this situation.

JohnKnight
Reply to  Willis Eschenbach
April 13, 2017 3:09 pm

Forrest,

“So John, do I mark you down as voting that Trenberth was merely a useful idiot?”

Oh no, I consider it a real possibility that he’s an intelligent bullshit artist, put forward because he’s good at it/comes off as honest. I don’t treat such things as if they were questions on a test that need a definitive answer to be chosen . . right now.

JohnKnight
Reply to  Willis Eschenbach
April 13, 2017 3:34 pm

PS~ I’ve been a game player all my life, from bridge to basketball, and when you’re trying to deceive, you generally want to appear less . . savvy than you really are.

“Just look at the article now under discussion. That is not the work of a master persuader.”

Why would he want you (people) to see him as a master persuader, if he’s deeply involved in a con job? At this point, if I were involved in that way, I’d be putting out stuff that makes me look kinda naive and zealous . . cause prison sucks ; )

markl
April 12, 2017 3:37 pm

When all else fails…..change the rules! It’s obvious the alarmists are feeling the missing heat and preparing their groundwork for the inevitable come-to-Jesus meeting that is closing in on them. Without government support they must rely on the scientific community, and disclaimers like this will only hurt them.

April 12, 2017 3:38 pm

What a Joke he is.

April 12, 2017 3:38 pm

Rational Wiki has a pretty good outline of “the scientific method” as well as some errors. http://rationalwiki.org/wiki/Scientific_method

Despite the lack of simple linearity in reality, the method has often been codified into stages that make it easier to understand. Essentially, the following five steps make up the scientific method:

1) Observe
2) Hypothesize
3) Predict
4) Test Predictions (in physical sciences this is called Experiment) – Compare the predictions with new empirical evidence (usually experimental evidence, often supported by mathematics). This step is the reason why a hypothesis or theory has to be falsifiable — if there’s nothing to falsify, then the experiment is pointless because it’s guaranteed to tell you nothing new. Information from the experiment can disprove the original hypothesis, which might be refined into a better one. (I’ve left the description on this one.)
5) Reproduce

A few common errors in applying the scientific method which are pertinent in Climate Science:
1) A description of “Pseudoscience”

“All but the first two steps are omitted from the process in pseudosciences … . Pseudosciences do observe the world, and do come up with explanations, but are often unable or unwilling to follow through in testing them more thoroughly. Refining the hypotheses is also undesirable in pseudoscience as this could lead to abandoning the central dogma of the belief … . However, because observations and explanations still form a part of pseudoscience and can be phrased in a scientific style, pseudosciences may mistakenly appear to have scientific authority.”

2) On the importance of skepticism:

“Scientific skepticism is a vital element in the scientific process, ensuring that no new hypothesis is considered a Theory (capped T) until sufficient evidence is provided and other scientists have had their chances to debunk it. Even then, all of science is always considered a “good working model” and the “best understanding we have at the present time.” No scientific idea is ever considered “the final word … .”

3) On the subject of Objectivity and Bias

“The scientific method helps us pursue the ideal of scientific objectivity, protecting against bias that could lead to false conclusions. Bias, in the sense of inclinations or preconceptions, is part of being human, and has a role in scientific inquiry insofar as it guides what questions to ask and how to ask them. At the same time bias leads to championing a particular conclusion a priori, independent of evidence, belief, not necessarily reality. The scientific method explicitly seeks to remove bias through rigorous hypothesis testing and reproducing results.”

Two forms of bias are particularly significant:

A) Unintentional short-circuiting of the scientific method — “In order to look for “data” you need to have a model or “structure” of how the world works. The problem, as James Burke pointed out in the “Worlds Without End” episode of The Day the Universe Changed, is that structure can drive every part of your research, even what you accept as reliable data. …

“Burke points out one of the reasons the Piltdown hoax lasted as long as it did was that it fitted the then prevalent structure of finding a human-like skull with an ape-like face. In fact, in 1913, David Waterston of King’s College London stated in Nature that the find was an ape mandible and a human skull, and French paleontologist Marcellin Boule said the same thing in 1915. In 1923 Franz Weidenreich stated after careful examination that the Piltdown find was a modern human cranium and an orangutan jaw with filed-down teeth, but because Piltdown fit the structure so well other scientists let the model drive their thinking rather than the evidence itself.

B) Cheating the scientific method

Pseudoscientists have discovered an obvious way to ‘cheat’ the scientific method. It goes like this:

1) Pick a personal belief that you already ‘know’ is true, but for which you want ‘proof’.
2) Perform some related observations or experiments, and note the results.
3) Generate a hypothesis that shoehorns said results into your personal belief.
4) Falsely claim that your personal belief predicts the particular results, and that the observations/experiment confirmed your suspicions.

This is a blatant perversion of the scientific method, but to someone not versed in science, fallacies, or psychology, it might seem similar enough to be accepted as legitimate.

I think of the pseudosciences whenever I remember Trenberth deciding that Man-made Climate Change increased hurricanes despite having no evidence to support that conclusion. Landsea (the hurricane expert on the IPCC) resigned from the IPCC after that pronouncement, yet many people still cling to this unproven belief.

Reply to  lorcanbonda
April 12, 2017 9:22 pm

Right on. Currently, global warming climatology is pseudoscientific. The UNIPCC proves this contention in the opening paragraphs of the report of Working Group I, Assessment Report 4. The IPCC asserts that in the modern era falsifiability has been replaced by peer review. WRONG!

Stephen Richards
Reply to  lorcanbonda
April 13, 2017 1:35 am

I prefer The Feynman method. Simple, clear, precise

Gloateus
April 12, 2017 3:43 pm

If Trump can’t fire Kev and Gav, maybe at least he can banish them to the Arctic Ocean to conduct actual scientific observations, rather than running GIGO models in the posh surroundings of Boulder and Manhattan.

Better yet would be to deport them as cr!minal aliens to their native misty isles, the better to conduct atmospheric water vapo(u)r experiments.

These post-modern charlatans have lost all sight of the scientific method as it has proven tried and true since 1543.

toorightmate
Reply to  Gloateus
April 12, 2017 5:02 pm

Isn’t the Arctic Ocean like a sauna these days?
I think I saw a model that said that would be the case – and models don’t lie.

MRW
Reply to  toorightmate
April 12, 2017 6:00 pm

Isn’t the Arctic Ocean like a sauna these days?

Yes, provided you recognize that the warmth is going from, say, 225K to 245K (a rise of 20 degrees C) and not cracking freezing.

old construction worker
Reply to  toorightmate
April 12, 2017 6:15 pm

“…and models don’t lie.” I bet you read that on the internet at real science, so it must be true.

Reply to  toorightmate
April 12, 2017 9:28 pm

Rather than lying, a climate model equivocates. Few bloggers are aware of the fact that there is an important difference between the two concepts.

Greg Cavanagh
Reply to  Gloateus
April 12, 2017 5:16 pm

Keep those two away from any data collection.
Who would believe anything written or recorded by those two? It would be a complete waste of time and effort to get them there.

Michael Jankowski
April 12, 2017 3:43 pm

‘…The models are not perfect and involve approximations. But because of their complexity and sophistication, they are so much better than any “back-of-the envelope” guesses, and the shortcomings and limitations are known…’

Shortcomings and limitations are known…just not advertised.

And if a model can track global temperature observations relatively well but is failing on hemispheric, continental, and even regional levels, it’s a physically unrealistic piece of crap. If you get to the “right” answer by the wrong means, so what?

Gloateus
Reply to  Michael Jankowski
April 12, 2017 3:45 pm

Especially if the models have failed in their near-term predictions, as the GCMs have so epically.

Janice Moore
Reply to  Gloateus
April 12, 2017 4:03 pm

Maybe ol’ Travesty hasn’t gotten the word that Bob Tisdale is now giving away Climate Models Fail for free (T. clearly has not read it).

@ TT — Climate models are unfit for purpose — show NO SKILL WHATSOEVER.

Read Bob Tisdale’s book and be much less ignorant than you are now, TT.

(For downloading/ordering of Amazon kindle version, see this page: https://bobtisdale.wordpress.com/2013/09/24/new-book-climate-models-fail/ )

Gloateus
Reply to  Gloateus
April 12, 2017 4:38 pm

I never see that quotation from Kiwi Kev without thinking of the Southern Hemisphere Spanish word for transvestite, ie travesti. NOAA “science” is science in drag.

Reply to  Gloateus
April 12, 2017 9:31 pm

Gloateus:
When you state that the models make “predictions” you become a part of the problem rather than a part of the solution to this problem. The solution is disambiguation.

Johann Wundersamer
Reply to  Gloateus
April 14, 2017 5:35 pm

“Terry Oldberg on April 12, 2017 at 9:31 pm”

Yes, Terry Oldberg.

Johann Wundersamer
Reply to  Gloateus
April 14, 2017 5:58 pm

so what.

Editor
Reply to  Michael Jankowski
April 12, 2017 8:05 pm

Yes, some shortcomings and limitations are known. As I and others have explained in some detail, the models can never work. That is a known shortcoming.

Reply to  Michael Jankowski
April 12, 2017 10:44 pm

Michael:
You have the wrong idea. Whether the model “can track global temperature observations relatively well” is irrelevant. What is relevant is whether the representations of this model are truthful.

Hivemind
Reply to  Terry Oldberg
April 12, 2017 11:22 pm

Relevant is whether I can get grants from the government for more research. The amount of money depends on how bad the coming disaster is. That is why every model blows the disaster up to truly biblical proportions.

Reply to  Michael Jankowski
April 12, 2017 10:57 pm

‘…The models are not perfect and involve approximations. But because of their complexity and sophistication, they are so much more utterly useless than any “back-of-the envelope” guesses,’

TFTFY

MarkW
Reply to  Leo Smith
April 13, 2017 6:31 am

Complexity and sophistication are proof of reliability?
In every endeavor I have ever been involved in, we strive for simplicity. Too much complexity is evidence that you haven’t thought the problem through well enough.

Tom Halla
April 12, 2017 3:43 pm

The major point is that the estimate of the sensitivity for CO2 doubling has not improved since 1979. Perhaps the very models they are going on are basically wrong?

Janice Moore
April 12, 2017 3:44 pm

…. climate models { } carry out “what-if”experiments. ….
all very legitimate …. there is a great deal of data { } millions of daily observations …,
…. published …. peer review …. There is no doubt whatsoever ….

“Player Queen” Travesty Trenberth
Tragedy of Science, Act 694, scene 1

The lady doth protest too much, methinks.
Queen
Tragedy of Hamlet, Act III, scene 2

Johann Wundersamer
Reply to  Janice Moore
April 15, 2017 3:04 am

Yesss.

DCA
April 12, 2017 3:46 pm

“Whatever the missing or mishandled factor is, it has a big influence on global climate. The evidence for this is the embarrassingly broad range of estimates for climate sensitivity to a doubling of CO2, and the failure of those estimates to converge.”

Didn’t someone a while back post a graph showing a trend in CO2 sensitivity studies?

Mike Flynn
April 12, 2017 3:48 pm

Raving looney. A climate model is a tool? Trenberth and his ilk are the tools.

If 1000 model runs give 1000 different results, at least 999 are wrong – probably 1000, but even people like Trenberth can get lucky.

So there’s a 99.9% chance that any individual product of or by this tool is incorrect.

Climate is the average of weather. There is no climate science. Collective delusional psychosis might describe the climatological affliction.

Cut off their funding. Their science is settled. They can run their amateurish and uselessly ineffective models to the end of time – in their own time!

Cheers.

Dave Fair
Reply to  Mike Flynn
April 13, 2017 10:44 pm

Those Russians are colluding with the Trump Administration to “deny” CAGW. The only IPCC climate model to come close to showing the low CO2-equivalent sensitivity is the Russian one. It is so bad that the Russian Model is the only one that closely tracks actual temperatures. Alt-Facts!

The Russians are hacking the IPCC! They are trying to influence the IPCC’s democratic processes and undermine the consensus. Resist!

Bruce Cobb
April 12, 2017 3:52 pm

Always fun when a Climatist waxes grandiloquently on their faux definition of science.

April 12, 2017 3:52 pm

Trenberth et al 2011jcli24 Figure 10

This popular balance graphic and assorted variations are based on a power flux, W/m^2. A W is not energy, but energy over time: 1 W equals 3.4 Btu/h (English units) or 3.6 kJ/h (SI). The 342 W/m^2 ISR is determined by spreading the 1,368 W/m^2 solar irradiance/constant intercepted by the Earth’s disc over the spherical ToA surface area (1,368/4 = 342). There is no consideration of the elliptical orbit (perihelion = 1,415 W/m^2 to aphelion = 1,323 W/m^2), or day or night, or seasons, or tropospheric thickness, or energy diffusion due to oblique incidence, etc. This popular balance models the earth as a ball suspended in a hot fluid with heat/energy/power entering evenly over the entire ToA spherical surface. This is not even close to how the real earth energy balance works. Everybody uses it. Everybody should know better.
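
The division being described (spreading the disc-intercepted solar constant over the whole sphere, whose area is four times the disc’s) is easy to check. A minimal Python sketch, added here for illustration and using only the figures quoted in the comment above:

```python
# The solar constant is intercepted by the Earth's disc (pi*r^2) but averaged
# over the whole sphere (4*pi*r^2), hence the division by 4.
values = {
    "mean solar constant": 1368.0,  # W/m^2, figure used in the comment above
    "perihelion":          1415.0,  # W/m^2
    "aphelion":            1323.0,  # W/m^2
}

for label, disc_flux in values.items():
    print(f"{label:>19}: {disc_flux:7.1f} W/m^2 over the disc -> {disc_flux / 4.0:6.1f} W/m^2 sphere average")
```

The perihelion-to-aphelion swing alone moves the spherical average by roughly plus or minus 11 W/m^2 around the 342 figure.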

An example of a real heat balance based on Btu/h is as follows. Basically, (Incoming Solar Radiation spread over the earth’s cross-sectional area, Btu/h) = (U*A*dT et al. leaving the lit side perpendicular to the spherical surface at ToA, Btu/h) + (U*A*dT et al. leaving the dark side perpendicular to the spherical surface at ToA, Btu/h). The atmosphere is just a simple HVAC/heat flow/balance/insulation problem.

http://writerbeat.com/articles/14306-Greenhouse—We-don-t-need-no-stinkin-greenhouse-Warning-science-ahead-

http://writerbeat.com/articles/15582-To-be-33C-or-not-to-be-33C

Janice Moore
Reply to  Nicholas Schroeder
April 12, 2017 4:27 pm

Thank you for sharing your helpful analysis with us, Mr. Schroeder. You write with the lucidity of a true master of your subject. That a non-scientist like me could understand you makes you one of the finest things in the world to be: a teacher.

I enjoyed reading both of the links above (ugh, on that first one, you get some really disgustingly slimy, low-information, toad-trolls).

Janice Moore
April 12, 2017 3:55 pm

Richard Feynman deserves to be heard here (as he has been haunting Travesty Trenberth for YEARS, now, SHOUTING HIS HEAD OFF trying to make Mr. T. hear him, but T just slams down the window and pulls the draperies with an angry snap belying his nonchalant, “Must be the wind.”)

Scientific Method — Richard Feynman

(youtube)

It doesn’t make any difference how beautiful your guess is.
It doesn’t matter how smart you are,
who made the guess or what his name is.

If it disagrees with experiment:

it’s wrong.”

(at 00:50 on above video)

Gloateus
Reply to  Janice Moore
April 12, 2017 4:36 pm

But in post-Modern “science”, if the observations disagree with your guess, then you change the “data” rather than the hypothesis.

Reply to  Gloateus
April 12, 2017 9:34 pm

Gloateus:
Your description of the phenomenon is inaccurate and misleading. In post-modern “science” you avoid making a guess aka “prediction” in favor of making a non-guess aka “projection.”

markl
Reply to  Terry Oldberg
April 13, 2017 8:30 am

Pedant.

Gloateus
Reply to  Gloateus
April 13, 2017 10:13 am

Terry,

I stand corrected.

Projections have the inestimable advantage of not requiring confirmation or falsification. They’re falsified, of course, but in the common parlance sense, not the scientific meaning of the term.

Reply to  Janice Moore
April 12, 2017 4:39 pm

Climate Science demonstrates Feynman’s “You can not prove a vague theory wrong.” (5:11)

Gloateus
Reply to  Douglas Kubler
April 12, 2017 4:42 pm

Especially when it’s not only vague, but its definitions and goalposts constantly change.

Its prediction of ECS can’t be definitively shown false until c. AD 2100.

Perfect to keep the gravy train chugging along for the rest of the climate banditos’ careers, if their travesties of science can be so dignified.

Reply to  Douglas Kubler
April 12, 2017 10:51 pm

Gloateus

No. Its “prediction” can never definitively be shown false because it is not truly a prediction.

Mindert Eiting
Reply to  Douglas Kubler
April 12, 2017 11:22 pm

About the difference between prediction and projection. Take a ball of a given weight and accelerate it with a cannon applying a given force, and note where the ball lands. If we have measured the weight and force actually realized, Newton’s formula predicts the landing place. We could also take the formula and print in a voluminous book thousands of weights and forces together with the corresponding landing places. All of these are called projections or scenarios. Let’s fire the cannon and note a landing place. Only a few lines in the book mention the observed place together with a couple of weights and forces. This does not refute all the other lines, and because we did not fix the weight and force in advance, nothing is refuted at all. Therefore, climate projections are just very complicated formulas to be applied at thousands of initial conditions by computers running at tremendous speed. This is post-Feynman science, which only needs trust and consensus.
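
As a toy version of this distinction (an editorial sketch with invented numbers and deliberately simplified vacuum ballistics, not anything from the comment itself), one can tabulate “projections” for many assumed weights and forces and contrast them with a single “prediction” made from measured initial conditions:

```python
import math

G = 9.81  # m/s^2

def landing_range_m(mass_kg, force_n, barrel_m=2.0, angle_deg=45.0):
    """Toy vacuum ballistics: accelerate the ball along the barrel, then fly it ballistically."""
    muzzle_speed = math.sqrt(2.0 * force_n * barrel_m / mass_kg)
    return muzzle_speed ** 2 * math.sin(math.radians(2.0 * angle_deg)) / G

# "Projections": a book of landing places for many assumed weights and forces.
for mass in (5.0, 10.0, 20.0):        # kg, invented
    for force in (1e4, 2e4, 4e4):     # N, invented
        print(f"if mass = {mass:5.1f} kg and force = {force:8.0f} N -> lands at {landing_range_m(mass, force):7.1f} m")

# "Prediction": the weight and force actually realised are measured first, so the
# formula commits to one landing place that the observed shot can confirm or refute.
measured_mass_kg, measured_force_n = 10.0, 2e4
print(f"predicted landing for the measured shot: {landing_range_m(measured_mass_kg, measured_force_n):.1f} m")
```

Only the single committed number can be checked against where the ball actually lands; the table of scenarios, by construction, cannot fail.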

RACookPE1978
Editor
Reply to  Mindert Eiting
April 12, 2017 11:50 pm

Except for the “fact” that the “climate scientists” are not including cannon tube wear, air friction, Coriolis rolling of the earth during flight, air pressure and temperature, air friction properly at all elevations of flight, cross-winds, and the curvature of the earth, the motion of the earth during flight, powder temperature, gun barrel temperature, …..

They are perfectly accurate: For a cannonball fired under perfect conditions in a vacuum on a flat plane.

Reply to  Mindert Eiting
April 13, 2017 10:09 am

Mindert Eiting:

Newton’s formula predicts the landing place but with error. For falsifiability of the theory of the landing place the error must be modelled. To model it successfully the model builder will need samples drawn from the associated study’s statistical population. The “projection” idea dispenses with the need for this population in a logically flawed way.

Pop Piasa
Reply to  Janice Moore
April 12, 2017 7:02 pm

That goes for “Distinguished Seniors”, as well as the rest of the junior “Scientific Community”.

But, we’re talking about a trend birth here.

knr
Reply to  Janice Moore
April 13, 2017 12:57 am

Great shame he is no longer with us, although politically he may have been inclined to be an AGW proponent. He was more than good enough a scientist to call out the BS on his own side and would have had no issue with turning Trenberth over.
We need more like him.

April 12, 2017 4:03 pm

I have no respect for Trenberth despite his knowledge. He has tried to reverse the null hypothesis in order to bolster his climate claims. Bad science!

http://landscapesandcycles.net/trenberth-reverses-null-hypothesis-for-droughts.html

Janice Moore
Reply to  Jim Steele
April 12, 2017 4:07 pm
Gloateus
Reply to  Jim Steele
April 12, 2017 4:35 pm

Not just bad science. Pseudoscience. Anti-science. Government-mandated “science” at its worst.

Right up there with N@zi eugenics.

Jer0me
Reply to  Gloateus
April 12, 2017 4:57 pm

I assume you would include the extensive eugenics research in the US with that?

Pop Piasa
Reply to  Gloateus
April 12, 2017 7:07 pm

Better to draw an analogy from Lysenkoist genetics, maybe?

richardscourtney
Reply to  Gloateus
April 13, 2017 6:16 am

dikranmarsupial:

I refuted Stokes’ attempt to replace the null hypothesis which is part of the scientific method with the definition of null hypothesis adopted by statistics.

He, you and ‘… and then there’s physics’ replied with nonsense so I asked

I correctly stated the null hypothesis used by science in my rebuttal of the attempt by Stokes to replace the scientific null hypothesis with the definition of a null hypothesis used in statistics. You claim my rebuttal is “twaddle”.

Perhaps you self-appointed geniuses seeking to redefine the scientific method will explain how the scientific method had its null hypothesis centuries before statistics existed?

and you have responded

richardscourtney I gave links to two journal papers pointing out that what you suggest is, at best, naive statistical thinking (Gigerenzer calls it “mindless statistics”, which is harsh, but ultimately fair as it is indicative of not having thought about the purpose of the test, which is to enforce self-skepticism), and I also explained why and when it is bad statistical practice. … etc.

Obviously, an ability to read is NOT one of your strengths.

I repeat, Perhaps you self-appointed geniuses seeking to redefine the scientific method will explain how the scientific method had its null hypothesis centuries before statistics existed?

Richard

Reply to  richardscourtney
April 13, 2017 6:31 am

Richard, don’t you know

richardscourtney I gave links to two journal papers pointing out that what you suggest is, at best, naive statistical thinking (Gigerenzer calls it “mindless statistics”, which is harsh, but ultimately fair as it is indicative of not having thought about the purpose of the test, which is to enforce self-skepticism), and I also explained why and when it is bad statistical practice. … etc.

Perhaps you self-appointed geniuses seeking to redefine the scientific method will explain how the scientific method had its null hypothesis centuries before statistics existed?

Obviously, they at best were doing ” naive ‘mindless'” science, Duh!

/sarc

Gloateus
Reply to  Gloateus
April 13, 2017 11:56 am

Thousands of years in the Holocene have been warmer than AD 2016. About 16,000 years during the previous Eemian interglacial were also hotter. A lot. Without benefit of a Neanderthal industrial age.

Reply to  Gloateus
April 13, 2017 12:39 pm

Gloateus April 13, 2017 at 11:56 am

Thousands of years in the Holocene have been warmer than AD 2016. About 16,000 years during the previous Eemian interglacial were also hotter. A lot. Without benefit of a Neanderthal industrial age.

More impressive:
Earth (deep oceans) has been cooling down by at least 15 K since the last peak around 85 million years ago, starting the current ice age when it became cold enough for ice to develop near the poles.
To get out of this ice age the oceans have to warm 5 K or so.
To get to the temperatures the dinosaurs liked, it would have to warm some 10 K above current temps.
So let’s hope the Earth IS warming to get us out of this ice age. Unfortunately there is nothing we can do to help.

Gloateus
Reply to  Gloateus
April 13, 2017 12:52 pm

Ben,

True.

I don’t know if the PETM was warmer than the peak of Cretaceous Period atmospheric heat, but at least for ~55 million years the planet has been cooling. The oceans were hotter during the Cretaceous, thanks to active seafloor spreading and generally warmer conditions. Mean sea level was so “catastrophically” high that the Great Plains harbored such marine reptilian denizens of the (not so) deep as mosasaurs and plesiosaurs.

During the Eocene, the planet started to cool. Then at the Oligocene boundary, ~34 Ma, the present (Cenozoic Era) ice house commenced. South America and Australia were separated from Antarctica by deep ocean channels, and glaciation commenced. After the uplift of the Isthmus of Panama, ~3.0 Ma, the Northern Hemisphere joined the Southern in carrying continental ice sheets.

Reply to  Gloateus
April 13, 2017 2:45 pm

Gloateus April 13, 2017 at 12:52 pm

I don’t know if the PETM was warmer than the peak of Cretaceous Period atmospheric heat, but at least for ~55 million years the planet has been cooling.

Using deep-ocean temperature reconstructions (benthic foraminifera), 84 Ma was ~3 K warmer than the PETM.
The surface must have been considerably warmer.

I have a few geology related questions. Would you help me out?
email is: ben at wtrs dot nl

Nick Stokes
Reply to  Jim Steele
April 12, 2017 6:43 pm

“He has tried to reverse the null hypothesis in order to bolster his climate claims.”
Often repeated here, but it is scientific gibberish. A null hypothesis is just an alternative plausible hypothesis adopted in statistical testing to see if it too could explain the observed. The only people who “try to reverse” a null hypothesis are people here who claim that a failure to reject it somehow proves it.

As far as testing goes, Real Climate now maintains a page where they compare model results with observations. Here is CMIP3:

http://www.realclimate.org/images/cmp_cmip3_2016.png

If the null hypothesis is zero trend, the models are doing a lot better.

Janice Moore
Reply to  Nick Stokes
April 12, 2017 7:08 pm

Re: “failure to reject” the null hypothesis {i.e., that all observed “climate change” is well within the bounds of natural variation, i.e., that human CO2 is not driving climate to shift out of those bounds}

is a mis-statement of the issue.

The null hypothesis is the prima facie case. (presumed true unless disproven)

Thus, the burden of proof lies squarely on the AGWers.

They have an even heavier burden, now — for there is ANTI-correlation evidence which they now must overcome:

CO2 UP. WARMING STOPPED.**

(a.k.a. “the missing heat”)

**

On several different data sets, there has been no statistically significant warming for between 0 and 23 years according to Nick’s criteria. Cl stands for the confidence limits at the 95% level.

The details for several sets are below.

For UAH6.0: Since December 1993: Cl from -0.009 to 1.776
This is 23 years and 3 months.
For RSS: Since October 1994: Cl from -0.006 to 1.768 This is 22 years and 5 months.
….
For Hadsst3: Since May 1997: Cl from -0.015 to 2.078 This is 19 years and 10 months.
….

(Werner Brozek and J. T. Facts, https://wattsupwiththat.com/2017/04/11/la-nina-puzzle-now-includes-february-and-march-data/ )
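
For readers unfamiliar with what a statement like “Cl from -0.009 to 1.776” means in practice, here is a minimal sketch of fitting a trend and a 95% confidence interval to a made-up monthly anomaly series. This is an editorial illustration only; it does not use the UAH, RSS or HadSST3 data, and real analyses like those quoted also correct for autocorrelation, which this sketch ignores:

```python
# Invented monthly anomalies, NOT the UAH/RSS/HadSST3 data; real analyses also
# widen the interval to account for autocorrelation, which this sketch ignores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
months = np.arange(240)                                               # 20 years of monthly data
anomalies = 0.01 * months / 120 + rng.normal(0.0, 0.15, months.size)  # weak trend + noise

fit = stats.linregress(months, anomalies)
trend_per_decade = fit.slope * 120
half_width_95 = 1.96 * fit.stderr * 120

low, high = trend_per_decade - half_width_95, trend_per_decade + half_width_95
print(f"trend = {trend_per_decade:.3f} C/decade, 95% CI [{low:.3f}, {high:.3f}]")
print("CI excludes zero: statistically significant" if low > 0 or high < 0
      else "CI includes zero: no statistically significant warming")
```

With these invented numbers the interval straddles zero, which is exactly the situation the quoted confidence limits describe.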

Nick Stokes
Reply to  Nick Stokes
April 12, 2017 7:23 pm

“The null hypothesis is the prima facie case. (presumed true unless disproven)”
That is faith-based “science”. In fact it has no preferred status. Its use in statistical testing is just to say that if you have hypothesis A, which you are interested in and which could explain the observation, you should also test hypothesis B (the null). If there is a faint chance that the results could be explained by B, then you can’t deduce that the results prove A. But A could still explain the results too.

Butch
Reply to  Nick Stokes
April 12, 2017 7:32 pm

…If the past historical temperature data does not match your models, then change the past historical temperature data to match your models…If the present temperature data does not match your models, then change the present temperature data to match your models ..Presto ….now the temperature models of the future are accurate !! …….D’OH !!

Chris Hanley
Reply to  Nick Stokes
April 12, 2017 7:47 pm

That’s not how Wikipedia defines it, Wiki being an impeccable authority on climate-related matters of course:
’… the term “null hypothesis” is a general statement or default position that there is no relationship between two measured phenomena, or no association among groups …’.
Not just an alternative hypothesis but the default hypothesis — i.e. ’innocent until proved guilty’:
“… rejecting or disproving the null hypothesis—and thus concluding that there are grounds for believing that there is a relationship between two phenomena (e.g. that a potential treatment has a measurable effect)—is a central task in the modern practice of science …”.
The null hypothesis doesn’t imply “zero trend”, it doesn’t imply anything.

Curious George
Reply to  Nick Stokes
April 12, 2017 8:01 pm

Nick, a question of the “burden of proof” returns frequently. I showed that a climate model used a wrong value for a latent heat of water vaporization. Potentially, model results become unreliable after 100 modelled hours. Authors never corrected the error; it was my task to prove that the wrong value actually harmed their results.

The team which spent $50 million (my estimate) on the model development refuses responsibility for correcting a glaring error. Instead, they keep running a faulty model (it is in a CMIP5 ensemble).

Whose responsibility is it to make a correction? Mine, or theirs? That’s what a null hypothesis means.

Javert Chip
Reply to  Nick Stokes
April 12, 2017 8:06 pm

Nick

Just exactly what data are we looking at here? Adjusted (1 or more times) or unadjusted?

Nick Stokes
Reply to  Nick Stokes
April 12, 2017 8:17 pm

Forrest,
“The null hypothesis is not JUST an alternative plausible hypothesis. It is the complement of the hypothesis being tested. Together they exhaust all possibilities.”
Would it were so. But it isn’t at all. I don’t think you know anything about statistical testing. You can never get such a complement. The weakness of a statistical test result is that it can never exclude other hypotheses. In fact, of course, hypotheses are usually not discrete. You can say that the results are compatible with 2 W/m2 forcing, but not zero. But what about 1?

As for Trenberth, what he actually said was:

Given that global warming is “unequivocal”, to quote the 2007 IPCC report, the null hypothesis should now be reversed, thereby placing the burden of proof on showing that there is no human influence. Such a null hypothesis is trickier because one has to hypothesize something specific, such as “precipitation has increased by 5%” and then prove that it hasn’t. Because of large natural variability, the first approach results in an outcome suggesting that it is appropriate to conclude that there is no increase in precipitation by human influences, although the correct interpretation is that there is simply not enough evidence (not a long enough time series). However, the second approach also concludes that one cannot say there is not a 5% increase in precipitation. Given that global warming is happening and is pervasive, the first approach should no longer be used. As a whole the community is making too many type II errors.

And he’s right. The point of having a null hypothesis is that rejecting it should be meaningful. Rejecting that there is zero trend, say, is not meaningful. No serious scientists thought it would be. But rejecting that there is a 2C/century trend, as (not exactly) predicted by IPCC, would be meaningful. Declaring something to be a null hypothesis does not add to your knowledge that it is true. It’s just that finding it false would have more impact.

Gloateus
Reply to  Nick Stokes
April 12, 2017 8:24 pm

Only in your imagination is the null hypothesis “zero trend”.

The null hypothesis is that nothing has happened in the climate during the past (fill in the blank) which requires a special explanation, such as increasing CO2. That there should be an up trend coming out of the LIA down trend is in fact part of the null hypothesis. It cannot be shown that whatever has occurred with global climate since AD 1750, 1850 or 1950 is anything out of the ordinary. Except maybe that the depths of the LIA during the Maunder Minimum were exceptionally cold.

Reply to  Nick Stokes
April 12, 2017 8:58 pm

I think Real Climate is cherry picking. CMIP5 does considerably worse than CMIP3. Here is Steve McIntyre’s version of this plot.

https://twitter.com/ClimateAudit/status/830124744716464128/photo/1?ref_src=twsrc%5Etfw

I would trust Steve more than Schmidt who has a track record of defending the indefensible, particularly in paleoclimate. Basically, it appears that 2017 will return to temperature values near the bottom of the AOGCM range.

Nick Stokes
Reply to  Nick Stokes
April 12, 2017 9:36 pm

CH,
“The null hypothesis doesn’t imply “zero trend”, it doesn’t imply anything.”
If it doesn’t imply anything, it is useless. The only point of having a null hypothesis is that it could be an explanation of your observation.

Gl,
“Only in your imagination is the null hypothesis “zero trend”.”
That is what is usually tested. When people here talk of “no significant warming” they mean that the trend was not significantly different from zero.

CG,
“Whose responsibility is it to make a correction? Mine, or their? That’s what a null hypothesis means.”
No, it doesn’t mean that. The situation that you describe has nothing to do with statistical testing.

FG,
“Nick, there are two problems with your attempt to exculpate Trenberth.”
There is no issue of “exculpation”. Trenberth is merely patiently explaining what should be tested. And the answer is, as he says, a test that will lead to a significant result. A test that does not reject the null hypothesis is a test that failed. It does not prove anything. What T is saying (and I have for years, here) is:
1. If you test the null hypothesis of zero change, you can only get two results:
a) Success – the hypothesis is rejected. But what T is saying, and invoking IPCC etc, is that that is a pointless conclusion. Non-zero change is not controversial. The result supports the IPCC, but they didn’t need it.
b) Failure – no useful result.
2. If you test a hypothesis that people actually believe plausible, like a 0.2C/decade trend, then the results are:
a) Success – the hypothesis is rejected. The IPCC was wrong. That is significant – even people here might gain cheer from it.
b) Failure – the results are consistent with a 0.2C/decade trend. That doesn’t prove anything; they might also be consistent with 1C/decade. Or even zero.

So if you look through all that, only 2a is a valuable result. So test 2 is the one with the null hypothesis worth using.
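[Editorial note: the two tests enumerated above can be run side by side. A hedged sketch with synthetic data, not any particular observational record, testing the same fitted slope against both candidate nulls:]

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
t = np.arange(0, 20, 1 / 12)                          # 20 years, monthly, in years
series = 0.012 * t + rng.normal(0.0, 0.12, t.size)    # synthetic anomalies

fit = stats.linregress(t, series)
se = fit.stderr

def test_null(null_slope_per_year, label):
    """Two-sided t-test of the fitted slope against a hypothesised slope."""
    tval = (fit.slope - null_slope_per_year) / se
    p = 2 * stats.t.sf(abs(tval), df=t.size - 2)
    verdict = "rejected" if p < 0.05 else "not rejected"
    print(f"H0: {label:<18} p = {p:.3f} -> {verdict}")

test_null(0.00, "zero trend")         # case 1 above
test_null(0.02, "0.2 C per decade")   # case 2 above
```

Rejecting the zero-trend null tells you little, and failing to reject the 0.2 C/decade null only shows consistency, which is the point being argued above.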

Reply to  Nick Stokes
April 13, 2017 1:16 am

Burden of proof
it’s such a –
well – burden,
better try putting
the null hypothesis
into reverse,
kinda’ like
tiljandering
or hiding the

decline.

Alcheson
Reply to  Nick Stokes
April 13, 2017 2:00 am

Wow nice one there Nick… glad to see you “guys” have 2016 a full 0.3 degrees [hotter] than the 1998 El Nino warming year. Who can’t fit data to models when you keep changing the data??

Nick Stokes
Reply to  Nick Stokes
April 13, 2017 3:25 am

dpy
“I think Real Climate is cherry picking. CMIP5 does considerably worse than CMIP3.”
I gave CMIP3 because it has a longer period of actual prediction (from 2004). RC give CMIP5 as well, and it seems similar to Steve M’s TAS, though he has an implausible dip at the end. He doesn’t say which data he is plotting. For the first quarter of 2017, the surface average exceeds the record annual average for 2016.

But in any case the main point stands. CMIP5 is also a much better predictor of observed than the “null” hypothesis. Temperatures have risen, and CMIP 5 tracks them well.

My version of the CMIP5 plot, as of last December, is here. Here are the three CMIP5 versions – double click to enlarge. Steve’s is on left, monthly data. RC is centre, annual avg data. Mine is on right, 12-month running mean. Allowing for different smoothing, pretty similar.


richardscourtney
Reply to  Nick Stokes
April 13, 2017 3:25 am

Nick Stokes:

You lie

A null hypothesis is just an alternative plausible hypothesis adopted in statistical testing to see if it too could explain the observed.

NO! You know that is a lie because I corrected it when you tried to promulgate it on WUWT on a previous occasion.

You are attempting to pretend the definition of null hypothesis used by the scientific method should be replaced by the definition of null hypothesis adopted by statistics.

I explain in an above comment that
In all science the Null Hypothesis says it must be assumed a system has not experienced a change unless there is evidence of a change.

Read my explanation and try to learn from it.

And also try to understand the difference between science and pseudoscience.

Science is a method that seeks the closest available approximation to ‘truth’ by seeking evidence that refutes existing understanding and amending or rejecting existing understanding in light of discovered evidence.

Pseudoscience is a method that adopts an existing understanding as being ‘truth’ and seeks evidence which supports that understanding.

Richard

dikranmarsupial
Reply to  Nick Stokes
April 13, 2017 4:00 am

richardscourtney wrote “In all science the Null Hypothesis says it must be assumed a system has not experienced a change unless there is evidence of a change. “

No, while most often the null hypothesis is of “no effect”, assuming that the null hypothesis is of this form is an element of the “null ritual” (i.e. “mindless” application of statistical procedures without understanding their actual purpose and meaning). As Gigerenzer puts it

“First, “null” does not refer to a nil mean difference or zero correlation, but to any hypothesis to be “nullified.””

The correct choice of the null hypothesis depends on what it is you are trying to argue. Effectively the null hypothesis is the “devil’s advocate” hypothesis that you assume to be true a-priori and only proceed with your argument if the observations show the null hypothesis is incorrect, which enforces a degree (often only a small degree) of self-skepticism that has become part of scientific method. If you are arguing that there is no global warming, and you adopt a null hypothesis of no trend, then there is no self-skepticism imposed by the test, as you are assuming a-priori that you are right. Most of the time we are arguing for an effect, so a “no effect” null hypothesis is appropriate, but not always.

Sadly this sort of thing is widespread due to the “cookbook” approach to statistics often taught to non-specialists. I can recommend Grant Foster’s book for those wishing to take a more principled approach, which teaches the underlying principles, rather than just a “how to” approach.

Nick Stokes
Reply to  Nick Stokes
April 13, 2017 4:18 am

Richard
“In all science the Null Hypothesis says it must be assumed a system has not experienced a change unless there is evidence of a change.”
You give no authority for this but loud assertion. I do not believe there is such a usage in all science. In fact, I think it is twaddle.

...and Then There's Physics
Reply to  Nick Stokes
April 13, 2017 4:24 am

In fact, I think it is twaddle.

In fact, I’m pretty certain that it is.

hunter
Reply to  Nick Stokes
April 13, 2017 5:16 am

Yes, that Richard Feynman guy is such a big oil paid crank.

richardscourtney
Reply to  Nick Stokes
April 13, 2017 5:24 am

dikranmarsupial, Nick Stokes and …and Then There’s Physics:

I correctly stated the null hypothesis used by science in my rebuttal of the attempt by Stokes to replace the scientific null hypothesis with the definition of a null hypothesis used in statistics. You claim my rebuttal is “twaddle”.

Perhaps you self-appointed geniuses seeking to redefine the scientific method will explain how the scientific method had its null hypothesis centuries before statistics existed?

Richard

dikranmarsupial
Reply to  Nick Stokes
April 13, 2017 6:04 am

richardscourtney I gave links to two journal papers pointing out that what you suggest is, at best, naive statistical thinking (Gigerenzer calls it “mindless statistics”, which is harsh, but ultimately fair as it is indicative of not having thought about the purpose of the test, which is to enforce self-skepticism), and I also explained why and when it is bad statistical practice. One of those papers cites R.A. Fisher as disagreeing with you, and he is most often credited as the originator of statistical hypothesis testing. What more do you want? How about this: pop over to the statistics Stack Exchange and ask there whether what you propose is correct, and see what the statisticians there tell you.

Just because something is common practice in science, doesn’t mean it is good statistical practice.

richardscourtney
Reply to  Nick Stokes
April 13, 2017 6:21 am

dikranmarsupial:

My reply to your post appeared in the wrong place. Hopefully this copy is in the right place.

I refuted Stokes’ attempt to replace the null hypothesis which is part of the scientific method with the definition of null hypothesis adopted by statistics.

He, you and ‘… and then there’s physics’ replied with nonsense so I asked

I correctly stated the null hypothesis used by science in my rebuttal of the attempt by Stokes to replace the scientific null hypothesis with the definition of a null hypothesis used in statistics. You claim my rebuttal is “twaddle”.

Perhaps you self-appointed geniuses seeking to redefine the scientific method will explain how the scientific method had its null hypothesis centuries before statistics existed?

and you have responded

richardscourtney I gave links to two journal papers pointing out that what you suggest is, at best, naive statistical thinking (Gigerenzer calls it “mindless statistics”, which is harsh, but ultimately fair as it is indicative of not having thought about the purpose of the test, which is to enforce self-skepticism), and I also explained why and when it is bad statistical practice. … etc.

Obviously, an ability to read is NOT one of your strengths.

I repeat,
Perhaps you self-appointed geniuses seeking to redefine the scientific method will explain how the scientific method had its null hypothesis centuries before statistics existed?

Richard

dikranmarsupial
Reply to  Nick Stokes
April 13, 2017 6:35 am

BTW “Perhaps you self-appointed geniuses seeking to redefine the scientific method will explain how the scientific method had its null hypothesis centuries before statistics existed?” is shifting the goal posts somewhat, as richardscourtney originally objected to Nick’s comment “A null hypothesis is just an alternative plausible hypothesis adopted in statistical testing to see if it too could explain the observed.“ (emphasis mine).

I’d genuinely be interested in references to null hypothesis being used in a scientific (but non-statistical) context prior to, say 1580 (i.e. a century before the law of large numbers), not that it would be relevant to the discussion. It might be useful as a hypothesis for the origin of the “null ritual”.

BTW the null ritual is not the only statistical misunderstanding common in science; you can add to that treating the p-value as the probability that the null hypothesis is false, and thinking there is a 95% probability that the true value of a statistic lies in a 95% frequentist confidence interval, both of which crop up rather more often than they should.

dikranmarsupial
Reply to  Nick Stokes
April 13, 2017 6:40 am

richardscourtney Do you agree that the null hypothesis should not automatically be taken to be the hypothesis of no effect in statistical hypothesis testing?

Reply to  Nick Stokes
April 13, 2017 7:13 am

Nick writes

If the null hypothesis is zero trend, the models are doing a lot better.

But we’re warming and we’ve been warming since before anthropogenic CO2 could reasonably be considered responsible. Therefore the null hypothesis would be warming at that rate, not zero trend.

Reply to  Nick Stokes
April 13, 2017 7:18 am

If the null hypothesis is zero trend, the models are doing a lot better.

Only because they don’t consider that the null hypothesis might not be a flat line.
They forget that the bulk storage of thermal energy on the planet moves, and that an asymmetrical surface profile with an asymmetrical land distribution almost guarantees that the null hypothesis is not a flat line.

MarkW
Reply to  Nick Stokes
April 13, 2017 8:31 am

The NULL hypothesis is never zero trend.

MarkW
Reply to  Nick Stokes
April 13, 2017 8:33 am

The belief that absent changes in CO2, nothing would change is not the null hypothesis, it is pseudo science.

Reply to  MarkW
April 13, 2017 9:03 am

MarkW:
Right. In philosophical terms the faulty belief of the IPCC climatologists is an application of the reification fallacy. On the really existing Earth, the global temperature fluctuates but on the reified Earth the global temperature, given the atmospheric CO2 concentration, is a constant.

richardscourtney
Reply to  Nick Stokes
April 13, 2017 9:27 am

dikranmarsupial:

You ask me the irrelevant question

richardscourtney Do you agree that the null hypothesis should not automatically be taken to be the hypothesis of no effect in statistical hypothesis testing?

The “hypothesis of no effect in statistical hypothesis testing” is NOT RELEVANT.
At issue is the attempt by Stokes (that you are supporting) to replace the null hypothesis of the scientific method with the definition of a null hypothesis adopted in statistics.

I repeat,
In all science the Null Hypothesis says it must be assumed a system has not experienced a change unless there is evidence of a change.

Some good science uses statistics but most does not.
Here are some examples of science which did not use any statistics.
Newton I, ‘Philosophiæ Naturalis Principia Mathematica’, 1687
Darwin C, ‘On the Origin of Species by Means of Natural Selection’, 1859
Einstein A, ‘On the Electrodynamics of Moving Bodies’ (special relativity), Annalen der Physik, 1905

Richard

kendo2016
Reply to  Nick Stokes
April 13, 2017 9:32 am

Wasn’t it Humpty Dumpty (Lewis Carroll, Through the Looking-Glass) who said, ‘When I use a word, it means just what I choose it to mean – neither more nor less.’
‘The question is,’ said Alice, ‘whether you can make words mean so many different things.’

richardscourtney
Reply to  Nick Stokes
April 13, 2017 9:38 am

dikranmarsupial:

I assume you posted your irrelevant question as a ‘red herring’. I have twice attempted to provide a rejection of it but both those attempts have vanished.

This is a statement that I hope will not vanish and that says I did not ignore your ‘red herring’.

Richard

Gloateus
Reply to  Nick Stokes
April 13, 2017 10:00 am

Nick,

The null hypothesis in the case of so-called “climate science” is that earth’s climate has not behaved any differently since man-made GHGs started rising than it did before such increases. This is clearly the case, since the climate warmed before WWII without an increase in GHGs, then cooled for the first 32 years after WWII, while CO2 rose. From c. 1977 to 1996, the climate did apparently warm while CO2 continued rising, but the correlation was purely accidental, since for the past 20 years, there has been no significant warming and probably actually cooling.

The alternative hypothesis is CACA, ie Catastrophic Anthropogenic Climate Alarmism. Since no significant effect of man-made GHGs has been observed, the null hypothesis can’t be rejected.

And the alternative hypothesis was born falsified. Its adherents in the first half of the 20th century believed that rising CO2 should produce a warmer world (which they considered a good thing), but it didn’t. Instead, earth cooled from c. 1940 to c. 1977.

Reply to  Nick Stokes
April 13, 2017 10:07 am

Yes, Nick the plots are similar. McIntyre’s point was tweeted earlier in the link.

Yes. In March, climate academics loved monthly data; now they prefer annual data.

In fact, the recent El Nino spike is going away and we can already see that nothing has really changed. Observations, especially for TLT, are way below the model mean. In any case, as time goes on, it will get harder and harder to rationalize the trend difference between AOGCMs and the data.

Gloateus
Reply to  Nick Stokes
April 13, 2017 10:11 am

DPY,

The book cookers at HadCRU, NASA, NOAA and BEST will just keep changing the “data”.

Let’s hope that Trump shuts down GISS, or at least its GASTA-inventing and GIGO climate modeling functions, to reduce the number of liars.

dikranmarsupial
Reply to  Nick Stokes
April 13, 2017 10:31 am

richardscourtney wrote “I assume you posted your irrelevant question as a ‘red herring’.”

The question wasn’t irrelevant or a red-herring. You objected to Nick’s comment (rather intemperately calling it a “lie”)

A null hypothesis is just an alternative plausible hypothesis adopted in statistical testing to see if it too could explain the observed.

(emphasis mine)

I pointed out that actually Nick is right, and assuming that the null hypothesis must be one of “no effect” is “mindless statistics” (in Gigerenzer’s words) known as the “null ritual”.

You then made the challenge:

Perhaps you self-appointed geniuses seeking to redefine the scientific method will explain how the scientific method had its null hypothesis centuries before statistics existed?

Now this is a clear goalpost shift as Nick was clearly talking about statistical testing, so if anything is a “red herring”, it is your challenge.

Asking

richardscourtney Do you agree that the null hypothesis should not automatically be taken to be the hypothesis of no effect in statistical hypothesis testing?

Is obviously highly relevant as it is asking you to comment on the topic of statistical testing, which was the subject of your original complaint. Obviously you can’t say that you do agree, because that would be admitting that what Nick wrote was correct (maybe ironically, as richardscourtney would say, “an ability to read is NOT one of your strengths.”). Of course you can’t say “no” either, as you then have to prove Gigerenzer et al. wrong. So what is the solution? Obviously declare the question irrelevant and a red herring and run away.

Of course if you did answer, we could then discuss the evidence that “scientific method had its null hypothesis centuries before statistics existed?” of which there has been none so far.

richardscourtney
Reply to  Nick Stokes
April 13, 2017 10:46 am

TimTheToolMan:

You point out

Nick writes

If the null hypothesis is zero trend, the models are doing a lot better.

But we’re warming and we’ve been warming since before anthropogenic CO2 could reasonably be considered responsible. Therefore the null hypothesis would be warming at that rate, not zero trend.

I take your point, and if you accept Stokes’ assertion then you are right, but Stokes’ assertion is merely an example of his earlier rejection of the scientific method where he wrote

A null hypothesis is just an alternative plausible hypothesis adopted in statistical testing to see if it too could explain the observed.

Any suggestion of “an alternative plausible hypothesis” being the null hypothesis is pure pseudoscience and deserves to be rejected, not discussed.

THE null hypothesis used in science is
In all science the Null Hypothesis says it must be assumed a system has not experienced a change unless there is evidence of a change.

Science is NOT statistics.
Much science does not use statistics. Statistics is merely a tool that some science uses.

Richard

Nick Stokes
Reply to  Nick Stokes
April 13, 2017 11:43 am

dpy,
“McIntyre’s point was tweeted earlier in the link.”
It’s very hard to follow what McI is doing. Do you know what data set he’s plotting? Do you know who provided that dot which is said to be a 2017 estimate?

But as for his “point”, RC has a long series of CMIP/data comparisons, most recently in 2015. They all use annual data. The case against monthly data for plotting is shown by SM’s graph, which shows a 2016 peak and then a blur. What it disguises is that the very recent months (we like monthly, OK?) are very high. There is no indication that “nothing has changed really”. As I said, the average so far for 2017 is higher than for the year 2016.

Gloateus
Reply to  Nick Stokes
April 13, 2017 1:11 pm

Nick,

How could zero trend possibly be the null hypothesis? The most casual observer can plainly see that earth’s climate changes constantly, so that there must always be a trend from the shortest climatic unit, ie 30 years (maybe slight warming), to the longest, ie 4.5 billion years (definitely cooling). If however you start your trend at Snowball Earth events, then earth has been in a warming trend at least since the last one, which ended some 635 Ma.

Hence:

30 years: possible slight warming
300 years: warming
3000 years: cooling
30,000 years: cooling
300,000 years: possibly flat to cooling
3,000,000 years: cooling
30,000,000 years: cooling
300 million years: possibly flat to warming, but flattening as the Cenozoic ice age drags on
3 billion years: probably warming.

Reply to  Nick Stokes
April 13, 2017 3:13 pm

This focus on the null hypothesis is misplaced. Just like John Cook, Trenberth makes a simple statement focused on the wrong question. It’s a bait and switch.

There are thousands of questions in climate change. One of them is: “are there anthropogenic causes of global warming?” Setting aside the “correlation does not equal causation” fallacy, the answer is most likely “Yes.”

But Trenberth says,

“So why does the science community continue to do attribution studies and assume that humans have no influence as a null hypothesis?”

“Attribution science” is not about the question of whether warming is anthropogenic. It’s about how much warming there is, and what the contributing factors to that warming are (both man-made and natural, positive or negative).

In addition, many people use “attribution science” to mean: are weather patterns made worse by warming? In other words, can you attribute a “natural disaster” to anthropogenic causes (natural or man-made)?

Climate science still needs more transparency and it still needs more scientific rigor — particularly with respect to the “official data” and some very shaky science. But that is only one of a thousand questions that need to be answered with respect to the value and cost of reducing emissions to a given target.

It’s not even the most important one.

Nick Stokes
Reply to  Nick Stokes
April 13, 2017 4:22 pm

Gloateus,
“How could zero trend possibly be the null hypothesis.”
How could a non-zero trend be? A trend of just 0.1°C/century is 100°C in 100K years. The point of a null hypothesis is that if it could explain the results, then no further explanation is required. If you adopt a non-zero trend as null, then you have to explain what happens to it.

In fact the usual null for trend testing is a trendless stochastic model, and the test is based on the fitted variability of that model. You could include some secular variation, but then you’d have to explain it, and why you think it should be the default.

Put another way, it’s no use testing for the existence of trend by comparing with a model with trend. You’d end up like the scholar who concluded that Shakespeare’s plays were not written by him, but by someone with the same name.
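[Editorial note: a minimal sketch of the “trendless stochastic model” approach described above, using an AR(1) process as one common choice of null and entirely synthetic data. The question asked is whether the observed trend is larger than trends the trendless model produces by chance:]

```python
import numpy as np

rng = np.random.default_rng(2)

def ols_slope(y):
    """Ordinary least-squares slope against an evenly spaced time index."""
    t = np.arange(y.size, dtype=float)
    t -= t.mean()
    return np.dot(t, y - y.mean()) / np.dot(t, t)

# "Observed" series: synthetic monthly anomalies with persistence and a drift.
n = 480                                           # 40 years of monthly data
noise = np.zeros(n)
for i in range(1, n):
    noise[i] = 0.6 * noise[i - 1] + rng.normal(0.0, 0.1)
obs = 0.0015 * np.arange(n) + noise

# Null model: trendless AR(1) with lag-1 correlation and variance taken from
# the detrended data (a crude fit; proper treatments estimate these jointly).
resid = obs - obs.mean() - ols_slope(obs) * (np.arange(n) - (n - 1) / 2)
phi = np.corrcoef(resid[:-1], resid[1:])[0, 1]
sigma = resid.std() * np.sqrt(1.0 - phi ** 2)

def ar1_series():
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = phi * x[i - 1] + rng.normal(0.0, sigma)
    return x

null_slopes = np.array([ols_slope(ar1_series()) for _ in range(2000)])
p = np.mean(np.abs(null_slopes) >= abs(ols_slope(obs)))
print(f"fraction of trendless AR(1) runs with a trend at least this large: {p:.3f}")
```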

Reply to  Nick Stokes
April 13, 2017 5:12 pm

Nick writes

How could a non-zero trend be?

The idea of the null hypothesis is “no change”, not “no changes”. If the “before CO2” situation was warming, then to test whether CO2 has had an impact you need to compare it to the “before CO2” situation, not some other situation.

Reply to  TimTheToolMan
April 13, 2017 9:06 pm

The ability of participants in this thread to reach a common conclusion is hampered by the failure of these participants to identify what they mean by the term “null hypothesis”. According to one source it is “the hypothesis that there is no significant difference between specified populations, any observed difference being due to sampling or experimental error.” Under this definition, for global warming climatology the null hypothesis is an undefined concept pending identification of the population underlying the model.

Gloateus
Reply to  Nick Stokes
April 13, 2017 5:25 pm

Nick Stokes April 13, 2017 at 4:22 pm

I’m having trouble explaining the scientific method to you. Prolonged exposure to “consensus” garbage has apparently warped your analytic powers.

1) There is in fact no trend during the interval of supposedly unusually increasing CO2, hence no human signal at all. Since WWII, when CO2 took off, GASTA has been correlated with cooling for by far the longest period (~32 years), then briefly with rising temperature, then for a longer period with flat to slight cooling, depending upon whether the “data” are the totally bogus “surface” sets or the more reliable satellite and balloon series.

2) Even had there been warming for the whole ~72 years since VE and VJ Days, there would still be no significant trend, since whatever warming has occurred is well within normal bounds for the Holocene.

Thus there is zero, zip, nada, zilch footprint of human activity in the actual record. Hence, the null hypothesis of nothing out of the ordinary cannot be rejected. Indeed, it must be accepted as valid.

Nick Stokes
Reply to  Nick Stokes
April 13, 2017 6:00 pm

TTTM,
‘you need to compare it to the “before CO2” situation, not some other situation’
No, you need to formulate a null hypothesis. And that needs to be one that, if deemed to be consistent with the data, means that no further explanation is needed. That is the “null” in null hypothesis. If your hypothesis is that there would be a trend during the period, then you need to explain why. Just saying there was a trend before won’t do. If you can’t explain why, it’s not a proper null.

Suppose we did get to a situation where there was a 2°C rise for several centuries, and there had been such a trend for some decades before. On your basis, you could never say that it was significant.

Reply to  Nick Stokes
April 13, 2017 8:43 pm

Nick Stokes:
According to one source, the “null hypothesis” is the hypothesis that “there is no significant difference between specified populations, any observed difference being due to sampling or experimental error.” Under this definition of “null hypothesis,” the notion of a “null hypothesis” is inapplicable to global warming climatology as no population underlies the associated models.

Gloateus
Reply to  Nick Stokes
April 13, 2017 6:09 pm

Nick,

Once more:

Null hypothesis: Nothing has happened in the past 50, 100 or 150 years the least bit different from what has been repeatedly observed in earth’s climate during the Holocene or any other period of its geologic history.

Alternative hypothesis: Earth has warmed more, or had more extreme weather, or something, that is out of the ordinary.

Since there is no evidence whatsoever for the alt-hy, then the null hypothesis not only can’t be rejected, but is apparently true on its face.

Reply to  Nick Stokes
April 13, 2017 6:13 pm

Nick writes

If you adopt a non-zero trend as null, then you have to explain what happens to it.

Yes. You do. And you need to be able to understand that trend before trying to understand how CO2 has impacted it, because if you can’t explain the trend before CO2 impacts, you can’t explain what CO2 did to it. That pretty much sums up our situation today. It’s all about the CO2 impacts when we don’t actually understand what drives our climate.

On the one hand AGW wants CO2 to have a large sensitivity to explain historical events, such as getting out of ice ages, by making CO2 a control knob; but on the other hand CO2 has been measured to have a lower sensitivity today.

All this points to not actually understanding what drives climate, and to the AGW assumptions about CO2’s role not fitting all the evidence in anything other than an apologetic, compromised, contorted way.

Reply to  Nick Stokes
April 13, 2017 6:19 pm

Nick goes on to say

If your hypothesis is that there would be trend during the period, then you need to explain why.

It’s not enough to wave away the trend prior to the period as being “natural variability” so you can use a “no changes” null hypothesis. If you can’t explain specifically what caused the prior trend, then you can’t know whether the trend would have continued during the period of CO2, and the only null you can reasonably use is “no change” and not “no changes”.

Using “no changes”, i.e. zero trend, is a biased argument that does nothing but make the models look better.
If you use a “no change” null then the models look poor.

richardscourtney
Reply to  Nick Stokes
April 14, 2017 12:50 am

dikranmarsupial:

I assume your post at April 13, 2017 at 10:31 am is deliberately mischievous promotion of your ‘red herring’ because I refuse to believe you are actually as stupid as you are asserting yourself to be.

As I said, arguments about the nature of the myriad null hypotheses adopted in statistics are NOT RELEVANT to the null hypothesis used in the scientific method. Trenberth proposed reversing the SCIENTIFIC null hypothesis and Stokes tried to justify that by providing the deliberate lie that there is no scientific null hypothesis but only the myriad null hypotheses adopted in statistics.
As I said, he knows that attempted justification is a lie because I had corrected him when he had previously made a similar attempt to justify Trenberth’s proposal to reject the scientific method.

But you claim I was ‘moving the goal posts’. NO! I was putting the goal posts back where they belong; i.e. in the scientific method.

I explained that the natures of null hypotheses adopted in statistics are not relevant to the null hypothesis of the scientific method. I observed your inability to understand that, and you are still refusing to understand it because you still want to discuss the irrelevant statements of statistical null hypotheses by Gigerenzer et al. (And, contrary to your assertion, I don’t need to say those statements are “wrong” because THEY ARE NOT RELEVANT. Similarly, the use of baking powder in cooking is not relevant, so you don’t need to “prove” what Mary Berry says about that is “wrong”.)

In light of your inability to understand my explanation, I said it is clear that your reading ability “is NOT one of your strengths”. You have objected to my saying that so I withdraw it and replace it with the other possible reason for your inability to understand my explanation; viz. you are too stupid to understand that the natures of null hypotheses adopted in statistics are not relevant to the null hypothesis of the scientific method.

Richard

dikranmarsupial
Reply to  Nick Stokes
April 14, 2017 5:39 am

richardscourtney writes

“As I said, arguments about the nature of the myriad null hypotheses adopted in statistics are NOT RELEVANT to the null hypothesis used in the scientific method. Trenberth proposed reversing the SCIENTIFIC null hypothesis and Stokes tried to justify that by providing the deliberate lie that there is no scientific null hypothesis but only the myriad null hypotheses adopted in statistics.”

Disagreeing with you doesn’t make it a lie.

“I explained that the natures of null hypotheses adopted in statistics are not relevant to the null hypothesis of the scientific method.”

But given the discussion was specifically of statistical comparison of the model runs against the observations, it is the statistical null hypothesis that is relevant to the discussion. The discussion of scientific null hypotheses *is* a red-herring, distracting from the discussion of the correct procedure for performing that test – a goalpost shift.

“You have objected to my saying that so I withdraw it and replace it with the other possible reason for your inability to understand my explanation; viz. you are too stupid to understand that the natures of null hypotheses adopted in statistics are not relevant to the null hypothesis of the scientific method.”

Ah yes, the usual way of trying to avoid rational discussion – being needlessly insulting. Yawn. Sorry, I am not going to rise to the bait and respond in kind.

You haven’t even provided evidence that this is a standard usage of “null hypothesis” in science (as opposed to the statistical sense). You claim

“Perhaps you self-appointed geniuses seeking to redefine the scientific method will explain how the scientific method had its null hypothesis centuries before statistics existed?”

O.K., provide evidence to back up this claim. Provide verifiable references to establish that the phrase “null hypothesis” was used in a scientific (but non-statistical) context prior to, say, 1580 (i.e. a century before the law of large numbers). If you can’t, admit you were wrong.

dikranmarsupial
Reply to  Nick Stokes
April 14, 2017 7:43 am

The N-gram viewer on Google Books gives essentially zero hits for “null hypothesis” prior to 1930, which oddly enough is around the time that RA Fisher coined the phrase for its statistical usage.

The Oxford English Dictionary only gives the (purely) statistical definition for “null hypothesis”, giving Fisher’s 1935 book on the design of experiments as the earliest quote.

null hypothesis n. Statistics a hypothesis that is the subject of a significance test, esp. the hypothesis that there is no actual difference between specified populations (any apparent difference being due to sampling or experimental error).

This suggests to me that richardscourtney’s claim

You are attempting to pretend the definition of null hypothesis used by the scientific method should be replaced by the definition of null hypothesis adopted by statistics.

In all science the Null Hypothesis says it must be assumed a system has not experienced a change unless there is evidence of a change.

is twaddle, as Nick suggests

You give no authority for this but loud assertion. I do not believe there is such a usage in all science. In fact, I think it is twaddle.

The ball is in your court Richard. Provide verifiable references that back up your assertion (particularly that this usage predates the statistical usage), or admit you were wrong.

richardscourtney
Reply to  Nick Stokes
April 14, 2017 10:42 am

dikranmarsupial:

The ball is NOT in my court. It never was.
You are trying to claim that a fundamental principle of the scientific method does not exist.
I explained the scientific null hypothesis does exist and why it does.

The present situation came about as follows.
1.
Trenberth asserted that the scientific null hypothesis should be reversed.
2.
Stokes attempted to support that by repeating his lie that there is no scientific null hypothesis because there is only the version of a null hypothesis used in statistics, where anything can be a null hypothesis; n.b. Stokes’ falsehood is a lie because he was corrected on it previously.
3.
I pointed out Stokes’ lie (It is a silly lie because if the scientific null hypotheses could be anything then Trenberth could not have called for reversal of it).
4.
You joined in with ‘red herrings’ about the statistical null hypotheses.
5.
You are being (deliberately?) stupid in continuing to wave your ‘red herrings’.

Get back when you have an explanation of why you are trying to replace the scientific method with your pseudoscience.

Richard

Reply to  richardscourtney
April 14, 2017 12:58 pm

In common usage, the “null hypothesis” is associated with the phenomenon of sampling error. As a statistical population is not a concept for global warming climatology the notion of a “null hypothesis” does not apply.

dikranmarsupial
Reply to  Nick Stokes
April 14, 2017 10:54 am

richardscourtney wrote “I explained the scientific null hypothesis does exist and why it does.”

So why can’t you provide a verifiable reference showing that “null hypothesis” has a non-statistical usage in scientific method? I looked and couldn’t find one. I suspect that no such usage exists, and that it is entirely your invention. Go on, prove me wrong, produce the reference. If you can’t, you owe Nick an apology.

dikranmarsupial
Reply to  Nick Stokes
April 14, 2017 11:10 am

Richardscourtney wrote:

Trenberth asserted that the scientific null hypothesis should be reversed.

No. Trenberth said

“Humans are changing our climate. There is no doubt whatsoever,” said Trenberth. “Questions remain as to the extent of our collective contribution, but it is clear that the effects are not small and have emerged from the noise of natural variability. So why does the science community continue to do attribution studies and assume that humans have no influence as a null hypothesis?”

There is no evidence there that Trenberth is using “null hypothesis” in anything other than the usual statistical usage. Note he is talking about whether the anthropogenic effects have emerged from the noise of natural variability, a classic example of null-hypothesis statistical testing. Trenberth makes no mention of scientific method, so I doubt it is meant in the sense Richard claims (especially since I suspect that Richard made it up in the first place, as demonstrated by his inability/unwillingness to provide a verifiable reference).

FWIW I disagree with Trenberth as well, the null hypothesis depends on the argument you are trying to make, so as to enforce an element of self-skepticism. But the idea that he is violating some established non-statistical definition of the “null hypothesis” is twaddle (as Nick suggests), as there doesn’t seem to be any evidence for the existence of such a usage.

Reply to  Nick Stokes
April 16, 2017 4:09 am

Reading Nick Stokes quote pseudoscience to justify pseudoscience is amusing.

Reply to  Nick Stokes
April 17, 2017 7:00 am

To add to an argument Nick made earlier, he wrote

Suppose we did get to a situation where there was a 2°C rise for several centuries, and there had been such a trend for some decades before. On your basis, you could never say that it was significant.

If the question was “are we warming?” then the null would be an assumption of no warming and the comparison would show warming. In this case we are warming and the question is whether CO2 has impacted the warming.

To answer that question you don’t compare to no warming; that’s the wrong null. You compare to the known amount of “non-CO2” warming, which climate science can’t tell us. So we do the next best thing and use the rate of warming that was seen in the previous “some decades before”.
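[Editorial note: a hedged sketch of the comparison described above: estimate the warming rate over a “before CO2” baseline window, then test whether the later trend differs from that rate rather than from zero. The dates and data here are synthetic placeholders, not any particular record:]

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Synthetic annual anomalies, 1900-2016, with steady background warming plus noise.
years = np.arange(1900, 2017)
anoms = 0.007 * (years - 1900) + rng.normal(0.0, 0.1, years.size)

early = years < 1950          # "before CO2 could reasonably be responsible"
late = years >= 1950

fit_early = stats.linregress(years[early], anoms[early])
fit_late = stats.linregress(years[late], anoms[late])

# Two-sided test of H0: the late trend equals the early (baseline) trend,
# rather than H0: the late trend equals zero.
se = np.hypot(fit_early.stderr, fit_late.stderr)
tval = (fit_late.slope - fit_early.slope) / se
df = early.sum() + late.sum() - 4
p = 2 * stats.t.sf(abs(tval), df=df)

print(f"early trend {fit_early.slope*100:.2f}, late trend {fit_late.slope*100:.2f} C/century")
print(f"p-value against the 'same rate as before' null: {p:.3f}")
```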

Chris Hanley
April 12, 2017 4:06 pm

Apart from the attribution problem, they ‘tune’ their models to made-up historical data anyway.

Janice Moore
Reply to  Chris Hanley
April 12, 2017 4:09 pm

+1

Pop Piasa
Reply to  Chris Hanley
April 12, 2017 7:14 pm

I’d say it’s more about tweaking history to support the models when you get down to it.

April 12, 2017 4:11 pm

The part of scientific method that Trenberth and his compatriots fear the most is the concept of falsification, where just one failed test is sufficient to disprove a hypothesis. Anyone who thinks that 3C from doubling CO2 is anything but a hypothesis (a sensitivity of 0.8C per W/m^2) requires some remedial science training.

Janice Moore
Reply to  co2isnotevil
April 12, 2017 4:13 pm

Yep.

Since such a guess is NOT falsifiable (in a real world, realistic, experiment), it does not even rise to the level of a useful hypothesis.

Pure conjecture.

Reply to  Janice Moore
April 12, 2017 4:17 pm

Janice,
The absurdly high sensitivity of 0.8C per W/m^2 of forcing is readily falsifiable. All you need to do is measure the sensitivity to 1 W/m^2 of incremental solar energy (after reflection) since the IPCC defines this to be equivalent to 1 W/m^2 of forcing.

Reply to  Janice Moore
April 14, 2017 10:13 pm

Right!

Rick C PE
Reply to  co2isnotevil
April 12, 2017 6:14 pm

Actually, to be proper science, a failed test should also be replicable. Einstein engaged in a bit of hyperbole when he said it would only take one experiment to prove him wrong. It would have taken at least 2 or 3 to confirm that the first was not a fluke. Not that CAGW has not failed multiple tests already – multiple independent satellite and balloon data sets have already falsified the GCMs.

Reply to  Rick C PE
April 13, 2017 9:21 am

No, Einstein wasn’t wrong. It is still the one single experiment; it just has to be repeated 2 or 3 times.

Reply to  Matt Bergin
April 13, 2017 9:44 am

No, Einstein wasn’t wrong. It is still the one single experiment; it just has to be repeated 2 or 3 times.

iirc the telegram said “They moved”

Reply to  co2isnotevil
April 12, 2017 10:05 pm

Only if the prediction is deterministic is one failed test sufficient to disprove the hypothesis. For modern global warming climatology, however, predictions are not made. Thus there is no hypothesis!

Reply to  Terry Oldberg
April 12, 2017 10:30 pm

Terry,
The claimed sensitivity makes a prediction about how the planet responds to changes in post-albedo solar energy, and this forms the path of falsification. Since there is a predictable difference in solar insolation per slice of constant latitude, and since at mid latitudes there is no net transfer of energy from the equator when averaged across a whole number of years, we can determine the exact sensitivity by comparing the temperature difference on either side of the mean slice and dividing it by the difference in yearly insolation between those slices.
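[Editorial note: a minimal sketch of the latitude-slice idea as described above. The band values are entirely hypothetical and only illustrate the arithmetic; real values would come from gridded satellite data averaged over whole years, under the commenter’s no-net-transport assumption:]

```python
# Hypothetical annual-mean values for two mid-latitude bands (illustrative only).
band_a = {"lat_deg": 35.0, "temp_K": 290.0, "insolation_W_m2": 255.0}
band_b = {"lat_deg": 45.0, "temp_K": 284.0, "insolation_W_m2": 235.0}

dT = band_a["temp_K"] - band_b["temp_K"]
dS = band_a["insolation_W_m2"] - band_b["insolation_W_m2"]

# Degrees per W/m^2 of absorbed insolation, under the stated assumption.
sensitivity = dT / dS
print(f"slice-to-slice sensitivity ~ {sensitivity:.2f} C per W/m^2")
```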

Reply to  co2isnotevil
April 13, 2017 1:52 pm

co2isnotevil:

I’m having trouble decoding your message. If you have a link to a more detailed explanation please provide same.

Reply to  Terry Oldberg
April 14, 2017 7:53 am

Terry,
The IPCC defines forcing as a change in net flux at the top of the troposphere. CO2 cannot actually force the system; only the Sun is capable of doing this. What they do then is say that doubling CO2 is EQUIVALENT to 3.7 W/m^2 more solar energy after reflection by clouds. They then claim (without justification) a sensitivity factor of 0.8C +/- 0.4C per W/m^2 of forcing, and when you multiply 3.7 times 0.8 you get the 3C warming that they claim.

They do several things to obfuscate the absurdity of the claimed sensitivity. First, they define forcing after reflection, which makes the apparent negative-feedback-like effect from albedo disappear. Second, they express sensitivity as degrees per W/m^2, which is about as nonlinear a metric as you can get since emissions vary as temperature raised to the fourth power. Third, they claim CO2 is a forcing influence, when in fact it represents a change to the system, not a change to the input energy. Fourth, they ‘normalize’ the change in temperature to the equivalent forcing from doubling CO2.

There’s a plot that shows how the planet responds to incremental solar energy here:
https://wattsupwiththat.com/2017/01/05/physical-constraints-on-the-climate-sensitivity/
And the sensitivity is the slope of this relationship which is only about 0.3C per W/m^2 and not the 0.8C claimed. If instead, we plot post albedo solar input vs. temperature, rather than output emissions vs. temperature, the sensitivity becomes only 0.19C per W/m^2.

A more reasonable sensitivity metric is W/m^2 of surface emissions per W/m^2 of forcing. This is very linear as seen below and slightly less than 1 W/m^2 of surface emissions per W/m^2 of input forcing:
http://www.palisad.com/co2/sens/pi/se.png

When the change in surface emissions is converted into a change in temperature, the result from this plot is about 0.19C per W/m^2.

They will try and confuse you by bringing up the non radiant energy passing between the surface and atmosphere. This goes both ways and whatever effect it has is already embodied by the temperature, thus the net energy leaving the surface is the SB emissions at the surface temperature.
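[Editorial note: the arithmetic walked through above can be laid out explicitly. The figures below are simply the numbers quoted in the comment, not independently verified:]

```python
# Figures as quoted in the comment above (not independently verified here).
forcing_2xco2 = 3.7          # W/m^2, IPCC equivalent forcing for doubled CO2

ipcc_sensitivity = 0.8       # C per W/m^2 (claimed, +/- 0.4)
slope_emissions = 0.3        # C per W/m^2, from the emissions-vs-temperature plot
slope_solar_input = 0.19     # C per W/m^2, from post-albedo input vs temperature

for label, s in [("IPCC", ipcc_sensitivity),
                 ("emissions slope", slope_emissions),
                 ("solar-input slope", slope_solar_input)]:
    print(f"{label:>18}: {forcing_2xco2 * s:.1f} C per CO2 doubling")
```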

Reply to  co2isnotevil
April 14, 2017 9:04 am

co2isnotevil:
My contention is that ECS, TECS or whatever we call it is a scientifically illegitimate concept because the equilibrium temperature cannot be measured. Do you agree? If not, what is your argument?

Reply to  Terry Oldberg
April 14, 2017 9:38 am

My contention is that ECS, TECS or whatever we call it is a scientifically illegitimate concept because the equilibrium temperature cannot be measured. Do you agree? If not, what is your argument?

I don’t believe we know the whole earth, but we do have some stations collecting data, and you can look to see what is happening at that location, and with enough stations you can learn some things. But you can’t say you know with any certainty what an area is doing when you don’t measure that area.

Reply to  micro6500
April 14, 2017 9:55 am

Right. You can use some of those stations to measure the local surface air temperature. However, you can’t use any of them to measure the equilibrium temperature because the equilibrium temperature (called the “steady-state temperature” in the engineering literature) cannot be measured.

Reply to  Terry Oldberg
April 14, 2017 11:38 am

Okay, because it’s only in equilibrium for a few moments each day. Same with TOA balance: it has to be a long term average, a year minimum, and that leaves out all the other cycles out to 80 or 100 years, because it is never in equilibrium; the planet spins under the Sun and the surface is asymmetrical. And we don’t have the data to tell anything.

Reply to  Terry Oldberg
April 14, 2017 6:53 pm

The ‘average’ equilibrium temperature can be measured and is just the SB equivalent of the average emission by the surface. In fact, this is what is called the average temperature based on satellite measurements since all we can measure/infer from space is average emissions which are then converted into temperature based on SB and the radiative xfer function of the atmosphere.

Reply to  co2isnotevil
April 14, 2017 8:18 pm

co2isnotevil:
What do you mean by “the SB equivalent of the average emission by the surface” and what is the relationship of this quantity to the “temperature” that is measured by a thermometer?

Reply to  Terry Oldberg
April 15, 2017 7:56 am

Terry,
The Stefan-Boltzmann Law (SB) sets the relationship between temperature and radiant emissions. From satellites, all we can measure is emissions and then, using radiative transfer codes, determine the temperature of the ideal black body surface that would result in the measured emissions. The resulting temperature is almost exactly what you would read from a thermometer on the surface since the surface itself is very close to an ideal BB radiator while the atmosphere between the surface and space makes the planet look gray, i.e. an emissivity < 1, and the smaller the emissivity gets, the warmer the surface gets, given constant solar forcing.

Currently, each W/m^2 of solar forcing results in 1.6 W/m^2 of emissions by the surface. The IPCC sensitivity requires the next W/m^2 of solar forcing to result in over 4 W/m^2 of surface emissions which is impossible under any conditions.
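[Editorial note: the claim that 0.8C per W/m^2 “requires over 4 W/m^2 of surface emissions per W/m^2 of forcing” follows from differentiating the Stefan-Boltzmann law at the mean surface temperature. A quick check of that step (standard black-body physics, not the commenter’s own code):]

```python
SIGMA = 5.670e-8        # Stefan-Boltzmann constant, W/m^2/K^4
T_surf = 287.5          # approximate mean surface temperature, K

# dE/dT for a black body: change in emissions per degree of warming.
dE_dT = 4 * SIGMA * T_surf ** 3
print(f"dE/dT ~ {dE_dT:.2f} W/m^2 per K")

# 0.8 K per W/m^2 of forcing then implies this many W/m^2 of extra
# surface emissions per W/m^2 of forcing:
print(f"0.8 K/(W/m^2) -> {0.8 * dE_dT:.1f} W/m^2 of surface emissions per W/m^2")
```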

Reply to  co2isnotevil
April 15, 2017 10:09 am

co2isnotevil
Thank you for the clarification. The theory that you describe lacks falsifiability and is thus unscientific. To satisfy falsifiability this theory would have to be modified such that it identified the underlying statistical population. It would then be possible to attempt cross-validation of this theory. If successfully cross-validated, this theory would then be suitable for use in making public policy.

Reply to  Terry Oldberg
April 16, 2017 7:49 am

Terry,
The theory predicts that the planet’s behavior will converge towards an ideal like behavior and this is very testable. If any of the tests failed, then the theory is falsified, however; no test I’ve come up with fails for Earth, the Moon or any other body in the statistical population of planets and moons.

Reply to  co2isnotevil
April 16, 2017 8:53 am

co2isnotevil:

To falsify your theory one would need access to the underlying population but there is no such population. Thus this theory is non-falsifiable and unscientific.

Reply to  Terry Oldberg
April 16, 2017 4:00 pm

Terry,

Population of what exactly? The term ‘underlying population’ is too ambiguous to be meaningful to me in this context. Please be more specific.

There’s the population of sites across the globe measured by satellites (nearly a million pixels covering the planet, most redundantly covered by 2 or more satellites). There’s the population of planets and moons, each of which conforms to the hypothesis (even Venus). There’s the population of samples of each pixel (over 80K measurements per pixel, per satellite, spanning 3 decades at 4 hour intervals). There’s the population of satellites covering the same pixels (in 3 decades, there have been many generations of satellites with different sensors).

To me, falsification works as follows:
A hypothesis makes a prediction and the prediction fails. If the prediction doesn’t fail, it doesn’t mean the hypothesis is confirmed, but tells you to continue making additional testable predictions.

For example, the hypothesized sensitivity of 0.8C per W/m^2 by the IPCC is equivalent to an increase in surface emissions of about 4.4 W/m^2 per W/m^2 of forcing. This would predict that the average surface emissions should be 240*4.4 = 1056 W/m^2, corresponding to a surface temperature close to the boiling point of water. Obviously, this prediction fails, thus the hypothesized sensitivity is falsified. They would counter by claiming that they also hypothesize that solar forcing is not linear with surface emissions. Again, this is easily tested: if you look here under ‘demonstrations of linearity’, you will see surface emissions plotted against solar input for 3 decades of satellite data, and it is very linear at about 1 W/m^2 of surface emissions per incremental W/m^2 of solar forcing. This is also a test of my hypothesis that the many degrees of freedom in the climate system conspire to drive the behavior towards ideal, which in this case is a surface approximating an ideal black body at the average surface temperature.

http://www.palisad.com/co2/sens
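[Editorial note: the “boiling point” figure in the comment above can be checked with a Stefan-Boltzmann inversion; whether the 4.4 ratio should be applied to the whole 240 W/m^2 rather than incrementally is the commenter’s premise, not something verified here:]

```python
SIGMA = 5.670e-8                       # Stefan-Boltzmann constant, W/m^2/K^4
emissions = 240.0 * 4.4                # 1056 W/m^2, as computed in the comment

T = (emissions / SIGMA) ** 0.25        # black-body temperature for that flux
print(f"{emissions:.0f} W/m^2 -> {T:.0f} K ({T - 273.15:.0f} C)")
```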

Reply to  co2isnotevil
April 16, 2017 10:49 pm

co2isnotevil:
Good question! Imagine a partition of the time line extending backward in time. Each element of the sampling frame of a study of the global warming phenomenon is logically an element of this partition. Populating each element of this frame is the really existing aka “concrete” Earth on which we live.

Reply to  Terry Oldberg
April 17, 2017 8:43 am

Terry,
OK. If this time line goes back 3 decades with 4 hour samples, where each sample is filled in with redundant measurements from multiple satellites, surely this is enough of a population to establish the sensitivity of the planet to solar forcing within a relatively narrow margin of error.

We are not talking about models here, which is what most studies of the global warming phenomenon supply, but about populating this time line with concrete data whose conformance to physical laws (not models) can be tested. After all, my hypothesis is simply that the planet must honor the laws of physics.

Reply to  co2isnotevil
April 17, 2017 3:36 pm

co2isnotevil:

You don’t disclose what you mean by “sensitivity.” Let’s assume it is TECS. TECS is the ratio between the change in the equilibrium surface air temperature and the change in the logarithm of the atmospheric CO2 concentration. No conceivable satellite can be built that measures the surface air temperature at equilibrium for Earth’s surface is never at equilibrium.

Conceivably a satellite can be built that measures the surface air temperature. Unlike the surface air temperature at equilibrium, however, the surface air temperature is time varying. Thus, a “sensitivity” which, like TECS, is constant rather than being time varying is not a realistic possibility.

Reply to  Terry Oldberg
April 17, 2017 4:20 pm

Terry,
The only sensitivity that matters is the sensitivity of surface emissions to actual forcing from the Sun, where surface emissions are converted into an approximate surface temperature using the SB Law. Ground measurements of the resulting approximate temperature are both very close to, and track almost perfectly with, the surface emissions inferred from satellite measurements.

Given how forcing is defined by the IPCC, 1 W/m^2 of CO2 ‘forcing’ is EQUIVALENT to 1 W/m^2 of incremental post-albedo solar energy. We can accurately measure both surface emissions and post-albedo solar input. This is precisely what weather satellites measure, where the color temperature of the measured emissions corresponds to the temperature of the surface below. Weather satellites do not have the spectral resolution to accurately determine the color temperature, so instead radiative transfer codes are applied to see what surface temperature and consequential emissions would result in the total LWIR power measured by the satellite.

You, like many others, have been misled by the many layers of obfuscation the IPCC applies to definitions of forcing and sensitivity, one of which is to define sensitivity in terms of CO2 concentration. CO2 is not a forcing influence, nor does my definition of sensitivity have anything to do with CO2; it is more closely related to the ‘sensitivity factor’ defined by the IPCC. This quantifies the relationship between actual forcing from the Sun and actual emissions by the surface, which is repeatably measured in many ways.

Also, considering the sensitivity constant is to consider its long term average constant, which is demonstrably true. The instantaneous sensitivity is meaningless in the context of the climate and only the average matters, relative to the radiative balance of the planet.

Reply to  co2isnotevil
April 17, 2017 5:41 pm

CO2IsNotEvil
I’m experiencing difficulty in decoding your message. If I wished to assign a numerical value to “the sensitivity of surface emissions to actual forcing from the Sun” what measurements would I make and what would I do with the numbers that resulted from these measurements?

Reply to  Terry Oldberg
April 17, 2017 6:41 pm

Terry, I did it by taking the slope of temperature against the calculated clear-sky solar forcing as it changes with the seasons.
https://micro6500blog.wordpress.com/2016/05/18/measuring-surface-climate-sensitivity
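For readers who want to see the mechanics of that “seasonal slope” approach, here is a minimal sketch in Python. The forcing and temperature series below are synthetic placeholders (not the data behind the linked post); the point is only that the effective sensitivity is read off as the slope of a linear fit, in degrees C per W/m^2.

```python
# Sketch of the seasonal-slope idea: regress surface temperature against
# calculated clear-sky solar forcing over an annual cycle and report the
# slope in C per W/m^2.  All numbers here are illustrative, not real data.
import numpy as np

days = np.arange(365)
# Hypothetical clear-sky forcing (W/m^2) and temperature (C) with a shared
# seasonal cycle plus a little noise; amplitudes are made up.
forcing = 240 + 60 * np.sin(2 * np.pi * days / 365)
temperature = 10 + 12 * np.sin(2 * np.pi * days / 365) + np.random.normal(0, 0.5, days.size)

slope, intercept = np.polyfit(forcing, temperature, 1)
print(f"effective sensitivity ~ {slope:.2f} C per W/m^2")
```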

Reply to  micro6500
April 18, 2017 9:08 am

micro6500
By reading the material at https://micro6500blog.wordpress.com/2016/05/18/measuring-surface-climate-sensitivity I gained a partial understanding of your “climate sensitivity.” Thanks for sharing this URL with me.
In the data that are presented I note that, unlike TECS, this “climate sensitivity” is time-varying. This is the expected result from the fact that the “temperature” is the measured temperature and not the equilibrium temperature. My critique, on the other hand, is of the IPCC claim that TECS is a constant. As the numerator of TECS is the change in the global surface air temperature at equilibrium but equilibrium is never reached on the concrete Earth, the numerator cannot be measured. Thus, one concludes that the IPCC’s claim that TECS is a constant is dogma dressed up by the IPCC to look like science.

Reply to  Terry Oldberg
April 18, 2017 9:47 am

I do try to make sure I point out it’s an effective sensitivity, because it does include the effect of weather.

Reply to  Terry Oldberg
April 18, 2017 12:01 am

Terry,
The numerical value for this comes from the LTE average of 1.61 W/m^2 of surface emissions per W/m^2 of post-albedo forcing from the Sun. The dimensionless ratio representing this is the scalar number 1.61. This value is obtained as the emissions of an ideal BB at the AVERAGE surface temperature (387 W/m^2 @ 287.5K) divided by the AVERAGE post-albedo input power (240 W/m^2 @ 255K), where 387/240 = 1.61. Both averages are readily measured by many methods within a precision better than 10% (2-3 W/m^2). This can be trivially converted to the IPCC’s non-linear, obfuscating metric expressed as degrees per W/m^2 by applying SB to the surface emissions before and after increasing solar input by 1 W/m^2 and subtracting, which results in an approximate sensitivity of 0.3C per W/m^2.

If doubling CO2 is equivalent to 3.7 W/m^2 of solar forcing, then 3.7*0.3 = 1.1C per doubling CO2.
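For anyone who wants to reproduce the arithmetic in the two comments above, here is a minimal sketch using only the figures quoted there (287.5 K, 240 W/m^2, 3.7 W/m^2 per doubling) and the Stefan-Boltzmann law; nothing beyond those stated numbers is assumed.

```python
# Check of the quoted arithmetic: ratio of mean surface emissions to mean
# post-albedo solar input, then conversion to degrees per W/m^2 via SB.
SIGMA = 5.670374419e-8            # Stefan-Boltzmann constant, W m^-2 K^-4

T_surface = 287.5                 # K, stated average surface temperature
surface_emissions = SIGMA * T_surface**4   # ~387 W/m^2
solar_input = 240.0               # W/m^2, stated post-albedo input

ratio = surface_emissions / solar_input
print(f"emissions/input ratio = {ratio:.2f}")        # ~1.61

def sb_temperature(flux):
    """Invert the Stefan-Boltzmann law: T = (F / sigma)^(1/4)."""
    return (flux / SIGMA) ** 0.25

# 1 W/m^2 more input produces ~ratio W/m^2 more surface emission at LTE.
dT = sb_temperature(surface_emissions + ratio) - sb_temperature(surface_emissions)
print(f"~{dT:.2f} C per W/m^2 of forcing")           # ~0.3 C
print(f"~{3.7 * dT:.1f} C per CO2 doubling (at 3.7 W/m^2)")   # ~1.1 C
```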

Reply to  co2isnotevil
April 18, 2017 9:19 am

co2isnotevil:
There is no reason for belief in the proposition that 1.61 is a constant. Thus your otherwise meritorious work provides no basis for public policy.

Reply to  Terry Oldberg
April 18, 2017 10:00 am

Terry,
The data is pretty conclusive that the 1.61 ratio has not varied by more than a percent or two in the last 3 decades. Sure, it’s not exactly constant, but it’s undeniable that it’s approximately constant and varies over a narrow range. Whether it’s 1.59, 1.63 or somewhere in between is irrelevant to my hypothesis, since any value between 1.0 and 2.0 corresponds to nearly ideal gray body behavior with an emissivity between 1.0 and 0.0. Keep in mind that the IPCC considers this ratio to be an impossibly high 4.4 (negative emissivity), and there can be no question that this is no basis for public policy. The fact that it’s only about 1.61 means that there’s no need for public policy to address it in the first place.

To point out the obvious, the physical effect of doubling CO2 does increase the 1.61 ratio, but only up to about 1.625; moreover, this ratio can be calculated based on GHG concentrations and average cloud coverage. Note that applying the higher ratio to the equivalent forcing from doubling CO2 counts the GHG effect twice.

Reply to  co2isnotevil
April 18, 2017 10:23 am

Oh, yeah, because water vapor self-adjusts emissions. Can you plot it out over time? It likely changed with the ocean cycles, as at least the water vapor distribution has changed over the last 30 years.

Reply to  micro6500
April 18, 2017 10:47 am

micro6500,

Here’s how the 1.61 value varies over time. Each sample represents the average of 1 month of data.

http://www.palisad.com/co2/tp/gain.png

Reply to  micro6500
April 18, 2017 11:05 am

micro6500
Please note that “nearly” is polysemic and thus unsuitable for use in making an argument.

Reply to  Terry Oldberg
April 18, 2017 11:17 am

Fair enough. But that isn’t what I have based my opinion of the AGW issue on.

Reply to  co2isnotevil
April 18, 2017 10:28 am

I’ll also note, CS to Co2 is likely nowhere near 1.6C/doubling though.

Reply to  micro6500
April 18, 2017 10:49 am

micro6500,
The 1.6 value is the dimensionless ratio of surface emissions to input power and does not have units of temperature.

Reply to  co2isnotevil
April 18, 2017 10:44 am

co2isnotevil

The term “approximately” is polysemic (it has many meanings). When it changes meaning in the midst of an argument over the reality of CAGW, this argument becomes an example of an “equivocation.” Though an equivocation looks like a syllogism it isn’t one. Thus, while it is logically proper to draw a conclusion from a syllogism, it is logically improper to draw a conclusion from an equivocation. To draw such a conclusion is the “equivocation fallacy.” Inadvertent application of this fallacy can be avoided through the use of monosemic terms in making an argument.

The language of mathematical statistics provides the required monosemic terminology. In this language, the co2isnotevil climate sensitivity is an example of a “statistic.” In order for this statistic to be used in making policy it must be “stationary.” Stationarity is demonstrated through “cross validation” of a statistical “model,” underlying which is a “statistical population.”

Reply to  Terry Oldberg
April 18, 2017 10:59 am

Terry,
By your logic, nothing about the climate can be known to any degree of precision no matter how many measurements and measuring devices are employed. This is why we have error bars which bound uncertainty. As you can see from the plot I posted in reply to micro6500, the 1.61 ratio varies over a small range, even when short term (monthly) averages are considered. Across the many relationships among climate variables that I’ve studied, the approximate 1.6 value of the gain has the tightest distribution of data, varying by less than +/- 2%.

Reply to  co2isnotevil
April 18, 2017 6:57 pm

co2isnotevil
There is only one logic which is the one I use in making arguments.

Reply to  Terry Oldberg
April 18, 2017 8:03 pm

But your logic doesn’t seem to be based on the scientific method, but instead on some abstract statistical concept where you’re trying to ascertain an absolute that can otherwise be easily mitigated by introducing uncertainty.

Why don’t you explain in precise terms the kind of experiment that can demonstrate what the sensitivity actually is.

Reply to  co2isnotevil
April 18, 2017 8:55 pm

co2isnotevil:

a) the proposition that there are several logics, of which one is mine, is false; and
b) there is no kind of experiment that can demonstrate what the sensitivity actually is.

Reply to  Terry Oldberg
April 18, 2017 10:56 pm

Terry,
Arguing with you is like arguing with Siri. Your statement a) in answer to my question has no meaning, and b) is demonstrably false, as I’ve already explained and will explain again. Whether you’re a bot or not, you likely learn based on the logic outlined on your web site. Recall that my initial hypothesis that degrees of freedom conspire to seek ideal behaviors to minimize entropy is what you call ‘entropy minimax’, and that the path from the measured ideal behavior to the sensitivity per the IPCC definition is a trivial application of the Stefan-Boltzmann law and deductive logic.

It’s relatively easy to craft experiments to determine if the measured behavior is close to ideal using weather satellite data and I will refer you here for the results of these experiments:

http://www.palisad.com/co2/sens

The measured sensitivity to doubling CO2 becomes 0.9 +/- 0.3C per doubling, based on doubling CO2 being EQUIVALENT to 3.7 +/- 0.4 W/m^2 of forcing. Even the upper bound based on the data is less than the lower bound estimated by the IPCC.

Reply to  co2isnotevil
April 19, 2017 9:14 am

co2isnotevil:

a) is true; I’ll supply references at your request. b) Under the ambiguous terminology that you favor, b) is both true and false. It is true in reference to TECS and false in reference to the co2isnotevil climate sensitivity. In the literature of global warming climatology “climate sensitivity” is polysemic. This makes it an excellent vehicle for proving a falsehood, as you have just done.

Reply to  Terry Oldberg
April 19, 2017 10:13 am

Terry,

No reference will convert a) from the meaningless sentence it is to something that has meaning. Perhaps you can restate a) using proper grammar?

TECS also has no meaning. There’s either the TCS (transient climate sensitivity) or the ECS (equilibrium climate sensitivity). The TCS is a meaningless abstraction and all that matters is the ECS (LTE sensitivity), which is what I am measuring with the data. The ECS must conform to macroscopic physical laws, while the TCS can randomly appear to violate them provided the violations are accompanied by equal and opposite violations such that the time series average of the TCS is the ECS. It’s these transient, random perturbations that the TCS attempts to capture. If you think b) is true for the TCS and false for the ECS you are clearly confused. In fact, b) is ONLY true for the ECS and is irrelevant for the TCS.

I think your confusion arises because you don’t believe that 3 decades of data is sufficient to establish the ECS, but in fact, 1 year of data is sufficient to get a reasonable approximation and 3 decades of data produces an answer that’s within a few percent. You also seem to be confused about the fact that the ECS is relatively constant, but in fact it does change and has a computable dependence on system attributes like CO2 concentrations, except that the IPCC defines forcing and sensitivity in such a way that the system is held constant as solar forcing EQUIVALENT to a change in the system is applied.

You also haven’t responded to my assertion that my original hypothesis is a minimax model which you claim eliminates logical errors as a ‘peak performance’ model, whose discovered pattern is conformance to the SB Law. By your own logic, my model precludes the possibility of logical errors.

Reply to  co2isnotevil
April 19, 2017 11:43 am

Why don’t you explain to us your application of the equivocation fallacy? Did you do this for the purpose of misleading the audience of our debate?

Reply to  Terry Oldberg
April 19, 2017 1:05 pm

Terry,

You’ve been evasive, have not answered questions and have used ambiguous, misleading and irrelevant arguments to support your point. This is the classic signature of resorting to the equivocation fallacy to support a position.

Meanwhile, the only imprecise components of my arguments are quantified with deterministic error bars.

I’m no longer interested in continuing this discussion.

Reply to  co2isnotevil
April 19, 2017 2:35 pm

You didn’t answer my question about why you applied the equivocation fallacy. Why did you do so? You declined my offer to provide citations to scholarly works supporting my claims. You announced that you would decline further participation in our debate. From the looks of this situation, having run out of ammunition you are capitulating but lack the decency to admit it.

Michael darby
Reply to  Terry Oldberg
April 14, 2017 8:41 pm

Obviously Terry, you have no concept of Stefan-Boltzmann. You may be an expert in playing word games, but your question to co2isnotevil displays your ignorance of the basics of radiative physics.

Reply to  Michael darby
April 14, 2017 9:59 pm

Michael darby:

Though you make an argument, this argument is not of the form of a syllogism. Thus there is no logical reason for belief in the proposition that the conclusion of this argument is true. Why don’t you try to frame your argument in the form of a syllogism and report back to us on your success or failure?

Michael darby
Reply to  Terry Oldberg
April 15, 2017 10:35 am

Oldberg, if you are claiming that the Stefan-Boltzmann law is not falsifiable, you are wrong. All that would be required to falsify it is a single observation where the radiated power from a black body does not follow J* = σT^4. Find a single counterexample, and the law is falsified.

Now if you are referring to Co2isnotevil’s emissions claims, your requirement of a “statistical population” is bogus, because he’s referring to direct measurements, not an issue of sampling. If you are looking for validation of this, it’s already been done, as radiosonde measurements have confirmed both MSU and AMSU satellite instrumentation.

Reply to  Michael darby
April 15, 2017 11:39 am

Michael darby:

That the Stefan-Boltzmann law is not falsifiable is not my claim. It is the claim that TECS aka ECS is a constant that is at issue. It is this claim that is not falsifiable.

richardscourtney
Reply to  Terry Oldberg
April 15, 2017 10:37 am

Michael darby:

You say to Terry Oldberg

Obviously Terry, you have no concept of Stefan-Boltzmann. You may be an expert in playing word games, but your question to co2isnotevil displays your ignorance of the basics of radiative physics.

Terry Oldberg is not expert in anything: he just likes to disrupt threads by posting meaningless gobbledygook which is best ignored.

Richard

Michael Darby
Reply to  Terry Oldberg
April 15, 2017 11:59 am

The historical record provided by ice core data and other proxy measurements gives range bounds for both ECS and TECS. So these bounds provide you with a “statistical population” from which you can sample. You can then falsify by statistical hypothesis testing.

Reply to  Michael Darby
April 15, 2017 1:44 pm

Michael Darby:

Thank you for taking the time to respond. A proxy provides a climatologist with one or more time series. It does not provide a climatologist with a statistical population. Climatologists have yet to identify the statistical populations underlying their climate models. Thus, the claims that are made by these models remain non-falsifiable. More seriously, these claims supply a would-be regulator of the climate system with no information about the outcomes of events, making regulation impossible.

The elements of a statistical population occupy a sampling frame. Each element of the frame locates a single sampling unit should it be drawn into a sample. For a study of Earth’s climate, the frame would be a partition of the time line into non-overlapping periods of time. Each sampling unit belonging to this population would be the really existing aka “concrete” Earth and its atmosphere in a different period of time.

The complete set of these sampling units would have a common set of dependent variables and a common set of independent variables. Each sampling unit would have a value for each such variable.

The model would center on a pair of state-spaces. One would be this model’s sample space and the other would be its condition space. Claude Shannon’s measure of the intersection between the two state-spaces is called the “mutual information” and is the information that is available to a regulatory agency for the purpose of controlling the climate system. Absent the statistical population, the value of the mutual information is nil. Thus, the climate system is insusceptible to control.
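As an aside for readers unfamiliar with the term, the “mutual information” invoked above is an ordinary Shannon quantity that can be computed from any joint distribution over conditions and outcomes. The sketch below uses a made-up 2x2 table purely to show the computation; it does not represent any climate data.

```python
# Shannon mutual information from a joint probability table over a discrete
# (condition, outcome) pair of state-spaces.  The table is illustrative only.
import numpy as np

# joint[i, j] = P(condition=i, outcome=j); the states are hypothetical
joint = np.array([[0.30, 0.10],
                  [0.15, 0.45]])

p_cond = joint.sum(axis=1, keepdims=True)   # marginal over conditions
p_out = joint.sum(axis=0, keepdims=True)    # marginal over outcomes

# I(C;O) = sum_ij P(i,j) * log2( P(i,j) / (P(i) * P(j)) )
mi = np.sum(joint * np.log2(joint / (p_cond * p_out)))
print(f"mutual information = {mi:.3f} bits")  # 0 bits would mean no predictive value
```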

Michael darby
Reply to  Terry Oldberg
April 16, 2017 4:38 pm

A time series of a climate parameter, be it temperature, wind speed, wind direction or sky conditions, whether from direct measurements or from a proxy, comprises a sample from the statistical population. Not talking about “models” here, just plain old-fashioned measurements.

nn
April 12, 2017 4:15 pm

We can’t predict the future. We can only speculate or estimate the past. In general, the scientific domain is characterized by the self-evident knowledge that accuracy is inversely proportional to the product of time and space offsets from an observer’s frame of reference.

Reply to  nn
April 12, 2017 4:45 pm

Or, as I like to say, “time covers its tracks”.

KevinK
Reply to  Bartleby
April 12, 2017 5:52 pm

“Time wounds all heels”

Gloateus
Reply to  Bartleby
April 15, 2017 10:44 am

Kevin,

Would that that were so. But it isn’t. Charlatans prosper in “climate science” for instance, until they retire or die.

Reply to  nn
April 14, 2017 8:31 pm

nn:
That “we can’t predict the future” is inaccurate. That “we can’t gain PERFECT INFORMATION about the outcomes of the events of the future” is accurate. To gain IMPERFECT INFORMATION about the outcomes of these events is possible and, if accomplished, has the benefit of positive expected utility.

Mark from the Midwest
April 12, 2017 4:21 pm

Kind of curious that Trenberth makes no mention of model validation except to reference the comments of Chairman Lamar Smith. Trenberth seems to jump from model construction straight to what-if experiments. This seems to be a consistent fault of so-called scientists in the post-modern age. The model “IS” a de-facto reality because it is developed by somebody who, socially, has the trappings of a “scientist.” Post-modern thinking has no basis for validating a model against reality, because the model used to judge reality is, by definition, the reality.

Reply to  Mark from the Midwest
April 14, 2017 10:12 pm

Mark from the Midwest:
Right. Validation of the model is impossible pending identification of the underlying statistical population. For a person in Dr. Trenberth’s shoes it is best not to bring up the topic of “model validation.” The IPCC-sanctioned dodge is to shift the topic from “validation” to “evaluation” in a manner that obscures the important differences between the two terms. Few folks are aware of the fact that there is a scientifically crucial difference between “validation” and “evaluation,” as the two words are similar sounding.

clipe
April 12, 2017 4:30 pm

Towards the end of the mails the running theme is, ‘Where the hell is global warming anyway?’

1255318331 Stephen Schneider, who has been here before, having predicted a catastrophic global cooling in the 1970s, passes on some bad news: ‘Paul Hudson, BBC’s reporter on climate change, on Friday wrote that there has been no warming since 1998, and that pacific oscillations will force cooling for the next 20-30 years.’ (A goody-goody student, of whom I expect a great career in the climate research field, has alerted him to this, asking, ‘Do you think this merits an op-ed response in the BBC from a scientist?’). Schneider refers to the reporter as ‘this new “IPCC Lead Author”‘, which I think is meant to be sarcastic but I call a mortal insult.

Then he says the weather will be getting hotter soon anyway, because, well, the sun-spots that drive the temperature will be back.

Michael Mann says ‘extremely disappointing to see something like this appear on BBC’ and perhaps they should get the British Met Office to respond. ‘I might ask Richard Black [someone at the BBC he can trust] what’s up here?’

But then 1255352257 all hell breaks loose. Kevin Trenberth responds:

‘Well I have my own article on where the heck is global warming? We are asking that here in Boulder where we have broken records the past two days for the coldest days on record. We had 4 inches of snow. The high the last 2 days was below 30F and the normal is 69F, and it smashed the previous records for these days by 10F. The low was about 18F and also a record low, well below the previous record low. This is January weather …

The fact is that we can’t account for the lack of warming at the moment and it is a travesty that we can’t.

April 12, 2017 4:32 pm

Trenberth blatantly lied to me in writing. Previously posted on WUWT. I wouldn’t believe anything from him.

Gloateus
Reply to  dradb
April 12, 2017 4:40 pm

He, Gavin, Phil Jones, Jimbo Hansen, Mikey Mann, et al, have blatantly lied to the world in writing. But none of them has as yet gone to jail over the theft of trillions and deaths of millions.

mike
Reply to  Gloateus
April 12, 2017 6:48 pm

Yet is the operative word.

MarkW
Reply to  Gloateus
April 13, 2017 8:42 am

They won’t, for the same reason that Hillary won’t go to jail for breaking many laws regarding government records.

Chris
Reply to  Gloateus
April 13, 2017 10:23 am

“Theft of trillions” – oh please, enough with the ridiculous hyperbole.

Gloateus
Reply to  Gloateus
April 13, 2017 11:11 am

Chris,

Not hyperbolic at all. Accurate.

So-called “Green energy”, ie renewables as defined by the robber barons of wind and solar, has cost the world about $250 billion per year for over a decade, plus slightly less during previous decades. Do the arithmetic.

The climate cr!minals want to up the heist to a trillion dollars annually by 2030:

http://www.motherjones.com/blue-marble/2014/04/renewable-energy-spending-bnef

Gloateus
Reply to  Gloateus
April 13, 2017 11:16 am

Chris,

I don’t know why my reply to you is in moderation. Maybe because I used the word “c-r-!-m-i-n-a-l-s” to describe the Green merchants of death, which is entirely justified when you look at the history of wind and solar sc@ms in the US and other countries.

Just in the past decade, well over two trillion dollars have been wasted on so-called “renewable” energy projects. So, far from being hyperbole, my statement is a matter of mathematical fact.

MarkW
Reply to  Gloateus
April 13, 2017 2:36 pm

Chris, when you add up all the money spent by the governments of the world to [force] higher electricity and food prices for billions, then yes, trillions.

Gloateus
Reply to  Gloateus
April 13, 2017 5:33 pm

You can get to hundreds of billions just with direct expenditure on cooked book “climate science”.

What a titanic waste!

Gloateus
Reply to  Gloateus
April 14, 2017 12:56 pm

Since the rip-off has been going since the 1980s, the total squandered by now must be over three trillion, which means it’s on the order of ten trillion, not just “trillions”. So my statement was conservative rather than hyperbolic.

Richard M
April 12, 2017 4:34 pm

Nothing wrong with the models that some super duper, highfalutin homogenization won’t fix. Got to whip that data into shape.

Yalian
April 12, 2017 4:35 pm

“All models are wrong, some models are useful”. George E. P. Box. http://issues.org/30-2/andrea/

richardscourtney
Reply to  Yalian
April 13, 2017 3:59 am

Yalian:

Not all models are wrong.

A model is right when it makes predictions that are correct to within the measurement error of the modeled parameter.

A model is wrong when it makes predictions that are not correct to within the measurement error of the modeled parameter.

The nonsense that “all models are wrong” is merely a falsehood that pseudoscientists use as an excuse to justify using models that are wrong.

Richard

Butch
April 12, 2017 4:42 pm

…If the past historical temperature data does not match your models, then change the past historical temperature data to match your models…If the present temperature data does not match your models, then change the present temperature data to match your models ..Presto ….now the temperature models of the future are accurate !! D’OH !!

jorgekafkazar
April 12, 2017 4:46 pm

Trenberth, unlike most in Ali Baba’s band of climate thieves, has moments of clarity. This is not one of them.

commieBob
April 12, 2017 4:52 pm

… because of their complexity and sophistication, they are so much better than any “back-of-the envelope” guesses, and the shortcomings and limitations are known.

If you know what you’re doing, you can usually do useful back-of-the-envelope calculations. All the sophistication and complexity in the world won’t help you if you don’t know what you’re doing.

gnomish
Reply to  commieBob
April 12, 2017 5:10 pm

‘complexity and sophistication’ is code for ‘ineffably idiotic’

Janice Moore
Reply to  gnomish
April 12, 2017 6:43 pm

lol +1 🙂

jim
April 12, 2017 4:53 pm

Kevin Trenberth
Draft Contributing Author for the Summary for Policy Makers,
contributing author to Ch 1, a lead author for Ch 3, and
contributing author to Ch 7 of the 4th UN IPCC report on climate change, AR4.)

12 Oct 2009: …we have broken records the past two days for the coldest days on record. (…) and it smashed the previous records for these days by 10F. (…) The fact is that we can’t account for the lack of warming at the moment and it is a travesty that we can’t. (. . .) Our observing system is inadequate. (1255352257.txt)
——————-
Oct 14, 2009: We are not close to balancing the energy budget. The fact that we can not account for what is happening in the climate system makes any consideration of geoengineering quite hopeless as we will never be able to tell if it is successful or not! It is a travesty! (1255523796.txt)
from: http://www.debunkingclimate.com/selectedemails.html

noaaprogrammer
April 12, 2017 5:04 pm

Even if God Himself wrote the computer code with complete knowledge of all the requisite physics for modeling the earth’s climate, and He was only limited by having to run the code on a digital computer built by man, the projections of His model would eventually diverge from the actual climate at some future point. It’s called the butterfly effect. The actual, real model – the Universe – runs with infinite precision. Models run on any digital computer necessarily have finite precision, which eventually causes the divergence between model and reality.
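To illustrate the finite-precision point (without claiming anything about actual climate models), the sketch below iterates the same chaotic map in single and double precision and watches the trajectories separate; the logistic map is a stand-in for any system sensitive to tiny differences, not a climate model.

```python
# Iterate the logistic map in float32 and float64 from the same start and
# print how far apart the two trajectories drift due to rounding alone.
import numpy as np

r = 3.9                    # chaotic regime of the logistic map
x64 = np.float64(0.5)
x32 = np.float32(0.5)

for step in range(1, 61):
    x64 = r * x64 * (1.0 - x64)
    x32 = np.float32(r) * x32 * (np.float32(1.0) - x32)
    if step % 10 == 0:
        print(f"step {step:3d}: float64={x64:.6f}  float32={float(x32):.6f}  "
              f"diff={abs(float(x32) - x64):.6f}")
```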

commieBob
Reply to  noaaprogrammer
April 12, 2017 5:17 pm

The computer necessary to adequately model the universe would be larger than the universe.

chilemike
Reply to  commieBob
April 12, 2017 5:47 pm

Of course maybe that’s the computer that’s running the simulation that we exist in. (Cue spooky noises)

David L. Hagen
Reply to  noaaprogrammer
April 12, 2017 5:53 pm

Good to know the Designer decided it was “very good”. Average global temperatures over geological eras have been remarkably stable within relatively narrow bounds compared to annual temperature extremes.

Nick Stokes
Reply to  noaaprogrammer
April 12, 2017 7:58 pm

“the projections of His model would eventually diverge from the actual climate at some future point. It’s called the butterfly effect. The actual, real model – the Universe – runs with infinite precision.”
None of that is true. The divergence of weather from predictions is familiar, and happens after a few days. Climate prediction is not an initial value problem. It is a search for an attractor, which is fixed by the balance of forcings and response.

You can say a lot about the tides next year. That doesn’t mean that you measured the present state with high precision and fended off butterflies with digital arithmetic etc. It’s done by knowledge of the forcings and response.

As to “infinite precision”, that is meaningless. There is no way of knowing, and there is all kinds of quantum stuff etc that says it is not true. You can’t say that the universe is determined by an initial state, because you can never measure with certainty that state.

ferdberple
Reply to  Nick Stokes
April 12, 2017 9:46 pm

You can say a lot about the tides next year.
================
Not from first principles, because the tides and climate are unpredictable from first principles. That is why climate models cannot calculate future climate.

Instead we calculate the tides next year using the same method that humans used to calculate the seasons, long before they understood what caused the seasons. The method is known as Astrology. The only method yet that has been shown capable of predicting the future state of chaotic systems with any degree of accuracy.

ferdberple
Reply to  Nick Stokes
April 12, 2017 9:48 pm

The method is known as Astrology.
=============
which also explains why the Farmer’s Almanac routinely outperforms climate models.

hunter
Reply to  Nick Stokes
April 13, 2017 5:20 am

And the climate prediction ensemble is like bowl of spaghetti.

Reply to  hunter
April 13, 2017 8:52 am

hunter:

In the semi-official terminology of global warming climatology, each piece of spaghetti is not a “prediction” but rather is a “projection.” None of today’s climate models predict. All of them project.

MarkW
Reply to  Nick Stokes
April 13, 2017 8:43 am

Did you mean astronomy instead of astrology?

MarkW
Reply to  noaaprogrammer
April 13, 2017 6:45 am

Not just the butterfly effect, but rounding error would guarantee that eventually the results of the model are meaningless.
Rounding error increases with each iteration, and these models go through millions of iterations.

April 12, 2017 5:18 pm

I was studying the ACS Climate Change tool kit sections on the single and multilayer theories (what I refer to as the thermal ping-pong ball) of upwelling/downwelling/”back” radiation and after seeing a similar discussion on an MIT online course (specifically says no transmission) have some observations.

These models make no reference to conduction, convection or latent heat processes, which leads me to conclude that these models include no molecules, aka a “non-participating media,” aka vacuum. This is a primary condition for proper application of the S-B BB ideal (i.e. ε = 1.0) equation.

When energy strikes an object or surface there are three possible results: reflection (ρ), absorption (α), or transmission (τ), and ρ + α + τ = 1.0.

The layered models use only α, which according to Kirchhoff is equal to ε. What that really means is that max emissivity can equal but not exceed the energy absorbed. Nothing says emissivity can’t be less than the energy absorbed. If α leaves as conduction/convection/latent heat then ε will be much less than 1.0.

These grey bodied layered models then exist in a vacuum and are 100% non-reflective, i.e. opaque, surfaces, i.e. just like the atmosphere. NOT!

So the real atmosphere has real molecules meaning a “participatory” media and is 99.96% transparent i.e. non-opaque.

Because of the participating molecules only 63 W/m^2 LWIR of the 160 W/m^2 that make it to the surface leave the surface.

63 W/m^2 and 15 C / 288 K surface gives a net effective ε of about 0.13 when the participating media is considered. (BTW “surface” is NOT the ground, but 1.5 m ABOVE the ground per WMO & IPCC AR5 glossary.)
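A quick, hedged check of the arithmetic in the preceding paragraph, taking its figures (63 W/m^2 net surface LWIR, a 288 K surface) at face value; with these round numbers the ratio comes out nearer 0.16 than 0.13, so the stated 0.13 presumably reflects adjustments not spelled out above.

```python
# Net effective emissivity implied by the quoted figures: net surface LWIR
# divided by the ideal black-body emission at the stated surface temperature.
SIGMA = 5.670374419e-8        # Stefan-Boltzmann constant, W m^-2 K^-4

T_surface = 288.0             # K (15 C), as stated in the comment
bb_emission = SIGMA * T_surface**4   # ~390 W/m^2
net_lwir = 63.0               # W/m^2, figure quoted in the comment

print(f"ideal black-body emission at 288 K: {bb_emission:.0f} W/m^2")
print(f"net effective emissivity: {net_lwir / bb_emission:.2f}")
```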

So the K-T diagram is thermodynamic rubbish, earth as a ball in bucket of hot mush is physical rubbish, the Δ 33 C w/ atmosphere is obvious rubbish, the layered models are unrelated to reality rubbish.

What support does the GHE theory have left besides rabid minions?
I see no reason why GHE theory gets a free pass on the scientific method.

https://www.acs.org/content/acs/en/climatescience/atmosphericwarming.html
http://web.mit.edu/16.unified/www/FALL/thermodynamics/notes/node136.html
http://writerbeat.com/articles/14306-Greenhouse—We-don-t-need-no-stinkin-greenhouse-Warning-science-ahead-
http://writerbeat.com/articles/15582-To-be-33C-or-not-to-be-33C

ScienceABC123
April 12, 2017 5:24 pm

So since we can’t observe “future events” we should give the models the benefit of the doubt??? I’m not following the logic here, primarily because there isn’t any!

Pop Piasa
Reply to  ScienceABC123
April 12, 2017 7:24 pm

But the models come from “Supercomputers”! Not just your Dell Inspiron piece of crap!
This is a prophecy from a Distinguished Senior Scientist, man! That’s heavy sh1t!

Am I supposed to be impressed?

MarkW
Reply to  Pop Piasa
April 13, 2017 8:48 am

Not just impressed, but a little grovelling would be nice.

April 12, 2017 5:30 pm

http://rationalwiki.org/wiki/Scientific_method

1) Observe – Look at the world and find a result that seems curious.
2) Hypothesize – Come up with a possible explanation.
3) Predict – The most important part of a hypothesis or theory is its ability to make predictions that have yet to be observed. A theory that makes no new predictions is scientifically worthless. Predictions must be falsifiable and specific.
4) Test Predictions – Compare the predictions with new empirical evidence. This step is the reason why a hypothesis or theory has to be falsifiable — if there’s nothing to falsify, then the experiment is pointless because it’s guaranteed to tell you nothing new.
5) Reproduce – ensure the result is a true reflection of reality by verifying it with others.

Pseudoscience — All but the first two steps are omitted from the process in pseudosciences. Pseudosciences do observe the world, and do come up with explanations, but are often unable or unwilling to follow through in testing them more thoroughly. Refining the hypotheses is also undesirable in pseudoscience as this could lead to abandoning the central dogma of the belief.

Scientific skepticism is a vital element in the scientific process, ensuring that no new hypothesis is considered a Theory until sufficient evidence is provided and other scientists have had their chances to debunk it. Even then, all of science is always considered a “good working model” and the “best understanding we have at the present time.”

Pseudoscientists have discovered an obvious way to ‘cheat’ the scientific method. It goes like this:

1) Pick a personal belief that you already ‘know’ is true, but for which you want ‘proof’.
2) Perform some related observations or experiments, and note the results.
3) Generate a hypothesis that shoehorns said results into your personal belief.
4) Falsely claim that your personal belief predicts the particular results, and that the observations/experiment confirmed your suspicions.

This is a blatant perversion of the scientific method, but to someone not versed in science, fallacies, or psychology, it might seem similar enough to be accepted as legitimate.

Kevin Trenberth may have once been a scientist, but he is now a pseudoscientist. I’ve known this ever since he reported that climate change caused more hurricanes based on nothing.

knr
Reply to  lorcanbonda
April 13, 2017 12:53 am

Correction: will cause more ‘in the future’, or the classic ‘heads you lose, tails I win’.

Ross King
April 12, 2017 5:34 pm

Wasn’t Trenberth complicit in ClimateGate? Just asking …. if he[?] was, isn’t he as complicit as the other conspirators in such as the Hockey-Schtick fiasco, e.g., Mann, Jones, et al?
Pls excuse my asking … I just haven’t had time to read *all* the above!!!

TonyL
Reply to  Ross King
April 12, 2017 5:52 pm

Of course he appears in the ClimateGate emails. That is where the “travesty” comment comes from, if I remember right. I do not think he came off as one of the bad guys, though.

Janice Moore
Reply to  Ross King
April 12, 2017 6:52 pm

Here ya go, Mr. King:

(copied from the WUWT 10th Anniv. anthology, p. 518)

Dom: “This one is huge. Compare what Trenberth says here : http://fortcollinsteaparty.com/index.php/2009/10/10/dr-william-gray-and-dr-kevin-trenberth-debate-global-warming/ …while exactly at the same moment he was writing:

From: Kevin Trenberth
To: Michael Mann
Subject: Re: BBC U-turn on climate
Date: Mon, 12 Oct 2009 08:57:37 -0600
Cc: Stephen H Schneider , Myles Allen , peter stott , “Philip D. Jones” , Benjamin Santer , Tom Wigley , Thomas R Karl , Gavin Schmidt , James Hansen , Michael Oppenheimer

Hi all
Well I have my own article on where the heck is global warming? We are asking that here in
Boulder where we have broken records the past two days for the coldest days on record. We
had 4 inches of snow. … ***The fact is that we can’t account for the lack of warming at the moment and it is a
travesty that we can’t. …”

(http://wattsupwiththat.com/2009/11/19/breaking-news-story-hadley-cru-has-apparently-been-hacked-hundreds-of-files-released/#comment-227456 )

Janice Moore
Reply to  Ross King
April 13, 2017 6:45 am

{Yes, this is the second time I’ve attempted to post this. It has been over 12 hours, though, so, I’m assuming it was simply “lost” and, thus, I’m trying again, this time, editing for all the possible “bad” words.}

(copied from the WUWT 10th Anniv. anthology)

Dom: “This one is huge. Compare what Trenberth says here : http://fortcollinsteaparty.com/index.php/2009/10/10/dr-william-gray-and-dr-kevin-trenberth-debate-global-warming/ …while exactly at the same moment he was writing:

From: Kevin Trenberth
To: Michael Mann
Subject: Re: BBC U-turn on climate
Date: Mon, 12 Oct 2009 08:57:37 -0600
Cc: {see original comment for list of names — too many spam-bin-triggers in there, IIRC}

Hi all
Well I have my own article on where the heck is global warming? We are asking that here in
Boulder where we have broken records the past two days for the coldest days on record. We
had 4 inches of snow. … ***The fact is that we can’t account for the lack of warming at the moment and it is a travesty that we can’t. …”

(http://wattsupwiththat.com/2009/11/19/breaking-news-story-hadley-cru-has-apparently-been-hacked-hundreds-of-files-released/#comment-227456 )

Reply to  Ross King
April 13, 2017 7:30 am

He was mentioned in ClimateGate, but I don’t think that makes him “complicit”.

More importantly, he was most likely the one behind the sacking of the editor of “Remote Sensing Journal”, because the periodical had the gall to publish a paper by Roy Spencer.

To me, that one action is a greater travesty than all of ClimateGate. It’s not that he merely disagreed with Roy Spencer’s paper … he had the editor canned. That casts a pall on any “skeptic” research.

Bruckner8
April 12, 2017 5:40 pm

I hate this logic: it’s complex; therefore it’s legitimate.

ferdberple
Reply to  Bruckner8
April 12, 2017 9:37 pm

it’s complex; therefore it’s legitimate
=======================
The whole history of science shows that almost every complex formula is wrong, and that there is a more accurate, simpler formula that can replace it and deliver better results.

In science, complexity is almost always a guarantee that what you are looking at is wrong. Nature uses very, very simple methods to create what appears at first look to be complex.

For example, this fractal below looks very complex, yet the underlying equation is simplicity in itself.

z = z^2 + c.

However, if you undertook to describe the fractal without the above equation, the result would be very complex, running to tens if not hundreds of pages, and it would still not be correct.
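For readers who haven’t played with it, the quoted equation really is the whole recipe: a point c belongs to the Mandelbrot set if iterating z = z^2 + c from z = 0 never escapes. A minimal membership test (a sketch, using the conventional escape radius of 2 and an arbitrary iteration cap):

```python
# Test whether a complex point c is (approximately) in the Mandelbrot set by
# iterating z = z^2 + c from z = 0 and checking for escape.
def in_mandelbrot(c, max_iter=100):
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:      # once |z| exceeds 2 the orbit diverges
            return False
    return True

print(in_mandelbrot(complex(-0.5, 0.0)))  # True: inside the main cardioid
print(in_mandelbrot(complex(1.0, 1.0)))   # False: escapes almost immediately
```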

Bill Illis
April 12, 2017 5:41 pm

Let’s say you are running one of the big honking supercomputer climate models (based on Hansen’s 1978 assumptions directly built into the code, like all of them are). Each run takes several days.

You run your model, it costs $20,000 and you are off the historical climate by a mile. Your model run says we should have gone back into an ice age in 1933.

Well, you try to figure out what went wrong and you discover the water vapor feedback was off.

You fix it and you run your model again. Another $20 G’s, and it comes back and says the Earth went Venus-like in 1977.

You fix it and you run your model again and voilà, it now says temperatures will only rise by 0.5C by 2100.

Then you go in and plug all the numbers to result in 3.0C per doubling because that is what it is supposed to say. You spend your last $20,000 for model runs and voilà, a perfect IPCC result that gets you many accolades.

You see the issue. There is ZERO transparency in what is “really” happening in this science. You can’t reproduce the result because they are not going to let you run their precious climate model and they are not going to give you a $20,000 budget to try it and prove how weird these models are.

$80,000 later and the model is simply some plugged result to keep everyone on the climate dole happy.

Flawless.

April 12, 2017 5:51 pm

I wonder if climate “scientists” would fly in airplanes designed, built and tested to the standards they set for themselves.

KevinK
April 12, 2017 5:55 pm

If the observations are on your side, pound on the observations,

if the theory is on your side, pound on the theory,

If neither, just pound on the table……..

Cheers, KevinK

Janice Moore
Reply to  KevinK
April 12, 2017 6:49 pm

I’ve just completed Mike’s Nature trick of adding in the real temps
to each series for the last 20 years (ie from 1981 onwards) and from
1961 for Keith’s to hide the decline. …

Cheers
Phil

(from this WUWT article: https://wattsupwiththat.com/2009/11/19/breaking-news-story-hadley-cru-has-apparently-been-hacked-hundreds-of-files-released/ )

Herbert
April 12, 2017 6:00 pm

Percy W. Bridgman, Harvard Physicist and 1946 Nobel Prize winner-
“I personally do not think that one should speak of making statements about the future. For me, a statement implies the possibility of verifying its truth, and the truth of a statement about the future cannot be verified.”
P.W. Bridgman, “The Way Things Are” (1959), p. 69.

clipe
April 12, 2017 6:03 pm

For some reason my comment in both the Test page and in this page has evaporated.

Notepad version

Towards the end of the mails the running theme is, ‘Where the hell is global warming anyway?’ 1255318331 Stephen Schneider, who has been here before, having predicted a catastrophic global cooling in the 1970s, passes on some bad news: ‘Paul Hudson, BBC’s reporter on climate change, on Friday wrote that there has been no warming since 1998, and that pacific oscillations will force cooling for the next 20-30 years.’ (A goody-goody student, of whom I expect a great career in the climate research field, has alerted him to this, asking, ‘Do you think this merits an op-ed response in the BBC from a scientist?’). Schneider refers to the reporter as ‘this new “IPCC Lead Author”‘, which I think is meant to be sarcastic but I call a mortal insult. Then he says the weather will be getting hotter soon anyway, because, well, the sun-spots that drive the temperature will be back. Michael Mann says ‘extremely disappointing to see something like this appear on BBC’ and perhaps they should get the British Met Office to respond. ‘I might ask Richard Black [someone at the BBC he can trust] what’s up here?’ But then 1255352257 all hell breaks loose. Kevin Trenberth responds: ‘Well I have my own article on where the heck is global warming? We are asking that here in Boulder where we have broken records the past two days for the coldest days on record. We had 4 inches of snow. The high the last 2 days was below 30F and the normal is 69F, and it smashed the previous records for these days by 10F. The low was about 18F and also a record low, well below the previous record low. This is January weather … The fact is that we can’t account for the lack of warming at the moment and it is a travesty that we can’t.

http://michaelkelly.artofeurope.com/cru.htm

April 12, 2017 6:10 pm

Event Attribution Science is not science
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2929159

PaulH
April 12, 2017 6:11 pm

Heck, they’re using “powerful supercomputers” to guess/predict/forecast the future. That’s good enough for me!
/snark

AP
Reply to  PaulH
April 13, 2017 4:25 am

If it’s a supercomputer, it can’t be wrong, can it? It’s got the word “super” in the name.

clipe
April 12, 2017 6:14 pm

Hello all. Is no one going to point out that it’s Kevin, not KevEn?

Yours sincerely, lifelong Kevin.

Reply to  clipe
April 12, 2017 6:43 pm

Fixed, thanks

EarthGround Media presents
April 12, 2017 6:19 pm

“These are all very legitimate questions for scientists to ask and address.” When you say it like that, in this context, it rings clear. No, these are philosophical questions on par with eugenics and euthanasia. Scientists are challenged to be referees in the debate.

NW sage
April 12, 2017 6:20 pm

“Whatever the missing or mishandled factor is, it has a big influence on global climate.”
Shouldn’t this be plural? ie there may be (and probably are) more than one factor. Since we don’t know what is missing – we just know the models/theories don’t match reality – there is no reason to assume a single factor.

JBom
April 12, 2017 6:24 pm

Kevin Trenberth is a true embarrassment to Science. Much of the problem is the bulk work of embarrassments populating the NAS, NSB and NSF and the ‘Old Girls Clubs’ of AAAS, AGU, AMS and APS, i.e. 4:20 Stoners simply put. But it is the Stoners in the NAS, NSB, NSF and Old Girls Clubs that through positive feedback brought us to this embarrassing moment in Science.

The “Great Horse Manure Crisis” of 1894 was solved, not intentionally, but by cheap gasoline, easy-to-stamp parts and Henry Ford’s automated assembly line! Google it.

Solving the “Stoner Problem” can be achieved by requiring daily urine testing for substances prohibited by Federal employment.

Solving the “Stoner Problem” will solve the current problem affecting Science. I cannot go so far as to write that “Climate Science” is in fact a Science, yet.

MPASSEY
April 12, 2017 6:32 pm

It has been recognized since Francis Bacon that the scientific method produces increments in human knowledge via the process of acquiring observational data, which is to say, by doing experiments.

The calculated, intentional intellectual dishonesty of academic scientists calling a model run an “experiment” is simply stunning.

john UK
Reply to  MPASSEY
April 13, 2017 5:03 am

+1000000.
Call it a scenario in my book, nothing like an experiment.

April 12, 2017 6:39 pm

A theorist (or “natural philosopher” as they used to be called) makes a model from some observations. Ptolemy: “It’s quite evident that the objects in the sky revolve around the Earth, on tracks that double back every so often.”

Some durned observer comes along and breaks your model. Galileo: “Not so! Take a look at Jupiter through this gadget.”

Another theorist comes up with a new model that fits the experimenter’s data. Kepler: “Using the notion of heliocentrism (hat tip to Nicky here), the way the objects in the sky seem to behave is quite simple – here, look at these diagrams of elliptical orbits around the Sun.”

Some durned observer comes along and finds flaws in the new model. Every astronomer who ever paid attention to objects less than one parsec distant: “Hey, Johnny, cool theory – but Mercury doesn’t seem to work that way. Keeps precessing around the place.”

Yet another theorist comes around with yet another new model. Einstein: “Well, this notion of mass actually warping space-time explains the whole thing.”

We’re getting some observers these days finding things that really cannot be explained by Albert…

Every time, though, the same attitude cycle repeats. The theorists are the saints, and the observers are the heretics.

The “climate theorists” have given us their theory: “CO2 increase is the only force that can be responsible for the recent warming, and here is the graph that tells us so, and how much.”

We are obviously heretics for saying “Oh really? Care to take a look at these radiosonde results, and these ocean buoy readings, and these satellite measurements? Don’t seem to match the theory…”

There was a time when heretics in thought weren’t metaphorically (and sometimes literally) burned at the stake. Perhaps we have avoided a return to that time – but listening to the current crop of “saints,” I’m not so sure.

old construction worker
April 12, 2017 6:41 pm

Projections, not predictions. With climate models as tools, we can carry out “what-if” experiments.
A scientist sees a frog jump 20ft. Goes home and builds a computer model that says “what if” the frog had wings, it would jump 25ft. The following day the scientist sees the same frog jump 25ft and says my computer model predicted that. Tells his fellow scientists the frog must have invisible wings. But the frog still bumps its A#%.

JohnWho
April 12, 2017 7:04 pm

Trenberth: “The wonderful thing about science is that it is not simply a matter of opinion but that it is based upon evidence and physical principles, often pulled together in some form of “model.””

In all fairness, he did put the word “model” in quotes.

I have a question for all of the “climate scientists”:

Which “model” uses all of the known physical principles of the Earth’s climate system?

Unless I’m mistaken (which certainly is possible) I believe the Climate “modelers” have stated that they do not include all of the known physical principles because, well, it is just too hard.

I then submit that this is evidence that the current climate “models” are not science.

But then, “who” am I?

Reply to  JohnWho
April 12, 2017 8:04 pm

Mark Twain’s wonderful thing about science is more apt here: “one gets such wholesale returns of conjecture out of such a trifling investment of fact.”

Reply to  Pat Frank
April 13, 2017 11:30 am

MOD, a couple of my posts with links got caught in spam purgatory. Can you take a look? Thanks. They point to a little WUWT Trenberth history.

Reply to  Pat Frank
April 13, 2017 3:57 pm

Thanks, Mod (CTM? 🙂 )

April 12, 2017 7:07 pm

Some years ago, Anthony featured a Christmas-flavored Kevin Trenberth email from the climategate trove, in which Kevin sent a carol around to the Hockey Team^TM with the libretto doctored to tout their commitment to “the cause.”

“The cause” was the transformation of the western world to environmental wonderfulness through the agency of AGW leverage.

Clear from that email was that Kevin Trenberth and the rest of the team were advocates, rather than scientists. Out to make a case. For those making a case, truth is a convenient incidental.

Kevin also caused Chris Landsea to resign from the IPCC, when he (Kevin) lied about a connection between hurricanes and global warming, during a press conference.

UCAR and Trenberth were totally unrepentant on the event, citing supportive studies based on climate model projections published without any physically valid uncertainty estimates.

Here is Donna Laframboise on that incident, in which she appropriately termed the IPCC as composed of moral midgets.

April 12, 2017 7:17 pm

More from Donna Laframboise on Kevin Trenberth, Chris Landsea, Hurricanes, and the cynical abuse of science for political ends, with special reference to that high-standing group, the Union of Concerned Scientists.

So far as we know, Kenji is the only card-owning member of the UCS in good standing with ethics.

April 12, 2017 7:19 pm

Here’s a model in which I have great faith:

I and a person I choose shall designate an eight-hour work day. I will work 2 hours, and my chosen assistant shall work 6 hours. At the end of the day, I shall receive 75% of the earnings of both my and my assistant’s work efforts.

Let us call the 6/2 ratio of my assistant’s hours and my hours a “forcing”. In other words, I make 75% of the day’s income, because of my forcing of 3 times as much work on my assistant. This is the forcing of my model, and no matter who works with me, we shall use this formula each and every time to determine my income for any given day. Three to one is a given.

This model is flawless, because it works every time. Just insert the hours I work, multiply by three, and then apply the 75% split, no matter how many hours we are talking about.

It’s a very precise formula. You cannot argue with the math.

Kermit Johnson
April 12, 2017 7:29 pm

Where to start??

In summary, CAGW is based almost entirely on computer models. These models are built to curve-fit the (poor quality proxy) historical data. Since the physics is not known, a fudge factor is used in this curve-fitting process. Of course, it is not actually called a fudge factor – it is called a sensitivity factor – that term does not sound quite so bad. If CO2 feedback was the only unknown, there would be one fudge factor common to all models. However, each model has its own fudge factor, and that is why there is such a wide range of these fudge factors.

Now, anyone doing this in the financial or commodity markets, also coupled, non-linear chaotic systems, knows that this is a sure road to bankruptcy. But, climate scientists do not have to be right to get paid. They merely must convince politicians to continue to give them money.

And, please do not respond that, even though they cannot predict weather two weeks out, they can predict a thirty year moving average of weather one hundred years out.

To know why this is so ridiculous, simply read James Gleick’s excellent book “CHAOS: The Making of a New Science” published back in 1987.

Reply to  Kermit Johnson
April 13, 2017 7:15 am

I used to love fudge, but since my indoctrination with the truth about how much fudging goes on in climate science, I tend to avoid it now. Fresh fruit is a better choice anyway, although there CAN be problems with being too fruity too.

There’s a balance, I guess, between fudge and fruit. And this BALANCE is what seems missing in traditional climate “science”.

Reply to  Robert Kernodle
April 13, 2017 9:59 am

Climate Science loves fudge. You could almost say it is addicted to fudge.

Now, it has diabetes. They still can’t stop eating the fudge, though.

Simon Ruszczak
April 12, 2017 8:04 pm

If you’d “followed a scientific method” (rational thinking), you’d know CO2 isn’t a “greenhouse gas”, as proved over a hundred years ago.

Dr. S. Jeevananda Reddy
April 12, 2017 8:29 pm

Power spectrum analysis was carried out using the 21 stations’ data series of global solar radiation and 8 stations’ data series of net radiation. The total solar radiation and net radiation intensities show the sunspot cycle. This clearly indicates the influence of sunspot cycles on solar and net radiation intensities. Therefore, it is suggested that during the sunspot cycle period there is a certain change in the solar radiation emitted by the sun itself, which, in turn, is reflected in other atmospheric processes also. Both presented an increasing trend after the 1940s at some industrial stations. It is more pronounced in net radiation due to the air-pollution-related urban heat island factor. [S. Jeevananda Reddy, O. A. Juneja & (Miss) S. N. Lahori, 1977, Indian Journal of Radio & Space Physics, 6:60-66 – presented at the Symposium on Earth’s Near Space Environment, 18-21 February 1975, NPL, New Delhi].

This needs to be done for the globe. This is a must for modellers before presenting their results.

Since the 90s, eminent professors changed their path to get more papers published and get more funds and more students. They tried to predict the impacts of a rise in temperature using another set of poor quality models. In fact I questioned them in international journals.

Dr. S. Jeevananda Reddy

BruceC
April 12, 2017 8:30 pm

“A politician needs the ability to foretell what is going to happen tomorrow, next week, next month, and next year. And to have the ability afterwards to explain why it didn’t happen.”

“The truth is incontrovertible. Malice may attack it, ignorance may deride it, but in the end, there it is.”

“It is a mistake to look too far ahead. Only one link of the chain of destiny can be handled at a time.”

“However beautiful the strategy, you should occasionally look at the results.”

Sir Winston Churchill

April 12, 2017 8:47 pm

They are going to lie and fabricate data until they die. Cut off the money, ignore them and publish their work where it belongs, in The Onion.

Robert of Texas
April 12, 2017 8:59 pm

A person this educated should not have so much trouble seeing the problems with what he is evangelizing. If raw data is changed, you need to publish the raw data, the changes to the data, the processes used to change the data, and the justification for the changes. All of this falls under the scrutiny of both supporters and critics to be debated, scrutinized, and improved. Climate Researchers RESIST being scrutinized, and therefore the data they use is simply not scientifically valid. Garbage in-Garbage out.

Models are used in science to find ways to test a theory. You don’t just assume the model is valid, you go back and test what it is predicting. Climate modeling has FAILED completely in predicting anything, and yet they cannot see the issue. This is bias, plain and simple. You invest so much into a work, you will be damned before abandoning it. Human emotion trumping scientific scrutiny.

You cannot extract detail from data that is noisier than what you are attempting to measure. Past temperature data is greatly flawed, and no amount of tweaking it makes it any better – all you are doing is adding in bias – YOUR bias. Satellite measurements are going to be the cleanest source of data (but still require tuning and tweaking to make up for all sorts of problems). Land temperatures are polluted with so much noise (like heat island effect) that it simply is not going to discriminate between natural and man made warming. There is not enough satellite data to tune a model, so catch-22.

There is WAY too much faith in proxies. Proxies are only as good as all the possible effects into them are understood. Tree rings are a perfect example – they depend on temperature, moisture, wind, height, tree species, access to sun light, and who knows what else. You get different results depending on what side of a tree versus the mountain slope you measure in some cases. Proxies are good for generalizations, but not for exact measurements. We should never believe a proxy is a safe substitute for actual measurements, and yet the entire climate science is built on them.

In science, it is the DEBATE that leads to better models and theories. Running and hiding from the DEBATE is a sure sign that something is very wrong. Hiding behind fake consensus is a political tactic. There is no room in GOOD SCIENCE for politics. Trying to silence skeptics is simply shameful. They should be met head on with good reason and facts, not attacked. Attacking your opponents is another sure sign there is something rotten in the theory.

In 20 to 30 years educated people will look back at the 1990s–2020s and shake their heads in wonderment that so many educated people could be so poorly trained in science.

Reply to  Robert of Texas
April 12, 2017 10:11 pm

To the truth that ” Climate modeling has FAILED completely in predicting anything” should be added that “currently existing climate models do not predict.”

hunter
April 12, 2017 9:12 pm

“Projections not predictions” is a deceptive assertion, since climate extremists and climate profiteers demand policies be put in place reflecting the most extreme predictions/projections.
Defenders of the consensus only rely on this faux distinction when defending the failure of their policies or predictions/projections. In other words, it is circular and self-serving, not a serious argument they offer.

ferdberple
April 12, 2017 9:16 pm

But because of their complexity and sophistication, they are so much better than any “back-of-the envelope” guesses, and the shortcomings and limitations are known.
=============
nonsense. there is zero evidence that complexity makes any system “better”. There is a large body of mathematics and engineering that argues the exact opposite. Complexity increases unreliability. It does not increase accuracy.

The IPCC says the ensemble mean is more accurate than ANY SINGLE climate model. Yet four years ago Willis showed how a very simple “black box” could recreate the ensemble mean. In other words, his very simple climate model was MORE ACCURATE than any other climate model, according to the same criteria the IPCC uses.

Here is what Willis said on this 4 years ago:
“the climate model global average surface temperature results, individually or en masse, can be replicated with over 99% fidelity by a simple, one-line equation.”
https://wattsupwiththat.com/2013/06/03/climate-sensitivity-deconstructed/
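For anyone curious what such a “black box” looks like in practice, here is a minimal sketch in Python of a one-box lagged-response model of the general kind Willis described – my own illustration, with placeholder values for the sensitivity and lag, not his fitted equation:

import numpy as np

def lagged_response(forcing, lam=0.4, tau=3.0):
    # lam: sensitivity in K per (W/m^2); tau: lag time constant in years.
    # Both values are illustrative guesses, not fitted to any model output.
    dT = np.zeros_like(forcing, dtype=float)
    for n in range(1, len(forcing)):
        dT[n] = lam * forcing[n] / tau + dT[n - 1] * np.exp(-1.0 / tau)
    return dT

# Toy case: forcing ramps linearly to 3.7 W/m^2 (one CO2 doubling) over a century.
years = np.arange(100)
forcing = 3.7 * years / 99.0
print(round(lagged_response(forcing)[-1], 2))   # end-of-century warming for this toy case

Two tunable numbers and a lag; that is the whole “model”.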

Alan Ranger
April 12, 2017 9:23 pm

“Projections, not predictions
With climate models as tools, we can carry out “what-if” experiments. ”

Trenberth is clearly deluded here. All of his “what-if” scenarios should start off with
“What if the models/theory are actually correct, and what if …”

I’m sure he regards the assumed correctness of his models as some sort of axiomatic (dogmatic?) truth; but the most significant errors are coming from the formulation of the models themselves – as demonstrated again and again from the empirical observations. Somebody needs to teach Scientific Method 101 to this climate “scientist”.

Reply to  Alan Ranger
April 12, 2017 10:13 pm

Right!

ferdberple
April 12, 2017 9:24 pm

A genuine expert can always foretell a thing that is 500 years away easier than he can a thing that’s only 500 seconds off.
– Mark Twain, A Connecticut Yankee in King Arthur’s Court

Leo G
April 12, 2017 9:40 pm

“To show precedent for his position Trenberth cites the 2007 report by the Intergovernmental Panel on Climate Change which states that global warming is “unequivocal”, and is “very likely” due to human activities.”
If the IPCC claimed that global warming had only one possible interpretation – i.e. was unequivocal – then how can it also claim that it was only “very likely” due to human activities, i.e. that there was therefore some small likelihood that it was due to other causes?

hunter
Reply to  Leo G
April 13, 2017 5:24 am

Of course Trenberth and pals controlled the content of the IPCC report so isn’t that just dandy.

April 12, 2017 9:50 pm

With climate models as tools, we can carry out “what-if” experiments.

This is their main con trick: redefining speculation as a mere tool. The assumptions going into the models are speculation because they leave so much out. Just because some of it is true (correct science); does not make the whole true. People like Trenberth may have passed Science 101 but they would’ve failed Reason 101. It annoys me how oblique the models are. In contrast, when economist Wynne Godley wanted to explain, and justify, his life’s work: an alternative economic model – the stock-flow consistent model – he published a 530 page book to explain his ‘tool‘.

Reply to  mark4asp
April 12, 2017 10:15 pm

The major con-trick is application of the equivocation fallacy.

April 12, 2017 11:20 pm


is what Climb It Cyan Tits are all about..

J.H.
April 13, 2017 12:03 am

The “climate science” community doesn’t use the Scientific Method…. They use “Mike’s Trick” instead…..

That’s where you take a Proxy record of tree rings that (supposedly) represents temperature over several hundred years, then lop off the last 4 decades of that tree ring Temp Proxy because it’s showing a decline that doesn’t fit with actual observations, and splice on the modern Thermometer record to “HIDE THE DECLINE” and instead produce/concoct a “HOCKEY STICK”.

These people have no credibility….. The ClimateGate emails showed that succinctly.

charles nelson
April 13, 2017 12:09 am

Kevin wouldn’t recognise the ‘scientific method’ if it came up and bit him on the arse.

Ian Macdonald
April 13, 2017 12:13 am

“it is clear that the effects are not small and have emerged from the noise of natural variability” -Trenberth

Not so. If all the smoothings and averagings were removed, the peak-to-peak noise amplitude of daily and seasonal temperature changes would be fifty times larger than the trend. Attempting to deduce anything from measurements which are so far below the noise floor of the system, is very poor science.

Reply to  Ian Macdonald
April 13, 2017 9:49 am

Ian Macdonald:

To analogize natural variation to “noise” and manmade global warming to a “signal” in the context of an attempt at establishing control over global warming is a mistake. To establish control the controller must have information about the outcomes of events before these events occur but for this information to reach us via a signal violates the limit on light speed in relativity theory.

Ian Macdonald
Reply to  Terry Oldberg
April 13, 2017 11:09 am

I’m not sure what you mean by c and relativity, but in most scientific circles it’s considered dubious practice to rely on data which lacks a few dB of ‘headroom’ above the system noise. Climate data, meanwhile, is about 30dB BELOW the system noise floor. That’s how bad it is.

The reason you don’t see this on the graphs is due to some extreme low-pass filtering.
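For what it’s worth, the arithmetic behind a figure like that is just the usual decibel conversion of an amplitude ratio; a quick sketch using Ian’s rough “fifty times larger” number rather than any particular dataset:

import math

trend = 1.0    # take the trend amplitude as 1 (arbitrary units)
noise = 50.0   # Ian's figure: peak-to-peak noise roughly 50x the trend
print(20 * math.log10(trend / noise))   # about -34 dB, i.e. ~30 dB below the noise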

knr
April 13, 2017 12:48 am

They’ve got a method! But beyond that, it starts from the result they need and makes the data ‘supply’ the right answer.
Trenberth typifies how, through climate ‘science’, third-rate scientists who would otherwise have a hard time getting a job in a second-rate high school can rise to the top of their profession. And enjoy a first-rate lifestyle.
So, given that, does anyone think he will hold his hand up any time soon and admit they are often simply ‘wrong’?

Mike Flynn
April 13, 2017 1:01 am

Over the life of the Earth, CO2 levels have apparently ranged from nearly 100% of a 100 bar atmosphere down to maybe 0.03% of a 1 bar atmosphere.

Surface temperature has dropped from maybe 5800 K to around 288 K.

The internal regions of the Earth remain molten. All explicable by ordinary known physics.

No reason to suppose that CO2 levels control surface temperature. Trenberth is deluded.

Cheers.

April 13, 2017 1:21 am

Trenberth challenges nothing less than the theory of relativity

E=mc^2

In other words, atmospheric energy can vary as a function of atmospheric mass or of the speed of light.

Which one is mankind influencing?

Mark Johnson
April 13, 2017 1:23 am

I am really not interested in paying Mr. Trenberth’s salary any longer. Undoubtedly, someone like George Soros will step in to fill the bill, but that will be up to his new patron. Mr. Trenberth appears to lack any sense of humility. He offends me.

Scottish Sceptic
April 13, 2017 1:34 am

Here’s the scientific method:

Something may be causing global warming … CO2 is well dispersed so it could cause global temperature change … the satellites, balloon data, sea surface data, show little or no warming – but the Arctic is warming (although Greenland is gaining surface ice) … but even if a region is warming it is not global. Conclusion: there is no evidence for current “global warming”, so there is no evidence for CO2 warming – instead we must look for regional drivers for regional temperature changes.

Here’s Trenberth’s method

We know CO2 causes warming … we know the satellites, balloon data and sea surface data must be corrupt because they don’t show the warming of NORTHERN Hemisphere LAND (AFTER adjustments to remove rural stations) …. so this very regional warming – in an outlier (or Out-liar?) dataset proves that the globe is warming “as predicted”. The science is settled, the consensus is agreed and anyone who dares quotes the lack of warming from the satellites is a denier of the science.

Sheri
Reply to  Scottish Sceptic
April 13, 2017 10:39 am

I believe you understand!!

Admad
April 13, 2017 1:55 am

Yes. I’m sure Lysenko would have defended his “methods”

April 13, 2017 2:20 am

The only reason this is relevant is because politicians and special interests have chosen to use raw science as if it is verified data that is safe to use.

Climate science is very much like Cosmology. But I don’t see cosmologists demanding that we dramatically change how we live because there may or may not be more dark matter than we thought. Or that dark matter even exists.

michael hart
Reply to  mickyhcorbett75
April 13, 2017 4:49 am

As a discipline it also appears too casual in the way that the fundamental data is subject to significant revisions with a concomitant disregard for how the previous ‘understandings’ were modeled on the now revised data.

Berényi Péter
April 13, 2017 2:56 am

With climate models as tools, we can carry out “what-if” experiments.

The term experiment has a specific meaning in science and running computational models is not covered by it. Therefore what they actually carry out is anything but an experiment.

Terrestrial climate is a non equilibrium irreproducible (chaotic) closed thermodynamic system, with only radiative exchange with its environment. It is far too big to be replicated in the lab, so at this level we are trying to understand a single run of a unique physical entity, an impossible task.

However, we could create other non equilibrium irreproducible (chaotic) closed thermodynamic systems in the lab at will, for example by putting a semitransparent container with a fluid in it onto a thermally insulated rotating table, enclosed in a vacuum chamber with walls cooled by liquid nitrogen and irradiate it with light. This kind of system belongs to one of the last uninvestigated fields of classical physics.

Then set up a computational model of that system and try to predict the effect of changing the infrared absorptivity of the fluid in it or whatever. That’s an experiment.

You can run it as many times as you wish and set its parameters at each run to any specific value, then observe the ensuing state.

AP
Reply to  Berényi Péter
April 13, 2017 4:30 am

I often look into a crystal ball to carry out my experiments. It has the same predictive power as a climate model.

April 13, 2017 3:38 am

ehhh
scratch
I think Trenberth’s models still have not captured the true nature of what happens at the TOA…
https://wattsupwiththat.com/2017/04/07/questions-on-the-rate-of-global-carbon-dioxide-increase/comment-page-1/#comment-2474983

I was just talking about that.

Martin A
April 13, 2017 3:52 am

It’s a travesty.

April 13, 2017 5:58 am

The problem with climate science is there is no way to test the core prediction, that the Earth will heat substantially in response to anthropogenic CO2 emissions, other than to wait and see.

Sure you can, you just have to look in the right place, which isn’t after averaging all of the data away.
MEASUREMENTS 🙂

Reply to  micro6500
April 14, 2017 11:35 am

Actually, you just touched on a core truth
I found no warming in the SH and more warming in the NH, including more ice melt in the Arctic…
So, to me, it seems earth’s core has been moving, especially North east, going by the movement of earth’s magnetic north pole. The elephant in the room was all but forgotten…
Come down 1 km into a gold mine here and when you start sweating, you realize how big this elephant really is…

Kermit Johnson
April 13, 2017 6:04 am

I mentioned curve-fitting data with computer models using a fudge factor in an earlier post. I was surprised that there were no comments. After thinking about it, however, I realize that there are extremely few people, even scientists, who understand what the term curve-fitting means when making models of a chaotic system.

This is the problem. Even most scientists have no idea what the pitfalls are when it comes to modeling a complex system. The people in climate science who do understand – will not talk about it. And, it is obvious why they won’t talk about it. I believe it can be called willful ignorance.

richardscourtney
Reply to  Kermit Johnson
April 13, 2017 11:57 am

Kermit Johnson:

You wrote

I mentioned curve-fitting data with computer models using a fudge factor in an earlier post. I was surprised that there were no comments. After thinking about it, however, I realize that there are extremely few people, even scientists, who understand what the term curve-fitting means when making models of a chaotic system.

With respect, I suggest the reason nobody commented is because everybody agreed.

However, since you want comments, I support your post with the following two points.

The fudge factor is the assumed value of negative forcing from aerosols. Refs:
Courtney RS, ‘An assessment of validation experiments conducted on computer models of global climate using the general circulation model of the UK’s Hadley Centre’. Energy & Environment, Volume 10, Number 5, pp. 491-502, September 1999
and
Kiehl JT, ‘Twentieth century climate model response and climate sensitivity’. GRL vol. 34, L22710, doi:10.1029/2007GL031383, 2007

There are many variables in climate models, and that creates problems when curve fitting because, as John von Neumann said of curve fitting,

with four parameters I can fit an elephant, with five I can make him wiggle his trunk
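To see von Neumann’s point concretely, here is a toy sketch (my own, on synthetic data): give a least-squares fit enough free parameters and it will happily “explain” pure noise in-sample, which is exactly the trap with tuned models.

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 20)
y = rng.normal(size=20)                 # pure noise: there is nothing real to "explain"

for degree in (1, 4, 9):                # more parameters, ever better in-sample "fit"
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    print(degree, round(float(np.sqrt(np.mean(resid**2))), 3))   # in-sample RMS shrinks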

Richard

Dalo
April 13, 2017 6:04 am

Why do you say ‘may’ have strayed outside of scientific method?

Call it how it is sir.

paullinsay
April 13, 2017 6:06 am

Enrico Fermi was once asked if he followed the scientific method. “Yes, if I can’t think of something better”. 🙂

https://en.m.wikipedia.org/wiki/List_of_things_named_after_Enrico_Fermi

April 13, 2017 6:14 am

The scientific method, as widely described and promoted, is based on experiments to test hypotheses.

Many branches of natural science can only observe the world and construct theories and hypotheses, but experimental verification can be difficult to impossible (or it might be theoretically possible, but practically impossible).

Making a climate prediction about effects of CO2 in the atmosphere on future temperature, based on educated guesses, and waiting 25 years to see if your prediction is correct isn’t even an experiment, really because there’s no way of controlling any of the multiple conditions, all of which are varying all the time.

Laboratory simulation would be a valid, scientific approach to testing climate hypotheses or theories. With all the money that’s been thrown at collecting data and making computer models, it would surely be possible to build an atmospheric laboratory where you could simulate observed atmospheric conditions and start varying input conditions, one at a time, and measure their effects. Of course you could only simulate small parts of the atmosphere at any one time, but you could integrate multiple tests into a simulation of the whole atmosphere.

That would be a genuinely scientific approach to climate science. That it hasn’t been done is disappointing… pathetic… outrageous.

Or has it been done, the results didn’t support AGW, and they got disappeared? In the present-day “climate” of opinion, that would not be surprising.

Berényi Péter
Reply to  Smart Rock
April 13, 2017 8:16 am

Laboratory simulation would be a valid, scientific approach to testing climate hypotheses or theories.

Of course it would be. And you don’t even need to emulate the atmosphere or a part of it, any non-equilibrium irreproducible closed thermodynamic system would suffice.

BTW, a system is irreproducible, if microstates belonging to the same macrostate can evolve into different macrostates in a short time. Chaotic systems, including terrestrial climate, belong to this class.

Physics of these systems is unknown, because not even Jaynes entropy can be defined on them.

see more here

Reply to  Smart Rock
April 13, 2017 8:48 am

Smart Rock:

Though there are many independent variables, each of them varying continuously, it is possible in concept to create a statistically validated model. It is possible to do so today, though this was impossible five decades ago, as five decades ago we did not have information theory at our disposal but today we do. Information theory makes it possible for a model builder to deal successfully with missing information. Professional climatologists seem to be five decades out of date in their grasp of model-building technology.

richardscourtney
Reply to  Smart Rock
April 13, 2017 12:22 pm

Smart Rock:

You say

Making a climate prediction about effects of CO2 in the atmosphere on future temperature, based on educated guesses, and waiting 25 years to see if your prediction is correct isn’t even an experiment, really because there’s no way of controlling any of the multiple conditions, all of which are varying all the time.

Sorry, but such long-term prediction is a valid test of an hypothesis.

For example, Edmund Halley’s prediction in 1705 that the comet now named after him would be seen in 1758. He made this prediction because he hypothesised that the comets seen in 1531, 1607 and 1682 were the same comet and it had a regular orbit which was disturbed by the gravitational attractions of Jupiter and Saturn. His prediction proved correct (after his death in 1742) so his hypothesis was then elevated to a theory which has subsequently obtained much confirming evidence.

Richard

Frank
April 13, 2017 7:21 am

Eric writes: “This failure of climate science to follow the normal scientific progression to more accurate estimates should be a serious concern. This lack of convergence on a central climate sensitivity estimate, after decades of research effort, strongly suggests something is missing from the climate models.”

I disagree. This failure to converge on a narrow estimate for climate sensitivity tells us that IPCC scientists are accurately reporting the uncertainty in their understanding of climate sensitivity. There are a wide variety of parameterizations of climate models that provide equally good (or bad, if you prefer) representations of current climate. The IPCC’s wide confidence interval for climate sensitivity recognizes that they don’t know which parameterization is best.

However, by reporting QUANTITATIVE estimates of projected WARMING-related climate change derived ONLY from a selected subset of climate models (an “ensemble of opportunity” chosen by governments), the IPCC is underestimating the uncertainty associated with these projections. Even then, they use their “expert judgment” to report projected warming that formally qualifies as being “very likely” according to models as merely “likely” (not that the public understands the difference). Therefore, the IPCC acknowledges more uncertainty in equilibrium climate sensitivity than in projected warming. This is partially because there is less uncertainty in TCR than in ECS. As long as CO2 is rising, TCR is a more relevant measure of climate sensitivity than ECS, and CO2 rises for most of the century in some scenarios.

Eric continues: “Whatever the missing or mishandled factor is, it has a big influence on global climate. The evidence for this is the embarrassingly broad range of estimates for climate sensitivity to a doubling of CO2, and the failure of those estimates to converge.”

I disagree here also. Models make different projections mostly because they use different parameterizations, not because something is missing from some models. If we knew something was missing from some models, we would simply include all the right things in one model. The problem is that the process of tuning parameters one-by-one can be done in many different ways and does not lead to a unique optimal set of parameters.

A weak analogy: The equation KE + PE = Total energy applies to some physics problems, but there are many different ways total energy can be partitioned. There are many equally good ways to parameterize climate models and we don’t know which is “right”.

Eric concludes: If climate models were capable of producing accurate predictions, if they showed any sign of converging on a reasonable climate sensitivity estimate, if predicted secondary phenomena such as the tropospheric hotspot and sea level rise acceleration were readily observable, there would be a lot less resistance to Trenberth’s apparent demand that climate model projections be accepted as somehow equivalent to empirical observations.

Trenberth has never demanded that “that climate model projections be accepted as somehow equivalent to empirical observations.” There are serious problems and limitations with our models AND with our empirical observations.

The hot-spot: The satellite record shows that the troposphere is warming more slowly than the surface, which is inconsistent with our understanding of the factors that control the lapse rate. One of the following three is therefore incorrect: 1) surface warming, 2) troposphere warming or 3) “our understanding”. The absence of a hot-spot depends on the satellite record being correct.

The putative absence of acceleration in SLR: Your link points to data showing that global SLR has varied over the 20th century. Variation in a rate demands that acceleration and deceleration have occurred. Your link shows only that the current rate of SLR is not unprecedented – not that it hasn’t accelerated recently.

Do climate models predict that we should already have been able to unambiguously detect an acceleration in SLR? Actually, climate models predict a wide range of acceleration in the rate of SLR. For RCP 6.0, the rate of SLR in 2100 is projected to be from 4-10 mm/yr (or cm/decade). At the lower end, this is almost NO acceleration from today’s rate of SLR. The midpoint represents an increase of 4 cm/decade, or 0.5 cm/decade/decade. That is about a 15% increase per decade, not a big change. (For RCP 8.5, acceleration needs to be twice as big.) The acceleration in the satellite record is not quite statistically significant, but the central estimate for the increase over the past 24 years is 0.66 mm/yr, or about a 20% increase on the current rate. These observations are COMPLETELY CONSISTENT with climate models; they don’t invalidate them.

The upper limit for SLR in the IPCC’s models gets all of the headlines. The lower limit requires very little acceleration. Lack of acceleration will never invalidate AOGCMs. However, the hindcast SLR for the 20th century from all models could be inconsistent with observations. The IPCC publicizes the agreement between projected and observed warming, but never the agreement between observed and projected SLR.

Reply to  Frank
April 13, 2017 8:28 am

Frank:
The equilibrium climate sensitivity (TECS) is the ratio of two numbers. The numerator is the change in the surface air temperature at equilibrium, aka steady state. The denominator is the change in the logarithm of the atmospheric CO2 concentration. The numerator is insusceptible to measurement. Thus when a numerical value is assigned to TECS this value is not falsifiable. As it is not falsifiable, TECS is not a “scientific” concept. In particular, to assign a value to TECS is not to gain any information that is pertinent to regulation of Earth’s climate.
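Written out explicitly (using a base-2 logarithm so that one doubling contributes exactly one unit to the denominator), the ratio being described is

$$ \mathrm{ECS} \;=\; \frac{\Delta T_{\mathrm{eq}}}{\log_{2}\!\left(C/C_{0}\right)} $$

where \( \Delta T_{\mathrm{eq}} \) is the equilibrium change in surface air temperature and \( C/C_{0} \) is the ratio of the final to the initial CO2 concentration.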

Frank
Reply to  Terry Oldberg
April 14, 2017 8:34 am

Terry: ECS is a falsifiable “theory”. In its most simple form, we simply need to wait a century or so to measure the numerator of this ratio. It is admittedly hard to do a well-controlled experiment on our planet, but we are nearly at the equivalent of a doubling of CO2 from the combined forcing of all rising GHGs. The problem is that rising aerosols have complicated this “experiment”. Energy balance models predict from observations of dF and dT that the best estimate for ECS is about 1.5-2.0 K/doubling, but the confidence interval is wide.

There is another way of approaching ECS and that is to ask how much more heat leaves the planet for every degC the planet warms. That is sometimes called the climate feedback parameter. It is the reciprocal of ECS (measured in W/m2/K). You don’t need to wait a century or more to reach equilibrium when you try to measure the climate feedback parameter from observations.
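For readers keeping track of units: the feedback parameter and ECS are reciprocal only up to the forcing for a doubling of CO2 (commonly taken to be about 3.7 W/m2), i.e.

$$ \lambda \;=\; \frac{F_{2\times}}{\mathrm{ECS}}, \qquad \mathrm{ECS} \;=\; \frac{F_{2\times}}{\lambda}, \qquad F_{2\times} \approx 3.7\ \mathrm{W\,m^{-2}} $$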

Reply to  Frank
April 14, 2017 8:54 am

Frank:

The global temperature fluctuates. Thus, by the definition of terms, Earth does not reach an equilibrium temperature.

Reply to  Frank
April 14, 2017 9:21 am

It is admittedly hard to do a well controlled experiment on our planet

I used the slope of temperature as the extratropics go through the seasons. It’s straightforward to calculate a TOA energy value for that location, and you have the surface response in temperature.
This does not work for the tropics, at least not as it’s written, so I only run it outside the tropics.
https://micro6500blog.wordpress.com/2016/05/18/measuring-surface-climate-sensitivity/

Frank
Reply to  Terry Oldberg
April 14, 2017 11:16 am

By the definition of ECS, the Earth reaches an equilibrium temperature when the long-term average radiative imbalance at the TOA is zero (or is negligible compared with the forcing that produced a temperature change). That means that the atmosphere and ocean on the average are neither warming nor cooling.

If you want to get picky, the radiative imbalance at the TOA is expected to become negligible before the ice caps have fully responded to the new equilibrium temperature. To deal with this problem, climate scientists have created the concept of an “earth system sensitivity”, which encompasses millennial changes in ice caps and the temperature change that follows this change in surface albedo. It took about 10 millennia for the rate of sea level rise to slow after the end of the last ice age. On the millennial time scale, Milankovitch changes in the Earth’s orbit also become important. ECS has been defined in such a way that these millennial issues are irrelevant.

Global temperature fluctuates. Weigh an object with an accurate scale or balance and the result fluctuates too – from motion, air currents and static electricity. We average measurements of both weight and temperature. The fluctuations in temperature do have somewhat different causes than the fluctuations in weight: seasons, chaotic fluctuations in wind, water currents, clouds, the 11-year solar cycle, etc. Nevertheless averaging gives a useful answer.

Reply to  Frank
April 14, 2017 12:42 pm

Frank:

Your understanding of the operative principles is not exactly correct. First of all, the “concrete” Earth (the one on which you and I live) possesses a field of temperatures such that at each space point in this field the temperature is generally different. By the definition of terms, each such temperature is an “equilibrium temperature” if and only if the magnitude of the heat flux at the associated space point is 0. If the magnitude of the heat flux is 0 at every space point in the field then the field of temperatures is a field of equilibrium temperatures, but not otherwise. Thus, even if every temperature is an equilibrium temperature, the temperatures at the various space points generally vary.

The “concrete” Earth is never in a state in which every temperature is an equilibrium temperature and each such temperature is identical. It is a kind of “abstract” Earth that is capable of being in this state. One of the many errors in thinking made by the global warming climatologists is to confuse the abstract with the concrete Earth. To confuse the two Earths is to “reify” the abstract Earth by treating it as if it were the concrete Earth. Reification is regarded as a fallacy.

Frank
Reply to  Terry Oldberg
April 15, 2017 3:43 am

Terry: The fluctuations at individual locations are unimportant. Equilibrium is reached when the global radiative imbalance is zero (or negligible compared with the forcing causing warming).

From a practical point of view, since 93% of any radiative imbalance goes into the ocean, we could monitor our approach to equilibrium with the ARGO array. Current anthropogenic forcing is something like 2.5 W/m2 and the current radiative imbalance according to ARGO is about 0.7 W/m2. Say we follow RCP 6.0. The imbalance presumably is going to rise. When it drops below 0.6 W/m2 (averaged over a decade), we would be 90% of the way to equilibrium warming. That should be good enough to estimate where 100% of equilibrium warming lies and calculate ECS.
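If I read that arithmetic correctly, the implicit relation is that the realized fraction of equilibrium warming is roughly one minus the ratio of the TOA imbalance to the applied forcing; the 90% figure then assumes an end-of-century RCP 6.0 forcing of roughly 6 W/m2 (my inference, not a number Frank stated):

$$ \text{fraction realized} \;\approx\; 1 - \frac{N}{F}, \qquad 1 - \frac{0.6}{6.0} = 0.90 $$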

Reply to  Frank
April 15, 2017 7:08 am

and the current radiative imbalance according to ARGO is about 0.7 W/m2.

How do they get a radiative imbalance from ARGO buoys floating in the ocean?

Reply to  Frank
April 15, 2017 11:03 am

Frank:
You seem to think that TECS has a point value. What’s your argument?

richardscourtney
Reply to  Frank
April 13, 2017 12:35 pm

Frank:

You contradict yourself when you write

Eric writes:

“This failure of climate science to follow the normal scientific progression to more accurate estimates should be a serious concern. This lack of convergence on a central climate sensitivity estimate, after decades of research effort, strongly suggests something is missing from the climate models.”

I disagree. This failure to converge on a narrow estimate for climate sensitivity tells us that IPCC scientists are accurately reporting the uncertainty in their understanding of climate sensitivity. There are a wide variety of parameterizations of climate models that provide equally good (or bad, if you prefer) representations of current climate. The IPCC’s wide confidence interval for climate sensitivity recognizes that they don’t know which parameterization is best.

You saying you “disagree” does not refute Eric’s correct statement.

You claiming “they don’t know which parameterization is best” is an assertion that the models don’t include knowledge of “which parameterization is best” (i.e. that knowledge “is missing from the climate models”).

And you are asserting self-delusion when you claim without evidence that you know the “something” which is probably “missing from the climate models”. In reality, all we know is that the “lack of convergence on a central climate sensitivity estimate, after decades of research effort, strongly suggests something is missing from the climate models.”

Richard

Frank
Reply to  richardscourtney
April 14, 2017 8:18 am

I believe Eric was stating his OPINION and I expressed my contradictory opinion. My opinion is even based on some facts; in particular, the ensembles of perturbed parameter models described by Stainforth et al and the climateprediction.net group in England. They tested thousands of variations of a simplified model where six (or more?) parameters were chosen at RANDOM from within a physically plausible range. ECS among the ensemble ranged from 1.5 to 11.5 K/doubling. Later they used a panel of eight climate observations (temperature, rainfall, albedo, etc) to systematically pick the best set of parameters. No global optimum could be found: Parameter sets that were good for precipitation would be inferior for albedo or temperature and vice versa. They also tried and failed to find a portion of the physically plausible range for any parameter that consistently gave inferior results (so it could be discarded). Worst of all, they found that parameters interacted in unexpected ways, making one-by-one tuning of parameters in more sophisticated models a dubious process that is unlikely to discover a global optimum. Change the order in which parameters are tuned and you probably will reach a different local optimum.

This work demonstrates that many different future climates are consistent with the laws of physics, an emissions scenario, and a set of parameters that reproduce today’s climate reasonably well.

More sophisticated models may not behave this badly, but it is too computationally expensive to thoroughly explore the parameter space of the sophisticated models used by the IPCC. However, the GFDL group has multiple variations of its basic model with different climate sensitivity. They recently found they could reduce the climate sensitivity of one model by 1 K/doubling by using the entrainment parameterization scheme from a lower-sensitivity model – apparently without reducing the model’s ability to accurately reproduce current climate. As best I can tell, there are likely to be dozens of different parameterizations for a given climate model that are equally good at representing today’s climate while having a wide range of climate sensitivity.

The “something” that is missing from today’s climate models is unambiguous evidence that the parameter set chosen for IPCC reports is superior to other possible parameter sets. Without a way to generate or identify a superior set of parameters, AOGCMs won’t be inconsistent with an ECS of 1.5 or 4.5 K/doubling. This wide range doesn’t invalidate AOGCMs, but it does mean that they don’t provide the useful narrow range of projections policymakers need.

A debate I am having elsewhere with someone about SLR may provide a useful analogy. He is fitting exponential and quadratic models to sea level data and projecting more than 1 m of SLR by the end of the century. However, these models and a linear model all fit the data equally well (R2 of 0.98+). And the 95% confidence interval for the acceleration coefficient for the quadratic model ranges from zero to twice the best estimate for that parameter. Though the three models produce very different central estimates for SLR, the range of futures they project is very wide and overlapping. Part of the problem here is that simple curve fitting does not model all of the physics needed to explain why sea level is rising. Climate models have a similar problem; they replace cloud microphysics and turbulent fluxes within a grid cell with parameters.

richardscourtney
Reply to  richardscourtney
April 14, 2017 9:20 am

Frank:

Eric stated his evidence-based judgement of what is probably true; i.e. he stated a scientific conclusion. I see no reason to doubt his conclusion.

Richard

Frank
Reply to  richardscourtney
April 14, 2017 12:26 pm

Richard, Eric wrote: “This failure of climate science to follow the normal scientific progression to more accurate estimates should be a serious concern. This lack of convergence on a central climate sensitivity estimate, after decades of research effort, strongly suggests something is missing from the climate models.”

As best I can tell, neither your nor Eric’s words show any understanding about the problem of parameterization and the fundamental reasons why AOGCMs haven’t converged on a narrow range of ECS. He has provided no scientific evidence. He has expressed the OPINION that this is because AOGCM physics is wrong or incomplete. I’m commenting because my reading indicates he is wrong. Some links are given below.

If some models were more complete than others, it would be trivial to include all of the same physics in every model or simply report results only from the models that were “complete”. For example, only some AOGCMs include the interaction between aerosols and cloud droplet size and reflectivity – the indirect aerosol effect. However, the aerosol indirect effect is too small to account for the vast differences in model climate sensitivity and those that don’t include it believe it is negligible. If it were a big factor, every model would include it.

The laws of physics are not wrong, but cloud formation and turbulent flow occur on scales much too small to be calculated for each grid of an AOGCM. That means these phenomena must be represented by tunable parameters. The vast differences between models arise from this parameterization. We know that because changing the parameters of one model can change its ECS dramatically without always interfering with its ability to represent current climate accurately. The current method by which models are tuned does NOT yield a set of parameters which represent today’s climate better than any other possible sets. This has been proven by studies where the parameters of models were systematically varied.

http://www.climateprediction.net/climate-science/climate-ensembles/perturbed-physics-ensembles/
http://www.climateprediction.net/wp-content/publications/nature_first_results.pdf
http://www.climateprediction.net/wp-content/publications/ClimateDynamics_Feb2008.pdf

richardscourtney
Reply to  richardscourtney
April 14, 2017 12:35 pm

Frank:

From behind anonymity you write to me

As best I can tell, neither your nor Eric’s words show any understanding about the problem of parameterization and the fundamental reasons why AOGCMs haven’t converged on a narrow range of ECS.

I refer you to this post I made in this thread earlier today.

Get back to me when you have been studying and publishing on the matter for as long as I have.

Richard

Frank
Reply to  richardscourtney
April 14, 2017 2:37 pm

Richard wrote: “Get back to me when you have been studying and publishing on the matter for as long as I have.”

Despite your experience, I suggest that you reply only after you have read the links I provided concerning perturbed parameter ensembles. The comments you linked are totally irrelevant to what has been learned from systematically exploring model parameterization.

As best I can tell from the abstract alone, your E&E paper has nothing to do with model parameterization.

The paper by Kiehl discusses the fact that different models produce different amounts of forcing from what should be the same inputs of aerosol and GHG change. This indeed may be part of the reason why different models produce different climate sensitivity. This is one reason why Hansen invented the concept of effective radiative forcing. Despite the fact that we commonly believe that doubling CO2 instantaneously slows radiative cooling to space by 3.7 W/m2, different models produce different quantities for this value. However, a model that produces a forcing for doubled CO2 that is bigger or smaller can produce a proportionally bigger or smaller equilibrium warming and therefore have exactly the same ECS.

Among the CMIP3 models, high climate sensitivity was associated with high sensitivity to aerosols cooling, but this is not true for the CMIP5 models.

However, when you take ONE model with ONE input of aerosols and GHGs and then change the model parameters (perturbed parameter ensembles), you get different ECSs. And the modified model parameterization won’t necessarily produce an inferior representation of current climate. IMO, this is the fundamental reason models haven’t converged on a single value for ECS. The compromises that must be made to model climate and weather in grid cells that are large enough to be practical computationally force modelers to use parameters they can’t optimize systematically. And for which an optimum may not exist.

Steve Thayer
April 13, 2017 7:51 am

The refusal of the climate model keepers to correlate their models to measured data tells us all that the purpose of these models is not to predict the future, their purpose is to create alarmism and generate support for more funding of climate change studies and projects. The model keepers could adjust the unknowns in their climate models, like feedback, so their predicted temperature responses match measured data over the last 30 to 50 years, but they don’t. They insist on keeping the parameters in their model like feedback that they can not possibly know or measure at the values they are, even though adjusting them would result in more accurate predictions, because they gain nothing from having more accurate, less alarming model predictions. If they were trying to sell these models as a tool for predicting future temperatures, they would adjust their unknowns completely differently, they would adjust them so that the models make more accurate predictions. But the money generated from these models is from the alarmism they create, so there is no motivation to make them accurate.

krischel
April 13, 2017 9:37 am

The Scientific Method must start with a necessary and sufficient falsifiable hypothesis statement, to wit:

1) a list of observations, which if observed, mean a hypothesis is false;
2) a logical argument that the lack of those falsifications means that a hypothesis must be favored over all others (including the null).

Translation into plain english:

1) tell me what would change your mind;
2) tell me why, if the things that would change your mind aren’t there, the only explanation left is yours.

Gil
April 13, 2017 10:34 am

Uh, oh. There’s that word “distinguished” again – attached as part of Trenberth’s title, just as it is to Mann’s and McKibben’s.

Reply to  Gil
April 13, 2017 8:46 pm

In other words, a movement is afoot that represents a fake scientist as the real thing. Hear, hear!

April 13, 2017 10:55 am

Here’s a crude 1-D “model” of the Earth: the cold of space, T0, on one side and the heat of the Sun, T1, on the other. Take them as Planck power spectra for the respective temperatures, e.g. 3 and 5800.

T0 A S A T1

Collapse the spectral filtering of the atmosphere (a simple */ , in Iverson’s not Moore’s notation) across the atmospheric spectral layers to a single spectrum, A, over transparency and absorptivity=emissivity by wavelength, ( Transparency ; ae ). S is an opaque surface with a spectrum ( 0 ; ae ). Probably it’s as simple to implement the full Schwarzschild differential and understand it.

I just want to see the spectral equations, equivalent to what you’d find in Intro Heat Transfer if it covered radiant transfer between surfaces of arbitrary spectra, which show how and by how much hotter S becomes than the value computed for the lumped “seen from the outside” ( A S A ).

This is obviously a rather simple, experimentally and quantitatively testable configuration. And experiment trumps all of us.

I’m starting a http://cosy.com/Science/ComputationalEarthPhysics.html page building the computational “audit trail” between parameters at an APL level upon this core. I invite comments over there and subscription to the discussion for those interested in this dimension of the problem.
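As a placeholder until the full spectral (Schwarzschild) treatment is written, here is a deliberately crude, gray, single-layer Python version of the T0 A S A T1 sandwich. It collapses the whole spectral question into one absorptivity number – exactly the simplification I want to get beyond – but it does show how much hotter S ends up than the lumped “seen from the outside” value:

import math

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def surface_temp(absorbed_solar, eps):
    # Gray single-layer atmosphere over an opaque (unit-emissivity) surface,
    # transparent to solar. Layer balance: eps*sigma*Ts^4 = 2*eps*sigma*Ta^4,
    # TOA balance: absorbed_solar = sigma*Ts^4*(1 - eps/2), hence:
    return (absorbed_solar / (SIGMA * (1.0 - eps / 2.0))) ** 0.25

S = 240.0                              # roughly Earth's mean absorbed solar, W/m^2
T_lumped = (S / SIGMA) ** 0.25         # "seen from the outside" effective temperature
print(round(T_lumped, 1), round(surface_temp(S, 0.0), 1), round(surface_temp(S, 0.8), 1))
# ~255 K with no absorbing layer; ~290 K with eps = 0.8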

Sheri
April 13, 2017 11:16 am

“With climate models as tools, we can carry out “what-if” experiments. What if the carbon dioxide in the atmosphere had not increased due to human activities? What if we keep burning fossil fuels and putting more CO2 into the atmosphere?”

Projections are just a bunch of “what if” statements. There is no reason to believe that any one of them will actually happen. Assuming the conditions are properly addressed, and ALL conditions are met, then yes, one will come true. Assuming no future where ALL the conditions in the projection are met, then no, they’re basically just science fiction. The IPCC makes projections and pretends they are something more certain. They are not.

Predictions are based on initial conditions and often assume no changes when they produce a trend line or whatever is being predicted. Initial conditions matter very much in predictions.

As far as I can tell, these are the definitions used by the IPCC and others in climate science. However, many do not use these definitions. While it’s not just semantics, to a large degree it’s the complete failure of the scientists to accurately state what they are doing. Trenberth seems correct in his usage, but fails to note that his “projections” are little more than science fiction. It seems very much like using a video game and changing the factors, which then change the outcome. Few would argue that playing a video game and using its outcome is useful science. Yet projections appear to be basically just that. People have been totally misled as to what “science” is involved.

Resourceguy
April 13, 2017 11:27 am

It’s not okay to behave professionally in some paragraphs of an IPCC final report and unprofessionally in summary paragraphs that ignore the uncertainty of the science. It’s also unprofessional and non-science to look the other way on important documents that mix in the agendas being expressed. Show some backbone.

Tom O
April 13, 2017 11:43 am

Trenberth has some valid thoughts, and some very not valid thoughts. There really is no way of “testing” prediction, so “prediction” needs to be considered as “almost data” until disproven. There is a real need to consider the consequences of what will happen if they are even partially right.

The problem is that the alarmist community has settled on a single facet and said this is the only factor that is immutable. If they opened their minds, then they could start to find that “convergence” spoken of in the article. But when you barricade yourself behind an immutable factor, you are not capable of objective or subjective change.

This is, of course, the essence of either a religious concern or a dogmatic scam. There are those that are willing to allow this to be a religious thing. I personally think this is a coldly calculated method to depopulate the world through starvation and hypothermia. And the same group that is running this scam is going to go to plan NW if they fail to depopulate the world in this manner. And you probably can guess that plan NW is nuclear war.

April 13, 2017 12:34 pm

“These are just projections! They can’t be evaluated like scientific predictions!”

“We absolutely must spend trillions of dollars because of these projections!”

April 13, 2017 12:44 pm

Some excellent comments here. But why aren’t you making them over at the Conversation, where Trenberth and his coauthor Reto Knutti (who replies to comments) might read them?

I’m banned from commenting at the Conversation, ever since I quoted an article from WUWT and they changed the rules to ban quotations from “sources considered unreliable.” But my good friend Ming Fangjian linked to this article in a comment 15 hours ago, and his comment has stayed up. If everyone here aired their opinions there it would do wonders for Dr Trenberth’s hit rate as a Conversation author, and might enlighten the readers of the Conversation.

richardscourtney
Reply to  Geoff Chambers
April 14, 2017 1:28 am

Geoff Chambers:

You asked

Some excellent comments here. But why aren’t you making them over at the Conversation, where Trenberth and his coauthor Reto Knutti (who replies to comments) might read them?

Then you answered that yourself when you wrote

I’m banned from commenting at the Conversation, ever since I quoted an article from WUWT and they changed the rules to ban quotations from “sources considered unreliable.”

And why do you think anybody here (whose comments would probably be “banned” there) would want to “do wonders for Dr Trenberth’s hit rate as a Conversation author”?

Richard

April 13, 2017 1:02 pm

Re: Scientific method. One alarmist argument goes like this: Over time, each scientific discipline becomes ever more specialized. Scientists are no longer able to cross over between disciplines. Scientific practice within a discipline becomes ever more specialized. So only those within that discipline have the in-depth knowledge and experience to decide what is and is not acceptable scientific method within, say, ‘climate science’. As such, Popper is old hat. He’s a universalist in a world of particularisms. I suppose the post-modern version of this argument celebrates the multiplicity of sciences and philosophies of science.

That was how the argument was put to me. I would counter it by saying that all science is interlinked and disciplines depend on each other. The apparent atomicity of disciplines is just a function of how scientific research goes into ever more detailed terrain. Because science is still one thing, Popper is still valid.

Gloateus
Reply to  mark4asp
April 13, 2017 1:14 pm

There is no arcane part of so-called “climate science” requiring initiation into its cultic practices. It’s all worse than worthless: made-up GIGO modeling and false assumptions, lacking real science.

Anyone with an undergrad degree in any scientific or engineering discipline can easily show the whole corrupt enterprise to be hopelessly flawed and false.

Gloateus
Reply to  Gloateus
April 13, 2017 1:15 pm

Hence the need for appeal to Druidical authority and the ludicrous 97% lie.

willhaas
April 13, 2017 1:16 pm

If the IPCC really knew what they were doing then they would have only one climate model to contend with rather than a plethora of models. At the very least they would by now have thrown out the worst of the models, but they have not done that either. To simulate climate they have started with a weather simulation, have increased the spatial and temporal sampling intervals, and have hard-coded in the concept that adding CO2 to the atmosphere causes warming. Their begging the question makes their simulations worthless. Another concern is that increasing spatial and temporal sampling intervals may make the simulation slightly unstable, so that the results are more a function of the induced numerical instability than of anything else.

The most important thing for the IPCC to do is to make an accurate determination of the climate sensitivity of CO2, yet after more than two decades of effort they have been unable to reduce the range of their guesses one iota. It is really a matter of politics and not science. One researcher has pointed out that the original calculation of the Planck climate sensitivity of CO2 is too great by a factor of more than 20, because the calculation neglects that a doubling of CO2 will cause a slight decrease in the dry lapse rate in the troposphere, which is a cooling effect. So instead of 1.2 degrees C the climate sensitivity of CO2 should be less than .06 degrees C. Then there is the issue of H2O feedback. H2O is a net coolant in the Earth’s atmosphere, as evidenced by the fact that the wet lapse rate is significantly less than the dry lapse rate, so that rather than amplifying the effect of CO2 by a factor of 3, H2O attenuates the effect of CO2 by a factor of 3, yielding a climate sensitivity of less than .02 degrees C for a doubling of CO2. But the IPCC will never consider such low numbers for fear of losing their funding.

The reality is that the radiant greenhouse effect has not been observed anywhere in the solar system. The radiant greenhouse effect is science fiction as is the AGW conjecture.

Reply to  willhaas
April 13, 2017 2:16 pm

and have hard coded in the concept that adding CO2 to the atmosphere causes warming

They code in the added all the spectrums.

That’s not where they fix it; it’s in
3.3.6 Adjustment of specific humidity to conserve water
http://www.cesm.ucar.edu/models/atm-cam/docs/description/node13.html#SECTION00736000000000000000
This is for the CAM 3 model; NASA’s Model D, and I’m pretty sure Model E, had a similar piece of code. They parameterize the code to make sure they get the evaporation they expect. My understanding of old literature I read is the GCM’s, once this was added went from running cold, to running warm. And they used aerosols to tune the runs down. This worked until about 5 years ago when we got good aerosol data, and the tuning they used was not close to reality.

Reply to  micro6500
April 13, 2017 2:22 pm

They code in the added all the spectrums.

They code to add in the energy from all of the different spectrums. They have to.
But if the models match observations, they have to end up with cooling at night following dew points in the early morning. It’s why the tropics don’t drop much in temp at night, and deserts do.

richardscourtney
Reply to  micro6500
April 14, 2017 1:51 am

micro6500:

You say

My understanding of old literature I read is the GCM’s, once this was added went from running cold, to running warm. And they used aerosols to tune the runs down. This worked until about 5 years ago when we got good aerosol data, and the tuning they used was not close to reality.

Yes, none of the climate models – not one of them – could match the change in mean global temperature over the past century if it did not utilise a unique value of assumed cooling from aerosols. So, inputting actual values of the cooling effect (such as the determination by Penner et al.
http://www.pnas.org/content/early/2011/07/25/1018526108.full.pdf?with-ds=yes )
would make every climate model provide a mismatch between the global warming it hindcasts and the observed global warming for the twentieth century.

This mismatch would occur because all the global climate models and energy balance models are known to provide indications which are based on
1.
the assumed degree of forcing resulting from human activity that produce warming
and
2.
the assumed degree of anthropogenic aerosol cooling input to each model as a ‘fiddle factor’ to obtain agreement between past average global temperature and the model’s indications of average global temperature.

In 1999 I published a peer-reviewed paper that showed the UK’s Hadley Centre general circulation model (GCM) could not model climate and only obtained agreement between past average global temperature and the model’s indications of average global temperature by forcing the agreement with an input of assumed anthropogenic aerosol cooling.

The input of assumed anthropogenic aerosol cooling is needed because the model ‘ran hot’; i.e. it showed an amount and a rate of global warming which was greater than was observed over the twentieth century. This failure of the model was compensated by the input of assumed anthropogenic aerosol cooling.

And my paper demonstrated that the assumption of aerosol effects being responsible for the model’s failure was incorrect.
(ref. Courtney RS An assessment of validation experiments conducted on computer models of global climate using the general circulation model of the UK’s Hadley Centre Energy & Environment, Volume 10, Number 5, pp. 491-502, September 1999).

More recently, in 2007, Kiehl published a paper that assessed 9 GCMs and two energy balance models.
(ref. Kiehl JT,Twentieth century climate model response and climate sensitivity. GRL vol.. 34, L22710, doi:10.1029/2007GL031383, 2007).

Kiehl found the same as my paper except that each model he assessed used a different aerosol ‘fix’ from every other model. This is because they all ‘run hot’ but they each ‘run hot’ to a different degree.

Kiehl says in his paper:

One curious aspect of this result is that it is also well known [Houghton et al., 2001] that the same models that agree in simulating the anomaly in surface air temperature differ significantly in their predicted climate sensitivity. The cited range in climate sensitivity from a wide collection of models is usually 1.5 to 4.5 deg C for a doubling of CO2, where most global climate models used for climate change studies vary by at least a factor of two in equilibrium sensitivity.

The question is: if climate models differ by a factor of 2 to 3 in their climate sensitivity, how can they all simulate the global temperature record with a reasonable degree of accuracy.
Kerr [2007] and S. E. Schwartz et al. (Quantifying climate change–too rosy a picture?, available at http://www.nature.com/reports/climatechange, 2007) recently pointed out the importance of understanding the answer to this question. Indeed, Kerr [2007] referred to the present work and the current paper provides the ‘‘widely circulated analysis’’ referred to by Kerr [2007]. This report investigates the most probable explanation for such an agreement. It uses published results from a wide variety of model simulations to understand this apparent paradox between model climate responses for the 20th century, but diverse climate model sensitivity.

And, importantly, Kiehl’s paper says:

These results explain to a large degree why models with such diverse climate sensitivities can all simulate the global anomaly in surface temperature. The magnitude of applied anthropogenic total forcing compensates for the model sensitivity.


And the “magnitude of applied anthropogenic total forcing” is fixed in each model by the input value of aerosol forcing.

Kiehl’s paper can be read here.

Please note Figure 2 in Kiehl’s paper showing data for 9 GCMs and 2 energy balance models.

It shows that
(a) each model uses a different value for “Total anthropogenic forcing” that is in the range 0.80 W/m^2 to 2.02 W/m^2
but
(b) each model is forced to agree with the rate of past warming by using a different value for “Aerosol forcing” that is in the range -1.42 W/m^2 to -0.60 W/m^2.

In other words the models use values of “Total anthropogenic forcing” that differ by a factor of more than 2.5 and they are ‘adjusted’ by using values of assumed “Aerosol forcing” that differ by a factor of 2.4.

So, each climate model emulates a different climate system. Hence, at most only one of them emulates the climate system of the real Earth because there is only one Earth. And the fact that they each ‘run hot’ unless fiddled by use of a completely arbitrary ‘aerosol cooling’ strongly suggests that none of them emulates the climate system of the real Earth.

Richard
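
To make the compensation described above concrete, here is a minimal sketch in Python. The sensitivities and forcings are purely illustrative values chosen to sit inside the ranges quoted above from Kiehl’s Figure 2; they are not taken from any actual model.

# A toy sketch of the compensation Kiehl (2007) describes: two "models" with a
# factor-of-two difference in climate sensitivity both reproduce roughly the same
# twentieth-century warming because each assumes a different aerosol forcing.

OBSERVED_WARMING = 0.8  # deg C over the twentieth century, approximate

def toy_warming(sensitivity, ghg_forcing, aerosol_forcing):
    """Crude linear response: warming (deg C) = sensitivity (deg C per W/m^2) * net forcing (W/m^2)."""
    return sensitivity * (ghg_forcing + aerosol_forcing)

# "Model A": low sensitivity paired with weak aerosol cooling
model_a = toy_warming(sensitivity=0.4, ghg_forcing=2.6, aerosol_forcing=-0.6)

# "Model B": twice the sensitivity paired with strong aerosol cooling
model_b = toy_warming(sensitivity=0.8, ghg_forcing=2.4, aerosol_forcing=-1.4)

print(model_a, model_b, OBSERVED_WARMING)  # both toy models land near 0.8 deg C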

basicstats
Reply to  micro6500
April 14, 2017 4:08 am

@richardscourtney

Good points. Climate models need to be evaluated ‘out of sample’, that is on data not used to calibrate the model parameters. A model can always be fitted to data, but does it stand up against data not used in the estimation of its parameters? A simple concept which seems lost on people like Trenberth, who proceed straight from fitted model to prediction/projection. They also have the gall to claim that having enough parameters to fit some historical data proves their model ‘right’. A possible reason for not appreciating this issue may be the experimental nature of the physical sciences. A theory/model can often be tested by designed experiment, so the idea of having to make provision to test against existing data is less appreciated. But climate research has limited scope for designed experiments (just a big undesigned one!)
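
As a minimal sketch of the out-of-sample procedure described above (made-up data and a deliberately trivial straight-line “model”; nothing here comes from an actual GCM):

import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1900, 2021)
temps = 0.008 * (years - 1900) + rng.normal(0, 0.1, years.size)  # synthetic anomaly series

calib = years < 1980   # data used to calibrate (fit) the model's parameters
valid = ~calib         # data held back for out-of-sample evaluation

# Fit a straight-line "model" on the calibration period only
slope, intercept = np.polyfit(years[calib], temps[calib], 1)
predicted = slope * years + intercept

in_sample_error = np.sqrt(np.mean((predicted[calib] - temps[calib]) ** 2))
out_sample_error = np.sqrt(np.mean((predicted[valid] - temps[valid]) ** 2))

# A model can always be fitted to its calibration data; the informative test is
# whether out_sample_error stays comparable to in_sample_error.
print(in_sample_error, out_sample_error)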

Reply to  basicstats
April 14, 2017 9:29 am

basicstats:

You say that the models need to be “evaluated” on out-of-sample data. Actually, there is no such sample, because the statistical population underlying each of the climate models does not exist. You may be confusing the idea of a global temperature time series with the idea of a statistical population. The time series exists, but it is not a population.

Kermit Johnson
Reply to  micro6500
April 14, 2017 5:44 am

@basicstats

Evaluating a model on out-of-sample data is, as you say, necessary. I would only add that even if the model works well on this data, it does not mean the model will necessarily continue to work well in real time, nor that all the independent variables are known and accurately represented in the model. Of course, the more data available, the better the chance that the model is a good one, and the less the need for a separate validation set. But isn’t this the one very big problem with climate models – nowhere near enough data compared to their complexity?

Reply to  Kermit Johnson
April 14, 2017 6:53 am

Evaluating a model on out-of-sample data is, as you say, necessary.

But you have to pay attention to what out-of-sample testing is actually being done. For instance, if you’re testing measurements against a theoretical climate field, what exactly do you compare? What processing do you have to apply to your measurements before doing the test? If you apply the same processing to your data that was built into the model, you’re not testing anything.
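
A toy illustration of that circularity, with entirely hypothetical numbers: if the observations are first adjusted using the very model being tested, the comparison can no longer fail and so tests nothing.

import numpy as np

model_output = np.array([0.2, 0.4, 0.6, 0.8])  # what the model says (made up)
raw_obs = np.array([0.1, 0.5, 0.3, 0.9])       # what was measured (made up)

def circular_adjustment(obs, model, weight=0.9):
    # Nudge the observations most of the way toward the model -- the kind of
    # processing step that must not share assumptions with the thing being tested.
    return (1 - weight) * obs + weight * model

adjusted_obs = circular_adjustment(raw_obs, model_output)

print(np.abs(raw_obs - model_output).mean())       # honest mismatch
print(np.abs(adjusted_obs - model_output).mean())  # near-zero "agreement" by construction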

Reply to  Kermit Johnson
April 14, 2017 9:16 am

Kermit Johnson:

There can be no out-of-sample data, but there can also be no in-sample data, as the statistical population is not identified. Climatologists eliminate their need for probability theory and statistics through the unwarranted claim that the equilibrium climate sensitivity (ECS) is a constant.

April 13, 2017 2:41 pm

Humans are NOT changing the climate. Most people making that claim don’t even know what the climate is.

Johann Wundersamer
April 13, 2017 7:38 pm

” It is de ned as the change in global mean surface temperature”

–>

It is defined as the change in global mean surface temperature
_______________________________________

“This assessment re ects improved understanding,”

–>

This assessment reflects improved understanding,

Johann Wundersamer
April 13, 2017 7:52 pm

“The evidence for this is the embarrassingly broad range of estimates for climate sensitivity to a doubling of CO2, and the failure of those estimates to converge.”

Yes – the evidence for this is the embarrassingly broad range of estimates for climate sensitivity to a doubling of CO2 and the failure of those estimates to converge, plus the willingness of climate scientists to ignore differing assessments.

Bob Weber
April 13, 2017 8:17 pm

At the bottom of the quietest solar minimum in 100 years, in ’08/’09, Trenberth demonstrated (in an email of 12 Oct 2009) his complete ignorance of the solar-cycle influence to his mostly ‘solar stupid’ warmist colleagues:

“Hi all
Well I have my own article on where the heck is global warming? We are asking that here in Boulder where we have broken records the past two days for the coldest days on record. We had 4 inches of snow. The high the last 2 days was below 30F and the normal is 69F, and it smashed the previous records for these days by 10F. The low was about 18F and also a record low, well below the previous record low. This is January weather (see the Rockies baseball playoff game was canceled on saturday and then played last night in below freezing weather).”

It was cold because TSI was low and had been very low for over two years by that day. Doh!

That email was sent to all the world famous iconic warmists who we’re all supposed to kowtow to, who clearly live in a bizarre unscientific upside down world where a trace gas ‘nullifies’ solar variability.

The weird thing here is Schneider wrote Trenberth back and said

“On Oct 12, 2009, at 2:32 AM, Stephen H Schneider wrote:

Hi all. Any of you want to explain decadal natural variability and signal to noise and sampling errors to this new “IPCC Lead Author” from the BBC? As we enter an El Nino year and as soon as the sunspots get over their temporary–presumed–vacation worth a few tenths of a Watt per meter squared reduced forcing, there will likely be another dramatic upward spike like 1992-2000.”

He was right then for the same reason I’m right about the same pattern repeating this minimum.

This tells me Schneider deliberately talked out of both sides of his mouth, saying to the public on one hand CO2 was the driver, while knowing, as I discovered myself, that the solar cycle influence controls the temperature series! So the question becomes why lie?

The IPCC’s science was not scientific in the sense that changes to the system’s input power were deemed unimportant and ignored before any adequate evaluation had taken place. In doing so they reversed the age-old null hypothesis that the sun controls the weather and climate, similar to Trenberth’s stance on attribution and the null hypothesis, which came decades after Hansen’s and Schneider’s original false AGW claims.

Johann Wundersamer
April 13, 2017 8:43 pm

Eric, the post shows rendering problems with letter pairs beginning with ‘f’ (the ‘fi’ and ‘fl’ ligatures):

de fi ned
re fl ects

/ not the first time /

Cheers – Hans

April 13, 2017 9:37 pm

It seems to me that a computer program written to show that the climate warms when CO2 increases will yield results showing that the climate warms when CO2 increases. A self-fulfilling prophecy.

The scientific method starts with a hypothesis. Reality is then compared to the hypothesis. If the real-world data do not agree with the hypothesis (in this case, its predictions), then the hypothesis must be rejected. None of the climate computer models match the actual temperature and CO2 levels as measured by satellites and weather balloons.

Further, in proxy reconstructions going back 600 million years, there is no correlation between temperature and CO2.

Schrodinger's Cat
April 14, 2017 1:38 pm

It is claimed that the evidence of human effects on climate has emerged beyond the background noise of natural variability. This is not true. I remember that when I first took notice of this subject, I was shocked to read that climate scientists were saying that the carbon dioxide level was the major influence on our climate.

Intuitively, I realised that this was garbage. History tells of cold periods and warm periods long before human behaviour became a factor. It became clear that the scientists pushing the AGW scare actually knew very little about the natural variability of our climate.

They are not able to attribute causation because, by their own claims, they obviously underestimate natural variability. The temperature hiatus, which was neither anticipated nor explained and is contrary to the dominance of CO2, is the proof of that.

Dave Fair
Reply to  Schrodinger's Cat
April 14, 2017 3:34 pm

Cat, when the hiatus became a problem for IPCC AR6, temperature record providers attempted a rescue. The problems for them?

1. Even the upwards adjustments could not reach as high as the average of IPCC climate models. It looks like the IPCC will have to use “expert” judgment to reduce near-term model “projections” in AR6, just like they did in AR5.

2. AR6 may just have to acknowledge the satellite and weather-balloon results.

3. People learned to ignore the IPCC Summary for Policy Makers (SPM) and looked into the backup data. Behold! The SPM lies. Expect massive circumlocutions in AR6. Good luck to the honest reviewers.

Any person proposing to turn over our lives to the UN SJWs is deluded.

SAMURAI
April 15, 2017 7:38 pm

Under strict adherence to the scientific method, if CAGW’s global warming model mean projections exceed reality by more than 2 standard deviations (a statistically significant margin) for a statistically significant duration (15 years), then the hypothesis can effectively be deemed disconfirmed.

CAGW’s hypothetical global warming projections already greatly exceed this disconfirmation criterion, so the hypothesis has already been disconfirmed with high confidence.

Any further increase in disparity and duration simply increases the confidence level of disconfirmation.

CAGW is already dead.
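
A minimal sketch of the kind of test described above, using synthetic placeholder series rather than real model output or observations; the 2-standard-deviation threshold and the 15-year duration are taken from the comment.

import numpy as np

rng = np.random.default_rng(1)
n_years, n_models = 30, 102
ensemble = 0.03 * np.arange(n_years)[:, None] + rng.normal(0, 0.1, (n_years, n_models))
observations = 0.01 * np.arange(n_years) + rng.normal(0, 0.1, n_years)

ens_mean = ensemble.mean(axis=1)
ens_sigma = ensemble.std(axis=1)

# Years in which observations fall more than 2 standard deviations below the ensemble mean
outside = observations < (ens_mean - 2 * ens_sigma)

def longest_run(flags):
    best = run = 0
    for flag in flags:
        run = run + 1 if flag else 0
        best = max(best, run)
    return best

# The stated criterion: the excursion must be sustained for 15 years
print("Criterion met:", longest_run(outside) >= 15)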

Chimp
Reply to  SAMURAI
April 15, 2017 7:45 pm

The hypothesis was born falsified.

The Druids of the cult try to get around your strictures by using absurdly wide error bars, rather than the average of their ludicrous projections. The margins of error keep getting increased, such that whatever happens can be called “expected”.

Reply to  Chimp
April 15, 2017 8:58 pm

Chimp

A prediction is what has a degree of statistical significance; a projection does not. An IPCC climate model makes projections, not predictions.

SAMURAI
Reply to  Chimp
April 15, 2017 10:27 pm

Terry– You’re missing the point.

The disparity and duration refers to the 102-model mean projections, which are now hilariously devoid of reality.

If the hypothetical mean projections are devoid of reality, then the hypothesis upon which these models are based is devoid of reality…

Sure, if from tomorrow the global warming trend were suddenly to run at 0.3C/decade and miraculously continue at that rate for the next 15 years, the CAGW hypothesis would still be plausible; but failing that highly unlikely event, CAGW is already dead.

CAGW suddenly is facing some stiff physical realities:

1) CO2 forcing is a logarithmic function, meaning that each incremental CO2 increase has less and less of a warming effect (see the short worked example after this comment).

2) Both the PDO and AMO will be in their respective 30-year cool cycles from 2019 and global temp trends have always fallen when this phenomenon occurs.

3) The weakest solar cycle since 1790 starts from 2021, and the one after that (from 2032) will likely be the weakest since 1645.

There is a high probability these weak solar cycles will cause global cooling, although it’s not known for certain. We’ll see soon enough.

Cheers, mate.
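
For point 1 in the list above, a short worked example using the widely cited simplified forcing expression dF = 5.35 ln(C/C0) W/m^2 (Myhre et al., 1998); the concentrations are illustrative.

import math

def co2_forcing(c_new_ppm, c_old_ppm):
    """Change in radiative forcing (W/m^2) for a CO2 change from c_old to c_new."""
    return 5.35 * math.log(c_new_ppm / c_old_ppm)

print(co2_forcing(560, 280))  # a full doubling from 280 ppm: about 3.7 W/m^2
print(co2_forcing(300, 280))  # adding 20 ppm at 280 ppm: about 0.37 W/m^2
print(co2_forcing(420, 400))  # adding the same 20 ppm at 400 ppm: about 0.26 W/m^2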

Reply to  SAMURAI
April 16, 2017 10:04 am

Samurai:

The issue seems to be whether or not CAGW is falsified by the evidence. In dealing with this issue from a logical perspective, one is faced with the anomaly that while a “prediction” is a kind of inference, a “projection” is not. One consequence is that a “prediction” has a probability of being true while a “projection” does not.

For a “prediction”, a probability of 0 for the specified inference signifies that this inference is false; but as a “projection” lacks a probability, it is impossible for it to be falsified by the evidence.

A hint to the IPCC’s purpose in setting up this anomaly is provided in the opening pages of AR4, the Report of Working Group 1. There the IPCC asserts that in the modern era, falsifiability is replaced by peer review. Were this assertion true, it would bestow upon climatologists the priestly power to determine whether an inference is true or false without reference to instrument readings. This is the position that was taken by the Church in its conflict with Galileo.

April 16, 2017 10:23 pm

Michael darby
re: your post of April 16 at 4:38 PM
Your argument is similar to one that is made by the IPCC in its various assessment reports. This argument is debunked by arguments made by Vincent Gray (“Spinning the climate”) and myself ( http://wmbriggs.com/post/7923/).