This is How Climate Works – Part 2

Guest essay by Mike Jonas

Continued from Part 1.

3. The Models can never work

In an earlier post, Inside the Climate Computer Models, I explained how the climate computer models, as currently structured, could never work.

Put simply, they are not climate models, they are weather models, because they operate on small(ish) pieces of ocean or atmosphere over very short periods (sometimes as little as 20 minutes). By definition, conditions in a small place over a short time are weather.


Figure 2.1. Ocean and atmosphere subdivision in the models. From the IPCC here.

But because their resolution has been coarsened so that they can be run over long periods (a century or so), they are less accurate than the weather models used by the world’s weather bureaus. Those weather models can forecast conditions no more than a few days ahead; the climate models become inaccurate even sooner. All output from all climate computer models – as currently structured – is a work of fiction.

Confirmation of all of the above was recently provided by the (US) National Center for Atmospheric Research (NCAR). They performed 40 climate model runs covering the 180-year period 1920 to 2100. All of the runs were absolutely identical except that “With each simulation, the scientists modified the model’s starting conditions ever so slightly by adjusting the global atmospheric temperature by less than one-trillionth of one degree.”

The results from the 40 runs were staggeringly different. Predicted temperature changes for North America over the 1963-2012 period were shown, and they differ from each other by several degrees C over large areas. They even disagree on whether areas get warmer or cooler.


Figure 2.2. Winter temperature trends for six of the NCAR runs (runs 25-30), the ensemble mean (EM), and observations (OBS). Scale is deg C.

Think about it. By changing the model’s initial global temperature by a trillionth of a degree – ridiculously far below the accuracy to which the global temperature can be measured – and without any other changes, the model produced results for major regions that varied by several degrees C. The world’s weather stations will surely never be able to measure global temperature to anything like as small a margin as 0.000000000001 deg C, yet that one microscopic change alone causes a model’s results to change by several times as much as the whole of the 20th century global temperature change. And, of course, there are many other equally important parameters that cannot be established to anything like that kind of accuracy.
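That behaviour is the classic signature of a chaotic system. As a minimal illustration – a sketch using the standard Lorenz (1963) equations, not anything taken from an actual GCM – two runs whose starting points differ by one part in 10^12 soon bear no resemblance to each other:

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz 1963 system by one Euler step."""
    x, y, z = state
    return state + dt * np.array([sigma * (y - x),
                                  x * (rho - z) - y,
                                  x * y - beta * z])

a = np.array([1.0, 1.0, 1.0])          # reference run
b = np.array([1.0 + 1e-12, 1.0, 1.0])  # perturbed by 10^-12 in x

for step in range(1, 5001):
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 1000 == 0:
        print(f"t = {step * 0.01:4.0f}  separation = {np.linalg.norm(a - b):.3e}")

# The separation grows roughly exponentially; within a few tens of time
# units the two trajectories are unrelated, and the initial information
# is unrecoverable.
```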

This NCAR report shows unequivocally that the climate models in their current form can never predict future climate.

4. The Tuning Disaster

4.1 How the Models were Tuned


In another earlier post, How reliable are the climate models?, I explained how the way in which the climate computer models were tuned led to major roles being assigned to the elements of climate that were least understood. Basically, when there was a discrepancy between observation and model results – and there were plenty of those – they fiddled the parameters till they got a match. [Yes, really!] Anything that was well-understood couldn’t be fiddled with, so things they didn’t understand were used to fill the gaps.

The major problem that they had was that Earth’s climate had warmed much more over the ‘man-made CO2’ period (about 1950 onwards) than could be explained by CO2 alone, as shown in 2.5 (in Part 1). They manipulated the parameters for water vapour and clouds, without checking the physical realities, until they could match 20th-century temperatures. Both factors were portrayed as feedbacks to CO2. Bingo! The models showed all the late 20th-century temperature rise as being caused by CO2.
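To make the tuning complaint concrete, here is a deliberately toy sketch (made-up numbers, not code from any real model): the “observed” record contains warming from both a forcing and an unmodelled natural cycle, and a single free feedback-style parameter is fitted to it. The fit silently folds the natural warming into the forced parameter:

```python
import numpy as np

years = np.arange(1900, 2001)
co2 = 280.0 * 1.002 ** (years - 1900)              # toy CO2 history, ppm
forcing = 5.35 * np.log(co2 / 280.0)               # W/m^2 (Myhre et al. form)
natural = 0.3 * np.sin(2 * np.pi * (years - 1900) / 60.0)  # toy 60-year cycle

observed = 0.4 * forcing + natural                 # 'true' world: two causes

# Tuning step: least-squares fit of a single sensitivity parameter lam
# so that lam * forcing best matches the observations.
lam = np.dot(forcing, observed) / np.dot(forcing, forcing)
print(f"fitted parameter: {lam:.3f} K per W/m^2 (true forced value: 0.400)")
# The fit comes out above 0.4: part of the natural cycle has been
# attributed to the forcing, simply because the free parameter could
# absorb it.
```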

The modellers assumed that in the longer term cloud cover didn’t change naturally but changed only in reaction to – you guessed it – warming by CO2. So cloud cover as a natural process never participated in the model tuning, and all the warming of the ocean ended up being attributed to CO2. The only way that could happen was by the atmosphere warming the ocean.

No wonder that everything has gone pear-shaped since then.

4.2 Water Vapour Feedback

When the ocean warms – for any reason – there is more evaporation; about 7% more per degree C. Water vapour is a GHG, so that leads to more warming. That is all in the models, and it’s OK (apart from the reason for ocean warming).

But the models only allow for 2-3% more precipitation. In 2.1.1 (in Part 1) I cited evidence that precipitation also increased at the higher rate. The fifth IPCC report also virtually admitted a higher rate: “the global mean surface downward longwave flux is about 10 W m–2 larger than the average in climate models, probably due to insufficient model-simulated cloud cover or lower tropospheric moisture []. This is consistent with a global-mean precipitation rate in the real world somewhat larger than current observational estimates.” The model tuning process has therefore assigned more warming to water vapour feedback than it should have (the water cycle is part of the water vapour feedback).

Figure 1.2 (in Part 1) shows 78 Wm-2 of latent heat transfer from ocean to atmosphere. Much of that process occurs in the tropics, where the latent heat is transferred to the tops of clouds by tropical storms: the warm moist air is convected up until it is cooled enough for the water vapour to condense, releasing the latent heat and forming clouds; the condensed water then falls as precipitation. So the latent heat is released in the cloud-tops. 4% (the difference between the full C-C 7% and the models’ 3%) of 78 Wm-2 is 3.1 Wm-2. When energy is released in the cloud-tops, nearly all of it will radiate upwards or be reflected upwards, so nearly all of it is lost to space.
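As a cross-check of that arithmetic: the ~7% figure follows from the Clausius-Clapeyron relation with standard values (L ≈ 2.5×10^6 J/kg, R_v ≈ 461 J/(kg K), T ≈ 288 K):

$$\frac{d\ln e_s}{dT} = \frac{L}{R_v T^2} \approx \frac{2.5\times10^6}{461\times288^2} \approx 0.065\ \text{K}^{-1}\quad(\approx 7\%\ \text{per deg C}),$$

$$(7\% - 3\%) \times 78\ \text{W m}^{-2} \approx 3.1\ \text{W m}^{-2}.$$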

From the water cycle alone, water vapour feedback is therefore overestimated in the models by something like 3 Wm-2. And it all comes from the way the models are tuned.

This is very significant: downward IR from a doubling of atmospheric CO2 is put at 3.7 Wm-2.
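That 3.7 Wm-2 is the standard logarithmic approximation for CO2 forcing (the Myhre et al. formula, also quoted in the comments below) evaluated at a doubling:

$$\Delta F = 5.35\,\ln\frac{C}{C_0}\ \text{W m}^{-2}, \qquad \Delta F_{2\times\text{CO}_2} = 5.35\ln 2 \approx 3.7\ \text{W m}^{-2}.$$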

4.3 Cloud Feedback

The IPCC assign even more positive feedback to clouds than they do to water vapour. They even assign more warming to cloud feedback than they assign to CO2 itself. None of it comes from physics, it comes only from the model tuning process where they still needed a lot more warming from CO2 to match the observed global warming.

As illustrated in Figure 1.1 (in Part 1), and as described in Richard Lindzen’s “Iris” hypothesis, cloud feedback to global warming is likely to be negative. [Figure 1.1 is actually empirical confirmation of the “Iris” hypothesis].

It is difficult to overstate the stupidity of tuning climate models without checking the underlying physics, or at least acknowledging the huge uncertainties. To continue to treat models’ output as reliable predictions of future climate, despite the multiple lines of contrary evidence presented, is surely hubris of the first order. It is certainly unscientific.

5. The Non-Linear Climate

At all times, it is necessary to bear in mind that Earth’s climate is a non-linear system.

This does make it rather difficult to unravel, because we are all much more used to linear thinking.

It means that any search for a correlation and any extrapolation of any data is even more dodgy than usual: a pattern which is clearly visible today might disappear in future. On finding the GCR-cloud connection, Laken et al (2010) comment: “However, [two other studies] may be inherently flawed, as they assume a first-order relationship (i.e. presuming that cloud changes consistently accompany GCR changes), when instead, a second-order relationship may be more likely (i.e. that cloud changes only occur with GCR changes if atmospheric conditions are suitable).”

As the IPCC itself said (AR4 WG1): “we should recognise that we are dealing with a coupled nonlinear chaotic system, and therefore that the long-term prediction of future climate states is not possible.” I suggest Kip Hansen’s article on WUWT for further reading.

Continued in Part 3.


Mike Jonas (MA Maths Oxford UK) retired some years ago after nearly 40 years in I.T.


Abbreviations

AMO – Atlantic Multidecadal Oscillation

APS – American Physical Society

AR4 – Fourth IPCC Report

AR5 – Fifth IPCC Report

C – Centigrade or Celsius

C-C – Clausius-Clapeyron relation

CAGW – Catastrophic Anthropogenic Global Warming

CO2 – Carbon Dioxide

ENSO – El Niño Southern Oscillation

EUV – Extreme Ultra-Violet

GCR – Galactic Cosmic Ray

GHG – Greenhouse gas

IPCC – Intergovernmental Panel on Climate Change

IR – Infra-Red

ISCCP – International Satellite Cloud Climatology Project

ITO – Into The Ocean [Band of Wavelengths approx 200nm to 1000nm]

NCAR – (US) National Center for Atmospheric Research

nm – Nanometre

PDO – Pacific Decadal Oscillation

ppm – Parts Per Million

SCO – the Sun-Cloud-Ocean hypothesis

SW – Short Wave

THC – Thermohaline Circulation

TSI – Total Solar Irradiance

UAH – The University of Alabama in Huntsville

UV – Ultra-Violet

W/m2 or Wm-2 – Watts per Square Metre

Comments
co2islife
January 29, 2017 10:37 am

The models will never work; they assume a linear relationship between CO2 and temperature.
Exhibit N: The relationship between CO2 and Temperature simply isn’t linear
https://co2islife.wordpress.com/2017/01/17/climate-science-on-trial-the-forensic-files-exhibit-n/

Leo Smith
Reply to  co2islife
January 29, 2017 11:13 am

No, they assume a logarithmic one, degrees per doubling being the scale…if you are gonna be a skeptic, at least learn the science.
And what a ‘linear differential equation’ is as opposed to a non linear one, which is summat else agin…

co2islife
Reply to  Leo Smith
January 29, 2017 11:20 am

Simply look at the results of the models and the formulas contained in those models, they are linear. Also, the data “adjustments” are all made to make the temperature increase more linear. If they understood that it was a logarithmic relationship, they wouldn’t be making linear adjustments. Linear temperatures disprove CO2 is the cause. Either way, CO2 doesn’t impact the lower troposphere anyways, so the focus on ground measurements and CO2 is nonsense anyways. Just play around with MODTRAN. CO2 doesn’t impact the lower troposphere, no matter how much CO2 you put in it. Double, triple, etc.

Mindert Eiting
Reply to  Leo Smith
January 30, 2017 2:44 am

Doubling of what? You can double the strength of an earthquake. Here we are doubling a relative amount. Shall we do it? The relative amount of rich people in my city equals .4. If we double that, we get .8 and if we double that we get 1.6. After a few doublings we have more rich people than there are inhabitants. Doubling of relative amounts (percentages or parts per million) is absurd.

co2islife
January 29, 2017 10:38 am

There is no way to make a linear relationship out of this. It simply isn’t linear.
https://co2islife.wordpress.com/2017/01/17/climate-science-on-trial-the-forensic-files-exhibit-n/

MarkW
Reply to  co2islife
January 30, 2017 9:58 am

Over any sufficiently small interval, any curve can be approximated as a straight line.
If you are old enough to remember log tables, you will remember how values were extrapolated from these tables.

co2islife
Reply to  MarkW
January 30, 2017 10:03 am

Yea, I know that. Take a look at MODTRAN, plug in your numbers and you will see, even over the small interval CO2 just doesn’t cut it. Also, 300 to 400 ppm is a pretty large change, yet much of the time of that increase, temperatures were flat.

Reply to  MarkW
January 30, 2017 3:03 pm

“Over any sufficiently small interval, any curve can be approximated as a straight line.”
This is the IPCC’s assumption to obfuscate the 1/T^3 dependency of the sensitivity on temperature and the T^4 relationship between temperature and emissions. The IPCC invokes non linearity to make the system seem more complicated than it is and then invokes approximate linearity to support their absurdly high sensitivity.
co2islife:
Regarding the non linear CO2 influence, this non-linearity is irrelevant to the linear operation of the system relative to input forcing power and the surface emissions. Yes, the effect of incremental CO2 is non linear, but for the purpose of establishing the sensitivity to forcing, it can be considered constant. The sensitivity to doubling CO2 is far from independent of the starting concentration which is not made clear in IPCC reports. And again, CO2 concentrations are not a forcing, but represent a change to the system and can only be said to be EQUIVALENT to some amount of incremental solar forcing while keeping the system constant.
A simple test to determine whether or not something is ‘forcing’, is to consider what effect it would have if the Sun stopped shining. If the answer is none, then it’s not forcing, but part of the response to forcing.

lorcanbonda
Reply to  co2islife
January 30, 2017 5:28 pm

co2islife
— you need to understand nobody considers the relationship linear. nobody. The IPCC does not consider it linear and the models do not consider it linear. You are presenting evidence which criticizes nobody’s interpretation of the environment. Worse, you think you are making an intelligent point.
Even on the part 1 of this dissertation, Anthony writes —

The direct effect of CO2 – “Climate Sensitivity” – is generally agreed to be about 1 to 1.2 deg C per doubling of CO2. Some studies observe much lower climate sensitivity (eg. here), but to be on the safe side I will use 1.2.

That is a logarithmic response which looks sort of like your graph (but in degrees C, rather than W/m2).
If you plot CO2 on a logarithmic scale of that graph, you will see that it suddenly looks like a line — which is all anybody is saying.
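[A quick numerical check, using the ~1.2 deg C per doubling figure quoted above – equal increments per doubling are a straight line against log(CO2):]

```python
import math

# Direct-effect response quoted above: ~1.2 deg C per doubling of CO2.
for c in (280, 560, 1120, 2240):
    dT = 1.2 * math.log2(c / 280)
    print(f"{c:5d} ppm -> {dT:.1f} deg C")
# Each doubling adds the same 1.2 deg C, so plotted against log(CO2)
# the response is exactly a straight line.
```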

co2islife
Reply to  lorcanbonda
January 30, 2017 6:14 pm

Anyone with a 2nd grade education in econometrics can identify the problems with IPCC models, and why the data is being adjusted the way that they are.
If the models aren’t linear, they sure do act like it. Either way, the prevailing models don’t accurately reflect reality.

co2islife
January 29, 2017 10:39 am

The linear models are simply a joke.
Climate “Science” on Trial; The Forensic Files: Exhibit C
https://co2islife.wordpress.com/2017/01/16/climate-science-on-trial-the-forensic-files-exhibit-c/

RockyRoad
Reply to  co2islife
January 29, 2017 10:02 pm

If they utilized a logarithmic curve, it would destroy their meme (and probably their models). ’nuff said.

co2islife
Reply to  RockyRoad
January 30, 2017 3:30 am

Yep, if they truly model CO2 and temperature, their models will have a greatly lowered sensitivity to CO2, and that just won’t work with their political agenda.

Reply to  RockyRoad
January 30, 2017 7:11 am

It isn’t even that; it’s like trying to ascribe a single resistance to a transistor while it’s actually changing daily, and it doesn’t change the same every day – it changes based on conditions, which change every day.

January 29, 2017 10:46 am

The runaway Positive Feedback Loop conjecture was never established either…

Sunsettommy
January 29, 2017 10:49 am

Hot spot, what HOT SPOT?

co2islife
Reply to  Sunsettommy
January 29, 2017 10:54 am

There is no Hot Spot.
Smoking Gun #19: The Equatorial Upper Tropospheric “Hot Spot” simply doesn’t exist.
https://co2islife.wordpress.com/2017/01/17/climate-science-on-trial-the-forensic-files-exhibit-s/

R. Granados
Reply to  co2islife
January 29, 2017 4:14 pm

Aw damn. Again?! When the theory turns out wrong, the scientists who discover it, are the DE’vUL.
That’s the government employee way of looking at it.

co2islife
Reply to  R. Granados
January 30, 2017 3:29 am

My problem is, all those Government Employees keep their jobs for being so wrong. What incentive is there for them to do things right?

Jeffrey Pitts
January 29, 2017 11:05 am

Does anyone have a link to the video of a scientist’s presentation of what is wrong with GCMs? I can’t remember the guy’s name. The video was about an hour in length and he basically creates his own model with a single equation during the presentation. It was posted on WUWT in the autumn if I remember correctly.

co2islife
Reply to  Jeffrey Pitts
January 29, 2017 11:25 am

This article has a link to the video about half way down. The link mentions the linear relationship.
Exhibit N: The relationship between CO2 and Temperature simply isn’t linear
https://co2islife.wordpress.com/2017/01/17/climate-science-on-trial-the-forensic-files-exhibit-n/

axisbbq
Reply to  co2islife
January 29, 2017 3:25 pm

That’s it. Many thanks.

co2islife
Reply to  axisbbq
January 30, 2017 3:28 am

My pleasure, hope it helps.

Reply to  Jeffrey Pitts
January 29, 2017 1:07 pm

Christopher Essex has a pretty good one: “Believing Six Impossible Things Before Breakfast”. https://www.youtube.com/watch?v=19q1i-wAUpY

DMA
Reply to  Jeffrey Pitts
January 29, 2017 2:47 pm

I think you might be referring to the one by Pat Frank (https://wattsupwiththat.com/2016/11/22/the-needle-in-the-haystack-pat-franks-devastating-expose-of-climate-model-error/).
If you haven’t seen it yet you should take a look. It is the first time I saw an analysis of error propagation in the models and it was reviewed by really well qualified folks.
He ends with a conclusion that GCMs reveal nothing about human CO2 effects on climate and nothing about future climate conditions.

Reply to  DMA
January 29, 2017 3:27 pm

That is the one I was referring to. Thanks!

Neillusion
January 29, 2017 11:14 am

How do they account for the fact that temperature rises before CO2? This alone should invalidate models that try to say otherwise. Billions of Dollars/Euros/pounds wasted, absolutely wasted, totally and utterly given away for nothing, nothing in return. This must be the biggest scam of all time, with the most people ever involved, the most money ever involved and the most utterly ridiculous science divide ever. never was so much lied about by so many. Biggest FRAUD in the history of civilization.

markstoval
January 29, 2017 11:38 am

“The Models can never work”
The models made by corrupt “scientists”, who intend to give exactly what their paymasters demand, will never work. You can take that to the bank.
Models based on the correct physics and laws of thermodynamics that are done honestly and use honest historical data have a chance of being correct enough to be useful. Even so, the best models based on correct thermodynamics would be a large challenge to make “work”. (and it is all in the definition of what we mean when we say the model “works”)
Since the CO2 delusion is at the base of the ideas of the alarmists and the luke-warmer skeptics as well, I doubt we will see any models “work” in this century. Perhaps never.

Reply to  markstoval
January 29, 2017 11:54 am

MS, actually probably not. Using physics to resolve things like convection cells is computationally intractable by 6-7 orders of magnitude. So models must be parameterized. For CMIP5, the ‘experimental design’ required a hindcast from ye2005 back 30 years to 1975. Parameter tuning to best hindcast drags in the attribution problem, as this part two points out. So the ensemble inevitably runs hot.

Reply to  markstoval
January 29, 2017 1:14 pm

“Put simply, they are not climate models, they are weather models, because they operate on small(ish) pieces of ocean or atmosphere over very short periods (sometimes as little as 20 minutes).” There is that, but physics is more important (even though it can’t work either). The smaller processes involved make the idea of a chain calculation as used ridiculous. Vortices in water, an important part of understanding the mixing and heat flow, dissipate at the millimeter level. Atmospherics work down to the molecular level. To actually model what is going on would require something like 6-10 orders of magnitude. Using anything close to current technology, one physicist estimated it would take a computer using all the matter in the galaxy, or maybe the universe.

Greg
Reply to  philohippous
January 29, 2017 2:54 pm

Model tuning to a 30y period in a system known to have a significant 60y periodicity can only be deliberate malfeasance.

asybot
Reply to  markstoval
January 29, 2017 9:19 pm

markstoval January 29, 2017 at 11:38 am
“The Models can never work”
The models made by corrupt “scientists”, who intend to give exactly what their paymasters demand, will never work. You can take that to the bank.
And so they did.

R. Shearer
January 29, 2017 11:38 am

GIGO is fundamental.

escavalon
January 29, 2017 11:45 am

A minor correction, the caption on figure 2 should indicate that it is the winter trends for the six runs (runs 25-30), the ensemble mean (EM), and the observed (OBS) values.

Editor
Reply to  escavalon
January 30, 2017 11:19 pm

Thx.

January 29, 2017 12:06 pm

Dr. Watts,
I apologize for being out a while. I had a summer run of heat exhaustion and many side effects through Christmas. Could hardly read a newspaper. Now, popping letters to the editor again.
By the way, my sister-in-law discovered a chunk of white quartz in western Virginia. It took about three weeks to figure out I had in my hand one-eighth of a quartz sphere. Thus, one more clue that at least western Virginia down I-75 to Stone Mountain was a volcanic area at one time.
As for this reading, I will have to read the other parts to gain an understanding… Modeling seems to be based on, anymore, one’s philosophy and/or politics. I abhor the IPCC. I hold them responsible for destroying and stopping science for the benefit of trying to rob the US Treasury on behalf of India. I believe the new US administration is going to reopen the doors of science and sweep out a lot of trash.
I tend to anger people when I simply try to explain that CO2 is heavier than most of our atmosphere and falls to the ground or water. They get so angry. One claimed the air is 75% CO2, Monoxide and Methane. I simply answered, 3% of 1% is CO2. As for the other 2, I am weary of doing alarmists’ research.
If the air is as full of CO2 as alarmists believe and say, we would be suffocating.
I found a few books that are a good read. One is a non-peer-reviewed study of how the Aswan Dam affects the Mediterranean Sea and the Atlantic Ocean by Robert G. Johnson, “Secrets of the Ice Ages.” This is the only argument that successfully explains man’s role in climate change.
“The Mediterranean Was a Desert” by Kenneth J. Hsü. The story of the voyage of the Glomar Challenger.
“Through Space and Time” by Sir James Jeans, also known as the Christmas Lectures at the Royal Institution.
“The Ice Finders” by Edmund Blair Bolles, and “Bretz’s Flood” by John Soennichsen.
Most Sincerely
Paul Pierett

H. D. Hoese
Reply to  Paul Pierett
January 29, 2017 5:08 pm

The Nile dam reportedly knocked out the Israeli Sardine fishery.
Oren, O. H. and B. Komarovsky. 1961. The influence of the Nile flood on the shore water of Israel. Ext. Rapp. Proc.-ver. réun C.I.E.S.M.M. 16(3):655-659. (Paper buried cannot remember complete citation); Aleem, A. A. 1972. Effect of river outflow management on marine life. Marine Biology. 15(3):200-208.
The same thing apparently is happening in Cuba, among other places with less obvious effects from loss of freshwater and nutrients. Baisre, J. A. and Z. Arboleya. 2006. Going against the flow: Effects of river damming in Cuban fisheries. Fisheries Research. 81:283-292.

Phil R
Reply to  Paul Pierett
January 30, 2017 6:17 am

Paul Pierett,
As a lifelong resident of Virginia I think I can fairly confidently confirm that I-75 does not run through Virginia. Are you thinking Georgia?

Nick Stokes
January 29, 2017 12:20 pm

“They manipulated the parameters for water vapour and clouds, without checking the physical realities, until they could match 20th-century temperatures.”
You offer no evidence for that, and your description of tuning is a ridiculous caricature.
“The model tuning process has therefore assigned more warming to water vapour feedback than it should have.”
None of this is true. Models do not deal with global feedbacks.
“And it all comes from the way the models are tuned.”
You offer no evidence of this. It is completely wrong.

Editor
Reply to  Nick Stokes
January 29, 2017 12:55 pm

Nick Stokes – I present the evidence in the referenced post, https://wattsupwiththat.com/2015/09/17/how-reliable-are-the-climate-models/:-

The fourth IPCC report [para 9.1.3] says : “Results from forward calculations are used for formal detection and attribution analyses. In such studies, a climate model is used to calculate response patterns (‘fingerprints’) for individual forcings or sets of forcings, which are then combined linearly to provide the best fit to the observations.”.

[my bold]
Of course the models deal with the feedbacks. It says so in the IPCC report:

Using feedback parameters from Figure 8.14, it can be estimated that in the presence of water vapour, lapse rate and surface albedo feedbacks, but in the absence of cloud feedbacks, current GCMs would predict a climate sensitivity (±1 standard deviation) of roughly 1.9°C ± 0.15°C (ignoring spread from radiative forcing differences). The mean and standard deviation of climate sensitivity estimates derived from current GCMs are larger (3.2°C ± 0.7°C) essentially because the GCMs all predict a positive cloud feedback (Figure 8.14) but strongly disagree on its magnitude.

AR4 8.6.2.3. [my bold]

Nick Stokes
Reply to  Mike Jonas
January 29, 2017 1:12 pm

Mike,
Your first quote does not refer to tuning. It refers to forming a later combination of results from GCM runs. “Results from forward calculations”. That has nothing to do with how models work.
And neither does the second. It says you can deduce something about cloud feedback from models results. That doesn’t mean that models deal with feedback, any more than firms’ accountants deal with GDP, even though GDP estimates are based in part on what they calculate. GCMs can’t deal with global relations like feedback. They deal with local relations between elements.

Reply to  Mike Jonas
January 29, 2017 1:17 pm

The IPCC’s model is very simple. Any forcings are transformed into temperatures by multiplying the Radiative Forcing (RF) values by the Climate Sensitivity Parameter (CSP). For calculating transient CS, CSP = 0.5, and for ECS, CSP = 1.0. In calculating the RF of CO2, the IPCC and all GCMs use the Myhre et al. formula RF = 5.35 * ln(CO2/280). So there are logarithmic and linear formulas but do not mix them.
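[A minimal sketch of the simple relation described above; the 400 ppm input is an illustrative assumption:]

```python
import math

def ipcc_simple_dT(co2_ppm, csp):
    """dT = CSP * RF, with RF = 5.35 * ln(CO2/280) (Myhre et al.)."""
    rf = 5.35 * math.log(co2_ppm / 280.0)  # radiative forcing, W/m^2
    return csp * rf                        # temperature change, deg C

print(f"transient   (CSP=0.5): {ipcc_simple_dT(400.0, 0.5):.2f} deg C")
print(f"equilibrium (CSP=1.0): {ipcc_simple_dT(400.0, 1.0):.2f} deg C")
# -> roughly 0.95 and 1.91 deg C for 400 ppm
```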

Nick Stokes
Reply to  Mike Jonas
January 29, 2017 1:30 pm

“The IPCC’s model is very simple.”
I don’t know where you can find such a “model”, but what you describe is not a GCM.

Editor
Reply to  Mike Jonas
January 29, 2017 2:59 pm

Nick – So they do this linear combination and get the best fit to observation, and then … throw it away?
And the models have all this radiation and clouds and stuff interacting, according to all sorts of parameters, so that the modellers can see how it all behaves, and that’s not dealing with cloud feedback? What’s in the models is cloud feedback (well, their version of it). When you say “you can deduce something about cloud feedback from models results” what you are saying is simply that you look at the model result to see what its cloud feedback adds up to.

Nick Stokes
Reply to  Mike Jonas
January 29, 2017 3:11 pm

Mike,
“and then … throw it away?”
No. Your quote says what they do with it
“Results from forward calculations are used for formal detection and attribution analyses.”
“what you are saying is simply that you look at the model result to see what its cloud feedback adds up to”
I doubt that. I suspect you have to look analytically at quite a lot of model runs.
“that’s not dealing with cloud feedback?”
It’s not. GCMs are models. You build in mechanisms and then see what they do in various circumstances. In the latter, you find out things that you did not create by design in the model. That is what they are useful for.

lee
Reply to  Mike Jonas
January 29, 2017 8:43 pm

Nick, of course we could look at AR5 chapter 9, table 9.5 and see what GCMs do for water vapour feedback. Some use it, some don’t. Those that use it vary from -0.4 to +1.2.

lee
Reply to  Mike Jonas
January 29, 2017 9:55 pm

Nick, Sorry that’s clouds. However some do and some don’t use water vapour also.

Reply to  Mike Jonas
January 30, 2017 11:34 am

To Nick Stokes. The IPCC’s model is simply this: dT = CSP * RF. All the forcings can be transformed into temperatures by multiplying by the climate sensitivity parameter CSP. This model can be found in every Assessment Report and especially in AR4. The IPCC has never changed this presentation. GCMs are a totally different story. They do not calculate the CO2 warming effect by spectral calculations; they use simplified presentations originating from Myhre’s formula.

JohnKnight
Reply to  Nick Stokes
January 29, 2017 2:03 pm

Yo, Nick,
I don’t want to rag on you too much, but much of what you say here is blatant hypocrisy in my eyes.
“They manipulated the parameters for water vapour and clouds, without checking the physical realities, until they could match 20th-century temperatures.”
“You offer no evidence for that, and your description of tuning is a ridiculous caricature.”
You either offer (right then, in plain English) the reason(s) you feel his description is erroneous, or I just hear some guy bitching, about some other guy bitching, in a display of blatant hypocrisy. I don’t see an expert dressing down a non-expert, if that’s what you intended.

Reply to  JohnKnight
January 29, 2017 2:17 pm

COMMENTS FROM DR. D V HOYT IN 2006 RE CLIMATE MODEL TUNING, TO FALSE-FORCE THE MODELS TO HINDCAST THE GLOBAL COOLING THAT OCCURRED FROM ~1940 TO ~1975:
We’ve known the warmists’ climate models were false alarmist nonsense for a long time.
As I wrote (above) in 2006:
“I suspect that both the climate computer models and the input assumptions are not only inadequate, but in some cases key data is completely fabricated – for example, the alleged aerosol data that forces models to show cooling from ~1940 to ~1975…. …the modelers simply invented data to force their models to history-match; then they claimed that their models actually reproduced past climate change quite well; and then they claimed they could therefore understand climate systems well enough to confidently predict future catastrophic warming?”,
http://wattsupwiththat.com/2009/06/27/new-paper-global-dimming-and-brightening-a-review/#comment-151040
Allan MacRae (03:23:07) 28/06/2009 [excerpt]
Repeating Hoyt : “In none of these studies were any long-term trends found in aerosols, although volcanic events show up quite clearly.”
___________________________
Here is an email received from Douglas Hoyt [in 2009 – my comments in square brackets]:
It [aerosol numbers used in climate models] comes from the modelling work of Charlson where total aerosol optical depth is modeled as being proportional to industrial activity.
[For example, the 1992 paper in Science by Charlson, Hansen et al]
http://www.sciencemag.org/cgi/content/abstract/255/5043/423
or [the 2000 letter report to James Baker from Hansen and Ramaswamy]
http://74.125.95.132/search?q=cache:DjVCJ3s0PeYJ:www-nacip.ucsd.edu/Ltr-Baker.pdf+%22aerosol+optical+depth%22+time+dependence&cd=4&hl=en&ct=clnk&gl=us
where it says [para 2 of covering letter] “aerosols are not measured with an accuracy that allows determination of even the sign of annual or decadal trends of aerosol climate forcing.”
Let’s turn the question on its head and ask to see the raw measurements of atmospheric transmission that support Charlson.
Hint: There aren’t any, as the statement from the workshop above confirms.
__________________________
IN SUMMARY
There are actual measurements by Hoyt and others that show NO trends in atmospheric aerosols, but volcanic events are clearly evident.
So Charlson, Hansen et al ignored these inconvenient aerosol measurements and “cooked up” (fabricated) aerosol data that forced their climate models to better conform to the global cooling that was observed pre~1975.
Voila! Their models could hindcast (model the past) better using this fabricated aerosol data, and therefore must predict the future with accuracy. (NOT)
That is the evidence of fabrication of the aerosol data used in climate models that (falsely) predict catastrophic humanmade global warming.
And we are going to spend trillions and cripple our Western economies based on this fabrication of false data, this model cooking, this nonsense?
*************************************************
Allan MacRae
September 28, 2015 at 10:34 am
More from Doug Hoyt in 2006:
http://wattsupwiththat.com/2009/03/02/cooler-heads-at-noaa-coming-around-to-natural-variability/#comments
[excerpt]
Answer: Probably no. Please see Douglas Hoyt’s post below. He is the same D.V. Hoyt who authored/co-authored the four papers referenced below.
http://www.climateaudit.org/?p=755
Douglas Hoyt:
July 22nd, 2006 at 5:37 am
Measurements of aerosols did not begin in the 1970s. There were measurements before then, but not so well organized. However, there were a number of pyrheliometric measurements made and it is possible to extract aerosol information from them by the method described in:
Hoyt, D. V., 1979. The apparent atmospheric transmission using the pyrheliometric ratioing techniques. Appl. Optics, 18, 2530-2531.
The pyrheliometric ratioing technique is very insensitive to any changes in calibration of the instruments and very sensitive to aerosol changes.
Here are three papers using the technique:
Hoyt, D. V. and C. Frohlich, 1983. Atmospheric transmission at Davos, Switzerland, 1909-1979. Climatic Change, 5, 61-72.
Hoyt, D. V., C. P. Turner, and R. D. Evans, 1980. Trends in atmospheric transmission at three locations in the United States from 1940 to 1977. Mon. Wea. Rev., 108, 1430-1439.
Hoyt, D. V., 1979. Pyrheliometric and circumsolar sky radiation measurements by the Smithsonian Astrophysical Observatory from 1923 to 1954. Tellus, 31, 217-229.
In none of these studies were any long-term trends found in aerosols, although volcanic events show up quite clearly. There are other studies from Belgium, Ireland, and Hawaii that reach the same conclusions. It is significant that Davos shows no trend whereas the IPCC models show it in the area where the greatest changes in aerosols were occurring.
There are earlier aerosol studies by Hand and in other in Monthly Weather Review going back to the 1880s and these studies also show no trends.
So when MacRae (#321) says: “I suspect that both the climate computer models and the input assumptions are not only inadequate, but in some cases key data is completely fabricated – for example, the alleged aerosol data that forces models to show cooling from ~1940 to ~1975. Isn’t it true that there was little or no quality aerosol data collected during 1940-1975, and the modelers simply invented data to force their models to history-match; then they claimed that their models actually reproduced past climate change quite well; and then they claimed they could therefore understand climate systems well enough to confidently predict future catastrophic warming?”, he is close to the truth.
_____________________________________________________________________________
Douglas Hoyt:
July 22nd, 2006 at 10:37 am
Re #328
“Are you the same D.V. Hoyt who wrote the three referenced papers?” Yes.
“Can you please briefly describe the pyrheliometric technique, and how the historic data samples are obtained?”
The technique uses pyrheliometers to look at the sun on clear days. Measurements are made at air mass 5, 4, 3, and 2. The ratios 4/5, 3/4, and 2/3 are found and averaged. The number gives a relative measure of atmospheric transmission and is insensitive to water vapor amount, ozone, solar extraterrestrial irradiance changes, etc. It is also insensitive to any changes in the calibration of the instruments. The ratioing minimizes the spurious responses leaving only the responses to aerosols.
I have data for about 30 locations worldwide going back to the turn of the century. Preliminary analysis shows no trend anywhere, except maybe Japan. There is no funding to do complete checks.
_________________________________________
Here is a list of publications by Douglas V Hoyt. He is highly credible.
http://www.warwickhughes.com/hoyt/bio.htm
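[A sketch of why the pyrheliometric ratioing technique Hoyt describes is insensitive to instrument calibration – my own reconstruction, assuming simple Beer-Lambert extinction I(m) = I0 * T^m at air mass m:]

```python
def apparent_transmission(i0, t):
    """Hoyt-style ratioing using measurements at air mass 5, 4, 3 and 2."""
    # Beer-Lambert: measured irradiance at air mass m is i0 * t**m, where
    # t is the per-air-mass atmospheric transmission and i0 depends on
    # the instrument's calibration.
    i = {m: i0 * t ** m for m in (2, 3, 4, 5)}
    # Each ratio i(m+1)/i(m) equals t exactly, so i0 cancels out.
    ratios = (i[5] / i[4], i[4] / i[3], i[3] / i[2])
    return sum(ratios) / len(ratios)

# Two instruments with very different calibrations recover the same
# atmospheric transmission -- only aerosol changes can move the result.
print(apparent_transmission(i0=1000.0, t=0.90))  # 0.90
print(apparent_transmission(i0=1234.5, t=0.90))  # 0.90
```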

Bartemis
Reply to  JohnKnight
January 29, 2017 2:29 pm

Yeah, makes me think of that Monty Python skit:
Character 1: An argument is not the same as contradiction.
Character 2: It can be.
Character 1: No, it can’t! An argument is a collective series of statements to establish a definite proposition.
Character 2: No it isn’t.
https://youtu.be/kQFKtI6gn9Y

Nick Stokes
Reply to  JohnKnight
January 29, 2017 3:15 pm

“blatant hypocrisy”
No. Mike has made an unsubstantiated assertion of fact. It is up to him to substantiate it. Trying to cover all the possible ways he might have come to express that wrong claim is a mug’s game. If an argument is put to support the claim, that can be dealt with.

afonzarelli
Reply to  JohnKnight
January 29, 2017 3:49 pm

Nick, Mike put a link to an “earlier post” about tuning. (is it necessary to rehash every post he’s made on the subject?) My dim recollection is that he’s got an even lengthier, more detailed one than that from several years ago…

Nick Stokes
Reply to  JohnKnight
January 29, 2017 5:05 pm

Fonz,
“Mike put a link to an “earlier post” about tuning”
But it isn’t about tuning. Neither the word nor the practice are mentioned. And it isn’t mentioned at the link at the top of the page either.
I do remember someone, perhaps Mike, writing about tuning somewhere. But these are specific claims, and rather central to the post. I’m surprised that the skeptics here are satisfied to know that it is all justified somewhere, don’t care where or how. I’m not, because I know it is wrong.

JohnKnight
Reply to  JohnKnight
January 30, 2017 12:13 pm

Nick,
blatant hypocrisy
“No. Mike has made an unsubstantiated assertion of fact. It is up to him to substantiate it.”
This is a complex matter, as I see it, and I really meant what I said about not wanting to rag on you too much in this regard . . for if it were up to me people would not speak of what they believe to be so, without mentioning it’s what they believe to be so, which is what I “read into” the assertion in question here. I “have to” read that into a great many statements of fact I observe, because people do it all the time . . so to speak . . it seems to me ; )
I told you how it appeared to me, in this case, not so much as a defense of what Mr. Jonas did along the lines we are discussing, but as a form of advice. Readers in general seem (to me ; ) to be used to considered opinions being stated as “facts”, and I advise against doing so, within a complaint about someone else doing so. (“… your description of tuning is a ridiculous caricature.”)

afonzarelli
Reply to  Nick Stokes
January 29, 2017 3:32 pm

Nick, after a comment like this one, i think you deserve a “nick” name courtesy of yer humble fonz. How about “NICK TOKES” (as in, whatcha smokin’ man?!)… ☺

Darrell Demick
Reply to  afonzarelli
January 30, 2017 1:50 pm

“Skankhunt42” comes to mind, but I have already given that nickname to Griff ……

Kurt
Reply to  Nick Stokes
January 29, 2017 6:33 pm

This is a quote from a 2016 article in Science:
“Indeed, whether climate scientists like to admit it or not, nearly every model has been calibrated precisely to the 20th century climate records—otherwise it would have ended up in the trash. “It’s fair to say all models have tuned it,” says Isaac Held.”
I remember reading something similar about six months ago from a climate modeler who indicated that any model that didn’t replicate the observed 20th century upward trend in temperatures would be scrapped.
Granted, you can quibble about whether it’s the “water vapor and cloud” parameters that are being tuned, but clearly the models are being adjusted to match the feature that confirms the beliefs of the modelers.

micro6500
Reply to  Nick Stokes
January 30, 2017 7:19 am

Nick, here
http://www.cesm.ucar.edu/models/atm-cam/docs/description/node13.html#SECTION00736000000000000000
This is where they parameterize the water/air boundary, and adjust the amount of water vapor that is emitted. In reality a lot of molecules break away but fall back; it is this that they set. And it alone makes a big difference in how warm the models run. If I understand my model history, back in the ’70s or so they all ran cold, as mentioned in the OP; this is the hack they added: they bump up the percentage of water that leaves versus drops back.

Nick Stokes
Reply to  micro6500
January 30, 2017 10:30 am

Micro,
No, that isn’t tuning. The section is headed
“Adjustment of specific humidity to conserve water”
It is just enforcing mass conservation. It isn’t tuning to any observations. Mass conservation is one of the basic principles underlying solution.

micro6500
Reply to  Nick Stokes
January 30, 2017 10:48 am

I’ll just let the last section speak for itself.

3.3.7 Further discussion
There are still aspects of the numerical formulation in the finite volume dynamical core that can be further improved. For example, the choice of the horizontal grid, the computational efficiency of the split-explicit time marching scheme, the choice of the various monotonicity constraints, and how the conservation of total energy is achieved.
The impact of the non-linear diffusion associated with the monotonicity constraint is difficult to assess. All discrete schemes must address the problem of subgrid-scale mixing. The finite-volume algorithm contains a non-linear diffusion that mixes strongly when monotonicity principles are locally violated. However, the effect of nonlinear diffusion due to the imposed monotonicity constraint diminishes quickly as the resolution matches better to the spatial structure of the flow. In other numerical schemes, however, an explicit (and tunable) linear diffusion is often added to the equations to provide the subgrid-scale mixing as well as to smooth and/or stabilize the time marching.
The finite-volume dynamical core as implemented in CAM and described here conserves the dry air and all other tracer mass exactly without a “mass fixer”. The vertical Lagrangian discretization and the associated remapping conserves the total energy exactly. The only remaining issue regarding conservation of the total energy is the horizontal discretization and the use of the “diffusive” transport scheme with monotonicity constraint. To compensate for the loss of total energy due to horizontal discretization, we apply a global fixer to add the loss in kinetic energy due to “diffusion” back to the thermodynamic equation so that the total energy is conserved. However, it should be noted that even without the “energy fixer” the loss in total energy (in flux unit) is found to be less than 2 (W/m^2) with the 2-degree resolution, and much smaller with higher resolution. In the future, we may consider using the total energy as a transported prognostic variable so that the total energy could be automatically conserved.

The problem is that in the water conservation modules they internally allow rel humidity > 100% – that is, in the NASA GCMs – and this is the replacement section from CMIP.

Nick Stokes
Reply to  micro6500
January 30, 2017 11:40 am

micro,
There is a key thing missing – comparison with external measured data. That is what tuning is about. So what measured data is being used for tuning here?

micro6500
Reply to  Nick Stokes
January 30, 2017 12:27 pm

There is a key thing missing – comparison with external measured data. That is what tuning is about. So what measured data is being used for tuning here?

I would respectfully point out I didn’t design the parameterization, nor try to confirm it with actual measurements.
I suggest, for those people who think that models run hot as I do, it is exactly this parameterization that is the cause, as its goal is to maintain the regulation of water vapor generation in the model. And if there isn’t any additional water vapor feedback in the models, there isn’t any CAGW.

emsnews
January 29, 2017 12:26 pm

It still amuses me to see how everyone is still thinking we will have to worry about Interglacial Weather systems when the likelihood of another Ice Age looms in the future.

Sunsettommy
Reply to  emsnews
January 29, 2017 12:36 pm

emsnews,
We are in an ice age NOW. Have been for around 2.6 million years.
What you need to say is Glaciation instead:
“It still amuses me to see how everyone still is thinking we will have to worry about Interglacial Weather systems when the likelihood of another GLACIATION looms in the future”
Either an INTERGLACIAL or a GLACIAL period,one or the other.

Reply to  Sunsettommy
January 29, 2017 12:49 pm

The Alarmists say, “Maybe not”.

Gloateus Maximus
Reply to  Sunsettommy
January 30, 2017 7:56 am

The Pleistocene NH ice sheets began forming 2.6 Ma, but in Antarctica ~34 Ma. Thus the Cenozoic ice house dates from the early Oligocene.

robinedwards36
January 29, 2017 12:32 pm

Paul, the fact that CO2 (Mol Wt 44) is heavier than air, which has a mol wt of about 29, does not mean that it will tend to descend towards ground level. Once mixed with air it cannot spontaneously un-mix. Gasses do not work like that. Your friends have a good excuse to get angry!

Bartemis
Reply to  robinedwards36
January 29, 2017 2:14 pm

Yes, and no. With turbulent convection, the gases become well mixed. But, within the atmospheric boundary layer, calm conditions can trap heavier gases near to the surface where they can be absorbed.

Reply to  Bartemis
January 30, 2017 7:43 am

Nice switch there Bart, “Once mixed with air it cannot spontaneously un-mix. Gasses do not work like that”
Is not contradicted by the statement that in calm conditions locally generated CO2 doesn’t mix as fast!

Bartemis
Reply to  robinedwards36
January 30, 2017 9:41 am

No switch. Just pointing out that what was stated is not entirely correct. If heavier gases did not tend to settle lower within the boundary layer, we would not have smog problems. We would not be measuring CO2 in Mauna Loa when we could measure it more easily from the Griffith Observatory.

Editor
January 29, 2017 1:05 pm

Correction: In the post, the wording “there is more evaporation; about 7% more per degree C” is slack. It is correct if there is also 7% more precipitation, but the original Clausius-Clapeyron “7%” refers to saturation water vapour pressure, not to evaporation. h/t Frank (https://wattsupwiththat.com/2017/01/28/this-is-how-climate-works-part-1/#comment-2411341).

John in Oz
January 29, 2017 1:19 pm

NCAR’s 40 model runs to show the effects of small parameter changes are being lauded as being useful to other researchers, rather than showing how poor the modelling is.

“It took a village to make this ensemble happen and for it to be useful to and usable by the broad climate community,” Kay said. “The result is a large number of ensemble members, in a state-of-the-art climate model, with outputs asked for by the community, that is publicly available and relatively easy to access — it’s no wonder it’s getting so much use.”
Scientists have so far relied on the CESM Large Ensemble to study everything from oxygen levels in the ocean to potential geoengineering scenarios to possible changes in the frequency of moisture-laden atmospheric rivers making landfall. In fact, so many researchers have found the Large Ensemble so useful that Kay and Deser were honored with the 2016 CESM Distinguished Achievement Award, which recognizes significant contributions to the climate modeling community.
The award citation noted the pair was chosen because “the Large Ensemble represents one of NCAR’s most significant contributions to the U.S. climate research community. … At a scientific level, the utility of the Large Ensemble cannot be overstated.”

The assumption seems to be that:
– the model correctly emulates North American climate
– ALL of the start parameters are absolutely correct, even to a trillionth of a degree of temperature (presumably at every point in North America)
– the outputs are DATA

The dataset generated during the project, which is freely available, has already proven to be a tremendous resource for researchers across the globe who are interested in how natural climate variability and human-caused climate change interact.

The hubris, it hurts.

Armed with 40 different simulations, scientists can characterize the range of historic natural variability.

Nick Stokes
Reply to  John in Oz
January 29, 2017 1:28 pm

“ALL of the start parameters are absolutely correct, even to a trillionth of a degree of temperature (presumably at every point in North America)”
Quite the contrary. They are looking for something (the climate attractor) which does not depend on initial conditions. And that is just as well because, as you suggest, they actually don’t know those very well.
In fact, GCMs (and CFD calcs) usually are set running well before the period of interest, even though less is known about the earlier initial state. It is more important to let physical inconsistencies from the initial state settle, than to preserve initial information (which you can’t anyway).

Nick Stokes
January 29, 2017 1:21 pm

“Think about it. By changing the model’s initial global temperature by a trillionth of a degree”
Well, think about it. Think. There have been threads on WUWT saying how climate is a chaotic system, with essentially no dependence on initial conditions after any realistic interval. Instead, the focus is on finding attractors, as I set out here. For that, you actually rely on non-dependence on initial conditions. And that is true in all fluid dynamics, not just GCMs. That is why they do these runs with just any small initial perturbations. Each time they get a different trajectory with different (and maybe not fast) convergence to the attractor (climate).

Editor
Reply to  Nick Stokes
January 29, 2017 1:58 pm

Nick – If you’re dealing with this kind of system, you can’t benefit from bottom-up modelling. Bottom-up modelling can’t ever tell you where the butterfly’s wing flap will lead to. You have to scrap the bottom-up stuff. After that, your model is only as good as its parameterisations. Actually, your model only ever was as good as its parameterisations, because the butterfly wing flap in your model only goes where your parameterisations allow.

Tom Dayton
Reply to  Mike Jonas
January 29, 2017 2:13 pm

Mike, you need to learn about chaos. I suggest you start by reading the Basic tabbed pane here, then the Intermediate tabbed pane: https://skepticalscience.com/chaos-theory-global-warming-can-climate-be-predicted.htm. Then read Nick’s post that he linked for you: https://moyhu.blogspot.com.au/2016/11/lorenz-attractors-fluids-chaos-and.html. Then read Science of Doom’s series: https://scienceofdoom.com/roadmap/natural-variability-and-chaos/

Tom Dayton
Reply to  Mike Jonas
January 29, 2017 2:31 pm

Mike: On the RealClimate.org site, enter “chaos” in the Search field to get a list of good posts.

Bartemis
Reply to  Nick Stokes
January 29, 2017 2:21 pm

But, is there an attractor in the equations as coded? There does not appear to be a particularly strong one. The outcomes range far and wide. Claiming that the measurements fall within that very wide range, even if only at the very bottom edge, is applying the Texas Sharpshooter’s Fallacy – if the GCMs cover any potential outcome, then they have no explanatory power, and cannot be validated.

Nick Stokes
Reply to  Bartemis
January 29, 2017 3:22 pm

“But, is there an attractor in the equations as coded?”
You can only learn that by trying. And that tells you about how unique it is. Same with CFD. The attractor has many facets, and notably some are hard to pin down. The GCMs seem to be able to operate rather similarly over a substantial range of absolute temperature – ie you can add a constant offset and it will take a long time to restore. But the trend forced by change in GHGs, for example, is more sharply characterised.

Pop Piasa
Reply to  Bartemis
January 29, 2017 3:34 pm

There is no predicting the volcanoes, fires and dust storms which randomly occur. If these are not factored in, the model is of a process in a vacuum and has no relevance to reality.

Bartemis
Reply to  Bartemis
January 29, 2017 7:44 pm

“But the trend forced by change in GHGs, for example, is more sharply characterised.”
And, wrong. But, this is not a chaotic attractor in any case. It is a deterministic cause and effect in the GCMs.

Ragnaar
Reply to  Bartemis
January 29, 2017 8:42 pm

Bartemis:
I have a question for Nick Stokes on this thread similar to yours. Without attractors in the equations, does chaotic, two regime behavior reveal itself? If we get two or three distributions of conditions over the course of the run, I guess conceptually we are seeing the attractor.

Editor
Reply to  Bartemis
January 29, 2017 11:08 pm

Thx, Bartemis – nail – head. If 40 completely identical[*] runs give 40 results that vary by more than the last century’s changes, then I don’t care how beautiful their theory is, the facts show that their model is useless for climate prediction.
[*] in any real-world sense.
I did a t-o-h calc while in Tassie the other day: if a police officer goes to the Liawenee police station in winter and lights the fire, the global temperature goes up a trillionth of a degree, and then some. So if Colorado is very hot or cold in 2100, blame the climate models and the Tasmanian police.

paqyfelyc
Reply to  Nick Stokes
January 29, 2017 3:49 pm

“the focus is on finding attractors”
No it’s not. The attractor (singular) doesn’t depend on initial conditions such as GHG concentration or temperature. It stays the same, depending only on the system itself.
So the whole IPCC modelling only makes sense if the system is changed, in such a way that it changes attractor, depending on the initial condition of GHG concentration. But even in such a case, you’d need to characterize beforehand the attractor of the first system, to show the difference. Was it done? I saw no evidence of that.
Anyway those attractors are simply impossible for a human eye to see, having far too many parameters. To pretend that you can show them with a few dozen lines or an image like fig 2.2 is gross.

Michael Kelly
Reply to  Nick Stokes
January 29, 2017 4:21 pm

Nick – I’ve commented before on the nature of turbulence as it is calculated in numerical solutions of the Navier Stokes equations. It is my opinion (and I emphasize that it is only that at the moment) that numerical solutions introduce chaos where none would exist otherwise. Here is a great example: http://epubs.siam.org/doi/pdf/10.1137/S003614450342911X
The paper illustrates that a pair of autonomous, non-linear ordinary differential equations exhibits chaotic behavior in a solution produced by MATLAB’s Runge-Kutta-Fehlberg 4-5 order solver. Chaotic behavior is impossible in a system with only two variables. The author then derives the analytical solution (which is very well-behaved), and subsequently delves into the numerics. It turns out that the RKF-45 integrator, which is an adaptive step-size algorithm, increases its step size to a value that is unstable numerically. The algorithm subsequently responds to large error terms by reducing step size. It is the entirety of the process that produces “attractors,” but they are not attractors in the chaos theory sense.
I’ve found approximate analytical solutions to the Lorenz equations (using homotopy), and they are not sensitive to initial conditions nor chaotic. I have a little more work to do, but I believe that the Lorenz equations have an exact solution. That should clear up any remaining ambiguity.
It may be that the recursive nature of numerical solutions (which differs in kind from analytical solutions) introduces the chaos that resembles turbulence in N-S solutions. Having said that, there is nothing whatsoever in the Navier Stokes equations that would produce turbulence in any analytical solution. I’m not sure what this all means, but it does point to a problem that is much deeper than what we have been pondering.

paqyfelyc
Reply to  Michael Kelly
January 29, 2017 4:41 pm

“Chaotic behavior is impossible in a system with only two variables.”
you bet … The logistic map exhibits chaotic behavior with a single variable.
In the real world, the double pendulum has only two variables but is usually chaotic nonetheless. However, for some values of the parameters, the system does have approximate analytical solutions, being not chaotic.
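[A minimal demonstration of the logistic map point: one variable, fully chaotic at r = 4, where a 10^-12 difference in the starting value grows to order one within roughly 40 iterations:]

```python
r = 4.0
x, y = 0.3, 0.3 + 1e-12  # two starting points differing by 1e-12

# Logistic map: x_{n+1} = r * x_n * (1 - x_n)
for n in range(1, 61):
    x, y = r * x * (1 - x), r * y * (1 - y)
    if n % 15 == 0:
        print(f"n = {n:2d}  |x - y| = {abs(x - y):.2e}")
# The gap grows by roughly a factor of 2 per step (Lyapunov exponent
# ln 2), so the trajectories decorrelate completely by n ~ 40-45.
```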

Nick Stokes
Reply to  Michael Kelly
January 29, 2017 7:24 pm

Mike Kelly,
“Here is a great example”
It is an interesting example, and I’ll look at it in more detail. But at first look it goes like this. They concede that the time stepping gets into a region where it doesn’t resolve properly, and spurious growing solutions result. That is not very surprising; what is more surprising is that chaos appears. I think the reason for this is that in RK45 you don’t have just two variables any more. The intermediate variables bump this up to eight or ten, and once the resolution is lost, the system is no longer emulating the DE. It is a nonlinear recurrence relation with at least eight variables. So chaos is quite possible.
So what about GCMs? The control on timestep is the Courant condition, which basically insists that any spatial waves that can be propagated are adequately resolved. This can fail, and the solution will blow up. Otherwise it does not get into the regime of creating spurious chaos. The real N-S chaos is enough.
I wrote a series of posts about this late last year. The third is here; the earlier ones, of varying correctness, are linked.
I’m skeptical about an analytic solution to Lorenz, let alone N-S. But I’m all ears.
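
For readers unfamiliar with the Courant condition invoked above, a toy check of the idea for 1-D advection; the wind speed and grid spacing are illustrative orders of magnitude, not values from any particular GCM.

```python
# Explicit advection schemes need the Courant number C = u*dt/dx <= 1.
def courant_number(u, dt, dx):
    """Dimensionless Courant number for 1-D advection."""
    return u * dt / dx

u  = 50.0       # wind speed, m/s (jet-stream-like, assumed)
dx = 100_000.0  # grid spacing, m (a 100 km cell, a typical GCM order)

for dt in (600.0, 1800.0, 3600.0):   # candidate timesteps, seconds
    c = courant_number(u, dt, dx)
    verdict = "stable" if c <= 1.0 else "UNSTABLE - solution will blow up"
    print(f"dt = {dt:6.0f} s  ->  C = {c:.2f}  {verdict}")
```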

Michael Kelly
Reply to  Michael Kelly
January 29, 2017 7:55 pm

Nick – The CFL criterion is one that ensures stability in the sense that information will be propagated to each node (or cell) prior to any change due to adjacent cell changes. But the CFL condition isn’t the only guarantor of stability. In fact, the climate modelers have built into their models algorithms to damp out wild oscillations caused by incorrect initial conditions. Those become part of the solution of the PDEs, in a manner not accounted for in the overall mathematical theory.
You noted yourself that the RK-45 solution introduces more variables. I agree. In fact, it introduces the current value of each dependent variable, which is nowhere present in an analytical solution. I believe that any recursive function differs in kind (significantly) from a solution involving continuous, differentiable functions. They are not the same thing, and aren’t in the same universe. That is different from the question of whether the Navier Stokes equations truly describe fluid flow (I don’t believe they do). But I think it shows that one cannot prove their connection to reality one way or another via numerical methods.

Michael Kelly
Reply to  Michael Kelly
January 30, 2017 10:41 am

paqyfelyc – The Poincaré–Bendixson theorem shows that differential equations in either one or two variables are never chaotic. The theorem doesn’t apply to discrete systems, which can be chaotic with only one variable (as the logistic map example shows). That is part of my thinking on this subject. The fact that discrete, recursive equations can produce chaos should be a clue that discretizing differential equations (and solving them recursively) may introduce chaos where none is present.

Bartemis
Reply to  Michael Kelly
January 30, 2017 8:07 pm

Michael – It was obvious to me that the logistic map example was inapplicable as you were clearly speaking of continuous time systems. The example of the double pendulum is not so obvious.
One can easily google locations on the web that say the double pendulum is chaotic, but perhaps it is described more precisely by the third possibility in the Poincaré–Bendixson theorem:

a connected set composed of a finite number of fixed points together with homoclinic and heteroclinic orbits connecting these.

OTOH, it can be very sensitive to initial conditions, which I think is owing to the existence of unstable equilibria. Thoughts?

Mike Flynn
Reply to  Nick Stokes
January 29, 2017 4:42 pm

Nick Stokes,
Your understanding of chaos is deeply flawed. Even for initial inputs to the three Lorenz equations you refer to in your blog link, there are infinitely many values which result in zero, infinitely many which result in stability, and infinitely many which result in chaos and strange attractors.
The Skeptical Science link you provide is likewise deeply flawed.
Climate (the average of weather over time) prediction is impossible. This has been explicitly stated by the IPCC – “In climate research and modelling, we should recognise that we are dealing with a coupled non-linear chaotic system, and therefore that the long-term prediction of future climate states is not possible.”
Even delusional climatologists are forced to acknowledge reality on occasion. The fact that they then proceed to ignore reality, doesn’t turn fiction into fact.
Chaos rules! No useful climate predictions – better than a twelve-year-old could do – are possible.
Cheers.

Nick Stokes
Reply to  Mike Flynn
January 29, 2017 7:00 pm

Mike,
“Your understanding of chaos is deeply flawed.”
I have been dealing with, and publishing in, computational fluid dynamics and differential equations for most of my professional life. And you?
“Climate (the average of weather over time) prediction is impossible. This has been explicitly stated by the IPCC”
This is part of a never-ending story where skeptics, convinced that scientists are lying through their teeth, seize on occasions where someone has blurted out the truth. That requires ripping things out of context, but also, often, just not reading properly. Anything that sounds about right will do.
Here they didn’t say that climate prediction was impossible. They said that prediction of future climate states is not possible. A climate state is the collection of climate variables at a point in time. And to see the obviousness of what they are saying, you need only look at a spaghetti plot of CMIP results. They don’t agree on any climate state. But they do agree, more or less and given time, on the climate.
The full quote (not ripped out of context) makes this clear:

In sum, a strategy must recognise what is possible. In climate research and modelling, we should recognise that we are dealing with a coupled non-linear chaotic system, and therefore that the long-term prediction of future climate states is not possible. The most we can expect to achieve is the prediction of the probability distribution of the system’s future possible states by the generation of ensembles of model solutions. This reduces climate change to the discernment of significant differences in the statistics of such ensembles.

And that is exactly what projects like CMIP5 do. They collect ensembles and study their statistics.
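
The ensemble idea can be illustrated without any climate physics at all. A toy Monte Carlo, with arbitrary numbers: individual runs of a noisy process with a small common drift disagree wildly on their endpoints, while the ensemble mean recovers the drift.

```python
import numpy as np

rng = np.random.default_rng(1)
n_runs, n_years = 40, 100
drift, noise = 0.01, 0.25  # per-year trend vs. interannual noise (both assumed)

# Each row is one "run": cumulative sum of drift plus random year-to-year noise.
paths = np.cumsum(drift + noise * rng.standard_normal((n_runs, n_years)), axis=1)

ends = paths[:, -1]
print(f"Run endpoints span {ends.min():+.2f} to {ends.max():+.2f}")
print(f"Ensemble mean {ends.mean():+.2f} vs. true drift {drift * n_years:+.2f}")
```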

Mike Flynn
Reply to  Mike Flynn
January 29, 2017 10:21 pm

Nick Stokes,
Your appeal to your own authority is less than convincing. The undistinguished mathematician Gavin Schmidt apparently believes he is a scientist. And you?
Maybe you could actually quote me, and point out any factual errors.
Merely asserting “But they do agree, more or less and given time, on the climate” gives an impression of vagueness and practical irrelevance.
Whether or not climatologists, given time, more or less agree on the climate is beside the point. Climate is the average of weather – easily calculated by an average twelve-year-old child.
Climatology is less useful than astrology, to some. At least astrological projections are far cheaper than climatological projections, and equally valid, it would appear.
Cheers.

ptolemy2
Reply to  Mike Flynn
January 30, 2017 2:44 am

Mike, Nick
My own view of chaos in climate is not that it means “you can’t predict climate”. This is a council of despair. Although there is a non-deterministic element to chaos, I believe that by using the correct mathematics of nonlinear and chaotic systems (and by understanding what “chaos” actually means – the boundary of chaos is more interesting and important than chaos/turbulence itself), useful predictions can be made.
Even the “acute sensitivity to initial conditions” is not a very fruitful avenue to pursue in relation to climate and chaos. Such high sensitivity is not unique to chaotic systems and doesn’t get us very far in terms of analysing climate.
Chaos means that climate can change through internal dynamics without always needing a net forcing from outside – cf. Lorenz’s key paper “Deterministic Nonperiodic Flow” (1963).
However, except for short-term predictions, models should be treated as analogies and examples of what can happen, rather than claiming to predict actual outcomes, since chaotic dynamics may indeed close that door.

Mike Flynn
Reply to  Mike Flynn
January 30, 2017 5:26 am

ptolemy2,
I assume you mean “counsel of despair”, rather than “council of despair”, but no matter.
I disagree with your belief that using mathematics – correct or otherwise – can lead to useful predictions of future climate states. By “useful”, I mean better than you, I, or any reasonably competent twelve-year-old can do, using naive predictive methodology based on what has been observed in the past.
Possibly, so-called climatologists use “incorrect” mathematics, or maybe share a collective delusion based on inflated opinions of their intellects, abilities, and importance to society. Their track record is dismal, with no measurable benefit of any kind to humanity in general, to date.
I am sure you will provide evidence to the contrary, if such exists.
Cheers.

brambo4
Reply to  Nick Stokes
January 29, 2017 5:29 pm

Nick,
Is there any literature where I can read someone who actually works on GCMs defending their product against criticisms like the above?
Thanks

Nick Stokes
Reply to  brambo4
January 29, 2017 7:02 pm

The AR5 is a good place to start, beginning with Chapter 9. They answer informed criticisms.

Ragnaar
Reply to  Nick Stokes
January 29, 2017 8:27 pm

Nick Stokes:
You said attractor, and your link above shows plots mostly similar to the two-winged butterfly. Is it fair to say that the GCMs have at least 2 attractors in practice? A warming GMST would be one and a cooling GMST would be another. A pause could be regular jumping between the 2. Sea ice has 2 attractors, growing and receding. Each grid cell has 2 attractors, warming or cooling. Is it true to say that none of this needs to be guided by Lorenz-type equations, as the behavior reveals itself in the results?
BTW, I agree with spinning the GCMs up, no problem with that. You spin up an airplane wing by going fast down the runway. Halfway down the runway conditions 10 seconds earlier no longer matter. What matters is the indicated air speed.

Nick Stokes
Reply to  Ragnaar
January 29, 2017 10:15 pm

Ragnaar,
“Is it fair to say that the GCMs have at least 2 attractors in practice?”
Well, I would say the Lorenz system has just one attractor, of butterfly shape (but 1 butterfly). But maybe there is an analogy with switching glacial states.
In fact, the Lorenz system has 3 special points (the origin is one, plus one for each wing), resulting from the fact that if you set the derivatives to zero and solve, you get a cubic. That’s about as simple as it gets. But it’s possible that more complex systems could have many such points but only two or three that mattered.
I think your idea about the wing is right. When you start to think of real fluid flows, it’s hard to think what initial state might mean.
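
The “3 special points” can be written down directly: setting the Lorenz derivatives to zero gives the origin plus one fixed point per wing. A quick check with the standard parameter values:

```python
import math

sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
r = math.sqrt(beta * (rho - 1.0))   # from x = y and x**2 = beta*(rho - 1)

fixed_points = [(0.0, 0.0, 0.0), (r, r, rho - 1.0), (-r, -r, rho - 1.0)]
for p in fixed_points:
    print(tuple(round(v, 3) for v in p))   # origin plus the two wing centres
```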

Hivemind
January 29, 2017 1:33 pm

“But the models only allow for 2-3% more precipitation”
This is something I have seen in project management. You’re given a bottom line for the budget by a previous project manager, or your boss. But when you go through the figuring, you discover that it is short. So you rejig all the numbers in the budget so it will come to the right total. Sometimes it happens the other way: the budget is too high, in which case you don’t give the money back, you rejig the numbers so it seems you will spend it all.
Then something happens, e.g. contracts are let for a different amount, so you have to go through and rejig the rejigged numbers. Long story short, the budget bears less and less resemblance to reality and more to justifying the number you had in the first place.
Just like with global warming.

paqyfelyc
January 29, 2017 4:03 pm

The trouble with tuning:
“With four parameters I can fit an elephant, and with five I can make him wiggle his trunk. ”
That’s exactly what the IPCC models do: fit the climate data to a snake (with far more than 5 parameters), and then have it wiggle its head.
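
Von Neumann’s elephant has an actual published four-complex-parameter construction, but the tuning point can be made with something more generic. A sketch on synthetic data: a model with as many parameters as observations matches the record exactly, and says nothing reliable outside it.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 12)
obs = np.sin(2 * np.pi * t) + 0.1 * rng.standard_normal(t.size)  # fake "record"

loose = np.polyfit(t, obs, deg=11)   # 12 parameters: interpolates the record
tight = np.polyfit(t, obs, deg=3)    # a parsimonious alternative

t_out = 1.2                          # a point outside the tuning period
print("12-parameter fit :", np.polyval(loose, t_out))
print(" 4-parameter fit :", np.polyval(tight, t_out))
print("           truth :", np.sin(2 * np.pi * t_out))
```

The 12-parameter fit is perfect over the tuning period and wildly wrong one step beyond it, which is the whole trouble with tuning.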

Tom Dayton
Reply to  paqyfelyc
January 29, 2017 7:41 pm

paqyfelyc: “Tuning” does not mean what you think it does: http://www.realclimate.org/index.php/archives/2008/11/faq-on-climate-models/

January 29, 2017 4:36 pm

I would like to correct a few things in this article.
First, precipitation. The climate models have almost exactly the same increase in precipitation as in water vapor. It can increase at the same rate because water vapor cycles through the atmosphere about 40 times per year, so if there is a 22% increase in water vapor by 2100, precipitation can increase by 21.9% and the math still works.
Second, the feedback impact from water vapor is close to 3 times higher than that from clouds in the theory. The article says that clouds are higher.
Generally okay but the facts should be double-checked.

co2islife
January 29, 2017 4:45 pm

OK, here is another climate-related article. Shoot holes in it if you can. It explains why the physics of the CO2 molecule pretty much rules out CO2 as the cause of climate change.
Climate “Science” on Trial; CO2 is a Weak GHG, it has no DiPole
https://co2islife.wordpress.com/2017/01/30/climate-science-on-trial-co2-is-a-weak-ghg-it-has-no-dipole/

Tom Dayton
Reply to  co2islife
January 29, 2017 5:18 pm

CO2 lacks a dipole only in its ground state. What matters for IR absorption is its dipole moment during bond stretching. I’m glad to have satisfied your request for hole shooting. http://butane.chem.uiuc.edu/pshapley/GenChem1/L15/web-L15.pdf

co2islife
Reply to  Tom Dayton
January 29, 2017 5:27 pm

I think that is what the article covered: CO2 is a weak dipole.

Tom Dayton
Reply to  co2islife
January 29, 2017 5:51 pm

Wow, co2islife, your reality distortion field is strong. CO2 is a strong greenhouse gas, despite having no dipole moment in its ground state. Remove your tin foil hat before reading.

co2islife
Reply to  Tom Dayton
January 29, 2017 5:55 pm

Wrong, did you even read the article? Every one of those charts pretty much proves CO2 is essentially meaningless. Address a single graphic in that article. CO2 has the physical structure of the weakest of weak greenhouse gases. Bending is the weakest movement, and that is all CO2 does, a weak bend. 15 microns is very very low energy IR. There is nothing about CO2 that makes it a strong and potent GHG. Once again, point out where a single graph I produced is wrong.

Reply to  Tom Dayton
January 30, 2017 8:11 am

co2islife January 29, 2017 at 5:55 pm
Wrong, did you even read the article? Every one of those charts pretty much proves CO2 is essentially meaningless. Address a single graphic in that article. CO2 has the physical structure of the weakest of weak greenhouse gases. Bending is the weakest movement, and that is all CO2 does, a weak bend. 15 microns is very very low energy IR. There is nothing about CO2 that makes it a strong and potent GHG. Once again, point out where a single graph I produced is wrong.

Bending is not the weakest motion. In its ground state CO2 does have a dipole (just not a permanent one); in the ground state CO2 is constantly vibrating and only for a minuscule part of that vibration does it pass through a zero-dipole state. CO2 has 4 vibrational modes: two (degenerate) bending modes, a symmetric stretch and an asymmetric stretch. 15 microns just happens to be very close to the maximum in the Earth’s IR emission spectrum, and so CO2 is ideally situated to be a strong absorber of that radiation. The asymmetric stretch of CO2 occurs at 2400 cm-1 (~4 microns), which is way out on the tail of the Earth’s energy distribution and so isn’t significant (though maybe on Venus).
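
The claim that 15 µm sits near the Earth’s emission peak is a quick Planck’s-law check; T = 288 K is the usual mean-surface figure, and the script below is mine, not the commenter’s.

```python
import numpy as np

h, c, k = 6.626e-34, 2.998e8, 1.381e-23   # Planck, light speed, Boltzmann (SI)

def planck(lam_um, T=288.0):
    """Spectral radiance B(lambda, T) in W sr^-1 m^-3, wavelength in microns."""
    lam = lam_um * 1e-6
    return (2 * h * c**2 / lam**5) / np.expm1(h * c / (lam * k * T))

lams = np.linspace(3.0, 50.0, 4701)
peak = lams[np.argmax(planck(lams))]
print(f"Wien peak at 288 K : {peak:.1f} um")                        # ~10 um
print(f"B(15 um) / B(peak) : {planck(15.0) / planck(peak):.2f}")    # ~0.7
```

15 µm is not the peak itself (that is nearer 10 µm) but still carries roughly 70% of the peak spectral radiance, which is the sense in which the band is well situated.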

Tom Dayton
Reply to  co2islife
January 29, 2017 6:51 pm

co2islife: So you believe the U. of Illinois Urbana-Champaign Chemistry 102 class lecture note I gave you is wrong when it says, after explaining CO2’s dipole moments, that “Carbon dioxide doesn’t have a molecular dipole in its ground state. However, some CO2 vibrations produce a structure with a molecular dipole. Because of this, CO2 strongly absorbs infrared radiation.”? H2O is a feedback, not a forcing; CO2 is a forcing.

co2islife
Reply to  Tom Dayton
January 30, 2017 6:21 pm

Tom, CO2 doesn’t have a permanent dipole – that is pointed out in the article. Didn’t you read it? Fact is, “bending” is the lowest-energy movement of all the movements, also pointed out in the article. Fact is, CO2 is a very, very weak GHG, as the article points out. The IPCC’s own models prove that the consensus belief way overstates the power of GHGs. You may want to read the article again, study the charts and visit SpectralCalc.
Climate “Science” on Trial; CO2 is a Weak GHG, it has no Permanent DiPole
https://co2islife.wordpress.com/2017/01/30/climate-science-on-trial-co2-is-a-weak-ghg-it-has-no-dipole/

co2islife
Reply to  Tom Dayton
January 30, 2017 6:56 pm

Tom, here are the graphics you seem to have missed. Remember, the only movement relevant to the GHG effect is the bending at 13 to 18 microns. That is it. Bending takes a lot less energy than compressing and stretching.

Tom Dayton
Reply to  co2islife
January 29, 2017 7:16 pm

co2islife: See AR5, Chapter 8, Figure 8.6 on page 677, for radiative forcings of CO2 and other gases, major and minor: http://www.ipcc.ch/report/ar5/wg1/

co2islife
Reply to  Tom Dayton
January 30, 2017 6:27 pm

Tom, you don’t seem to understand that all these “theories” have been tested… at great expense… and failed miserably. They have failed – do not pass go. What good is a link to an organization that produces garbage like this? In any real world, this is game over for your beloved “theory.”

co2islife
Reply to  Tom Dayton
January 30, 2017 7:00 pm

Tom, H2O simply overwhelms CO2. CO2 is redundant; H2O does all the work.

Reply to  Tom Dayton
February 1, 2017 8:57 am

co2islife January 30, 2017 at 7:00 pm
Tom, H2O simply overwhelms CO2. CO2 is redundant, H2O does all the work.

As your graph shows, that would be true if the Earth’s surface temperature were 700 K – fortunately it is not!
CO2 dominates in the region where the Earth actually emits, 10-20 µm.

Tom Dayton
Reply to  co2islife
January 29, 2017 7:30 pm

co2islife: An explanation of the fallacy of CO2 saturation and overlap with absorption by other gases: http://www.realclimate.org/index.php/archives/2007/06/a-saturated-gassy-argument-part-ii/comment-page-9/

Reply to  Tom Dayton
January 30, 2017 8:39 am

Tom,
The spectral data is clear about the interaction between CO2 and H2O absorption, as some fraction of this absorbed energy is re-emitted as a Planck spectrum by condensed liquid water. It’s not much, but we see it as more than a 3 dB reduction in the 15 µm energy emitted by the planet (about 4 dB) as compared to saturated lines where there is no overlap. The 15 µm energy is reduced slightly as the transparent-window emissions are increased slightly, and this explains the ‘excess’ attenuation in the 15 µm band. Nonetheless, the net split between absorbed energy going into space and returning to the surface is still 50/50.
BTW, this is another validated prediction of my hypothesis that the macroscopic behavior of the planet must conform to the laws of physics. You would think that this should be obvious, but then again, it doesn’t fit the narrative.
https://wattsupwiththat.com/2017/01/05/physical-constraints-on-the-climate-sensitivity/

co2islife
Reply to  Tom Dayton
January 30, 2017 6:29 pm

RealClimate? Run by the guy who created the “Hockey Stick”?
Smoking Gun #15: Climate “Science” Temperature Reconstructions are not reproducible outside the “Peer Review” community
https://co2islife.wordpress.com/2017/01/17/climate-science-on-trial-the-forensic-files-exhibit-o/

gymnosperm
Reply to  Tom Dayton
January 30, 2017 7:36 pm

Hardly a refutation of a fallacy. They dig up a HITRAN image showing a fog of P and R branch rotational lines around the vibrational transitions. They neglect to mention that the P branch rotations are destructive – that is, they reduce the energy of the molecule.
Check out the Y axis units in Gavin’s graphic below. “Absorption” is an unfortunately loaded and basically illiterate term in the seemingly deliberately confusing and arcane world of spectroscopy. In language it is customary to keep the endings congruent. The opposite of transmission should be absorption. Nope. The opposite of transmission is absorptANCE. Absorption is mathematically defined as a unitless concept essentially equal to “optical depth”.
This unitless concept completely ignores INTENSITY.
Below is a table of the relative intensities of CO2 vibrational transitions.

Reply to  Tom Dayton
February 1, 2017 9:33 pm

gymnosperm January 30, 2017 at 7:36 pm
Hardly a refutation of a fallacy. They dig up a HITRAN image showing a fog of P and R branch rotational lines around the vibrational transitions. They neglect to mention that the P branch rotations are destructive – that is, they reduce the energy of the molecule.

There’s a good reason for that: it isn’t true!
The P branch represents transitions from the ground vibrational state to the upper vibrational state, i.e. an increase in energy. The fine structure represents the small difference in energy increase caused by the change in the rotational level accompanying the vibrational change.
P branch: (v=0, j) -> (v=1, j-1)
Q branch: (v=0,j) -> (v=1, j)
R branch: (v=0,j) -> (v=1, j+1)
All of the branches represent an increase in the energy of the molecule, between ~640 cm-1 and ~700 cm-1.
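
A back-of-envelope check on those numbers, using the rigid-rotor approximation with standard constants for CO2’s bending band (band origin ~667 cm^-1, rotational constant B ~0.39 cm^-1). Real spectra add intensity envelopes and nuclear-spin selection rules that this sketch ignores.

```python
nu0 = 667.4   # band origin of the CO2 bend, cm^-1
B   = 0.39    # rotational constant, cm^-1

p_lines = [nu0 - 2 * B * j for j in range(1, 41)]        # P branch: J -> J-1
r_lines = [nu0 + 2 * B * (j + 1) for j in range(0, 40)]  # R branch: J -> J+1

print(f"P branch: {min(p_lines):.1f} to {max(p_lines):.1f} cm^-1")
print(f"R branch: {min(r_lines):.1f} to {max(r_lines):.1f} cm^-1")
# Every line in both branches is an absorption: the molecule's total energy
# goes up; only the accompanying rotational change differs between P and R.
```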

willhaas
January 29, 2017 8:35 pm

Apparently they hard-code in that an increase in CO2 causes warming, so the simulation models beg the question and their results are of no value.
H2O feedback has to be negative, as evidenced by the fact that the wet lapse rate is significantly less than the dry lapse rate. The feedback also has to have been negative for the Earth’s climate to have been as stable as it has over at least the past 500 million years – because we are here.
In reality, doubling the amount of CO2 in the atmosphere will diminish the dry lapse rate just a tad, enough to reduce the climate sensitivity of CO2 by more than a factor of 20.
The weather service that I access only projects the weather for 10 days with roughly 50% confidence, and every day they are having to adjust their predictions. That gives one an idea as to how good the weather simulations are. For climate they apparently start with a weather simulation and increase the spatial and temporal sampling intervals and add code to simulate how they expect CO2 to affect climate. The increase in the sampling intervals could make the simulations unstable, so their results may be dominated by the instability.

Chris in oz
January 30, 2017 1:59 am

What matters is whether the models have predictive capability or not. If they don’t, then unless you know why they don’t, the argument about how the models work is a bit pointless. They simply don’t have predictive capability – but the business cases for all the investments to prevent the warming do, and those are real money. Thus there is an eternal disconnect: you can calculate the discount rate for the cost of the investments against the predicted or projected future temperatures and economic consequences, which you are practically unable to predict or project in the first place.

Nicholas Schroeder
January 30, 2017 7:03 am

It is the thermal conductivity, i.e. the composite insulative properties, of that square column of air that warms the earth, per Q = U * A * dT, same as the walls of your house. The alleged 33 C difference between atmosphere/no atmosphere is flat-out bogus.
BTW, NASA defines ToA as 100 km or 62 miles. It’s 68 miles between Colorado Springs and Denver. Contemplate that for a moment.
That’s not just thin, that’s ludicrously thin.
33 C refutation found at following link.
http://writerbeat.com/?search=schroeder&category=all&followers=all
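
For reference, the building heat-loss formula the comment borrows, with illustrative numbers for a house wall; whether the same bookkeeping applies to an atmosphere in which radiation and convection dominate is exactly what is in dispute here.

```python
def heat_loss_w(u_value, area_m2, delta_t_k):
    """Conductive heat loss Q = U * A * dT, in watts, through a flat layer."""
    return u_value * area_m2 * delta_t_k

# An insulated wall: U ~ 0.3 W/(m^2 K), 100 m^2, 20 K inside-outside difference.
print(heat_loss_w(0.3, 100.0, 20.0), "W")   # -> 600.0 W
```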

co2isnotevil
Reply to  Nicholas Schroeder
January 30, 2017 8:29 am

An important omission in the 33 C claim is that the system is incrementally reflecting significant power from clouds, ice and snow, which are all a direct response to the forcing. So while it’s warming by 33 C, it’s also cooling by about half this amount. Without clouds, ice and snow, the average surface temperature would be close to 270 K, not 255 K.

Nicholas Schroeder
Reply to  co2isnotevil
January 30, 2017 10:59 am

So what would the earth be like without an atmosphere?
The average solar constant is 1,368 W/m^2, with an S-B BB temperature of 390 K, or 17 C higher than the boiling point of water under sea-level atmospheric pressure – which would no longer exist. The oceans would boil away, removing the googols of tons of pressure that keep the molten core in place. The molten core would push through, flooding the surface with dark magma and changing both emissivity and albedo. With no atmosphere, a steady rain of meteorites would pulverize the surface to dust, same as the moon. The earth would be much like the moon, with a similar albedo (0.12) and large swings in surface temperature from lit side to dark side.
No clouds, no vegetation, no snow, no ice: a completely different albedo, certainly not the current 30%. No molecules means no convection, no conduction, no latent energy; surface absorption/radiation would be anybody’s guess. Whatever the conditions of the earth would be without an atmosphere, they are most certainly NOT 240 W/m^2 and 255 K per ACS et al.
Or, for that matter, 270 K.

co2isnotevil
Reply to  Nicholas Schroeder
January 30, 2017 3:03 pm

“The average solar constant is 1,368 W/m^2, with an S-B BB temperature of 390 K, or 17 C higher than the boiling point of water under sea-level atmospheric pressure – which would no longer exist.”
No. This is already the case, and the oceans are just fine. At the equator under clear skies, the noontime solar input is about equal to the solar constant.
To determine the planet-wide average, you need to divide 1368 by 4. Then subtract the roughly 12% corresponding to the albedo of the Moon, which is what the Earth would have without an atmosphere, and convert to an EQUIVALENT AVERAGE temperature with S-B. The Moon has much higher daytime highs only because its day is about 28 Earth days long. If Earth rotated that slowly, noontime temperatures would be far higher, although nighttime temperatures would be a lot cooler.
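
The arithmetic in this exchange is a one-liner with Stefan-Boltzmann; sigma and the 1368 W/m^2 solar constant are the standard values both commenters use.

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def t_equiv(flux_w_m2):
    """Equivalent black-body temperature for an absorbed flux."""
    return (flux_w_m2 / SIGMA) ** 0.25

S = 1368.0
print(f"Flat plate, full sun      : {t_equiv(S):.0f} K")                   # ~394 K
print(f"Sphere, lunar albedo 0.12 : {t_equiv(S / 4 * (1 - 0.12)):.0f} K")  # ~270 K
print(f"Sphere, albedo 0.30       : {t_equiv(S / 4 * (1 - 0.30)):.0f} K")  # ~255 K
```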

co2isnotevil
January 30, 2017 8:25 am

“The results from the 40 runs were staggeringly different.”
I’ve done an awful lot of modelling of many kinds, and it’s been my experience that when you change the initial conditions and the answers do not converge to the same values, there is definitely something wrong with the model – more often than not, an uninitialized variable. Varying the initial conditions and verifying a consistent result is one of the first smoke tests I would run on any model.
Regarding the climate as a coupled non-linear system: this really only affects the path from one equilibrium state to another, not what the next equilibrium state will be consequent to some change. When comparing power in and power out at a macroscopic scale, the climate system is quite linear. You expect this, since Joules are Joules: each can do an equivalent amount of work, and warming the planet takes work.
This is a plot of post-albedo power input vs power emitted by the planet. Each little dot is 1 month of data for a 2.5-degree slice of latitude, directly from or derived from ISCCP data produced by GISS. There are about 3 decades of data.
http://www.palisad.com/co2/sens/pi/po.png
The system becomes even more linear (especially towards the equator) when post albedo input power is plotted against the surface emissions equivalent to its temperature.
http://www.palisad.com/co2/sens/pi/se.png
More demonstrations of linearity are here along with the relationships between many different climate system variables:
http://www.palisad.com/co2/sens
I’ll bet anything that if you extract the same data from a GCM, few, if any, of these measured relationships will be met. The proper way to tune a GCM is to tune it to match these macroscopic relationships and not to tune it to match ‘expectations’.
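
A toy version of that smoke test, on a zero-dimensional energy-balance model; the heat capacity and input flux are illustrative, and a real GCM test would compare far more than one number. Runs started from very different temperatures should land on the same equilibrium.

```python
SIGMA    = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
C        = 4.0e8    # column heat capacity, J m^-2 K^-1 (illustrative)
ABSORBED = 239.4    # post-albedo input, W/m^2

def run(T0, years=200, dt=86400.0):
    """March dT/dt = (absorbed - emitted)/C forward from temperature T0."""
    T = T0
    for _ in range(int(years * 365)):
        T += dt * (ABSORBED - SIGMA * T**4) / C
    return T

finals = [run(T0) for T0 in (200.0, 255.0, 310.0)]
print([f"{T:.3f} K" for T in finals])
assert max(finals) - min(finals) < 1e-3, "smoke test failed: ICs did not converge"
```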

micro6500
Reply to  co2isnotevil
January 30, 2017 8:58 am

The system becomes even more linear (especially towards the equator)

Because its cooling profile will be more dominated by the slower rate of cooling.

Reply to  micro6500
January 30, 2017 9:07 am

micro6500,
“Because it’s cooling profile will be more dominated by the slower rate of cooling.”
Perhaps. My take on this is from a more macroscopic perspective: if the emissions of the surface in equilibrium with the Sun are non-linear in the forcing from the Sun, then there will be larger changes in entropy as the system transitions from one equilibrium state to another. A natural system with sufficient degrees of freedom (in this case clouds, or more precisely the ratio of cloud area to cloud height) will self-organize to minimize the change in entropy when transitioning between states. This is basically a requirement of the Second Law.

micro6500
Reply to  co2isnotevil
January 30, 2017 9:54 am

George

Perhaps. My take on this is from a more macroscopic perspective, where if the emissions of the surface in equilibrium with the Sun is non linear to the forcing from the Sun, then there will be larger changes in entropy as the system transitions from one equilibrium state to another.

This is what I have been showing you: the two states for clear-sky calm days, and the transition between the two states – high cooling rate and low cooling rate – and why they do it.
And how it feeds into your results.

Svend Ferdinandsen
January 30, 2017 2:12 pm

Climate, in all its ways, is statistics of weather over some period, usually defined as 30 years.
So to define a climate, you have to measure the weather over that time scale. It is the same for predictions (projections) of the future: you need to start with the weather, then you can make the statistics.
Is it clear now that these projections are of very low value?
Anyway, it is all based on an assumed warming from more CO2, so why don’t they just figure that out with simple energy (power) calculations? Why involve the unpredictable weather? We have weather anywhere on the globe at any temperature, so let it be what it is: weather.

January 30, 2017 4:49 pm

“This NCAR report shows unequivocally that the climate models in their current form can never predict future climate.”
Mike
The climate system is chaotic and inherently unpredictable. Climate models are not intended to predict future climate. The best they can do is ‘predict’ the statistics of climate – the probability distributions of some key climate variables. The key issue is whether CO2 can produce a statistically significant effect, or whether its effect is indistinguishable from natural variability.

Keith J
January 30, 2017 5:50 pm

All models are wrong. Some models are useful.
Pre-Copernican models of the solar system, with their geocentricity, are absurdly complex and today quite comical, with some planets moving at relativistic velocity. But the astrologers were adamant about the adjustments required to make the observations fit the dogma.

Michael Kelly
January 31, 2017 12:50 pm

Bartemis – The number of “dimensions” can be confusing. The motion of the double pendulum can be described by two physical variables (the angles of each arm), but that isn’t the same as the number of degrees of freedom. In the Hamiltonian formulation of a problem, the number of degrees of freedom associated with N variables is 2N-1. The double pendulum thus has 3 degrees of freedom. A bigger puzzler would be the Duffing oscillator, which is chaotic with one degree of freedom (it’s just a little harder to determine the real number of degrees of freedom).

josh
February 1, 2017 10:59 am

The author is effectively lying. His Figure 2.2 shows only 8 of 30 runs in North America for the boreal winter trend over 34 years; that is not even close to being the global average. Go here for the actual paper that data set comes from:
http://journals.ametsoc.org/doi/full/10.1175/BAMS-D-13-00255.1
Compare Figs. 4 and 5; you will see that this region was cherry-picked because the NA Northwest was particularly sensitive to fluctuations, but the global picture is clear: warming was predicted in all runs of the model, and warming has been observed as predicted. This is best illustrated by Fig. 2 of the same paper.
http://journals.ametsoc.org/na101/home/literatum/publisher/ams/journals/content/bams/2015/15200477-96.8/bams-d-13-00255.1/20150904/images/large/bams-d-13-00255.1-f2.jpeg
The grey lines show the range of ensemble runs; the red shows observations (before the recent spike in heat that returned us to near the ensemble mean). The CESM runs show that 1) all runs predict global warming, 2) year-to-year variability in any particular run will deviate from the ensemble mean, and observations are in line with this, and 3) local climate will vary around the global mean – in high latitudes this can even mean cooling in a few regions, although on longer time scales this becomes less likely.
Predicting climate is like predicting a casino. No one can tell you with 100% certainty that a particular player will be up or down 30 years from now, but we can tell you that the house always makes money in the long run.

PaulH
February 2, 2017 7:31 am

Among other things, that adjustment of a trillionth of a degree seems to cause what looks like a propagation of floating-point arithmetic errors in the simulations.

Michael S. Kelly
Reply to  PaulH
February 2, 2017 10:13 pm

I’ve seen threads where someone asks whether two different runs with identical initial conditions on the same machine would produce the same result. The answer has been a resounding “NO” from all of the experts, even skeptical experts. But it isn’t necessarily so. Though one would expect the same code running on the same machine with the same initial conditions to always produce the same results (and I’m not talking about Lorenz’s experience), the fact is that there are random bit errors in machines. Very rare, but in a calculation as large as a 100-year climate prediction, one has on the order of 1E18 floating point operations. A bit error rate of 1E-12 is difficult to achieve in any machine, but one having such huge memory as the supercomputers running climate models would be unable to avoid them. A single-event upset anywhere along the line would give totally different results between two runs of the same model with the same initial conditions, and one would expect thousands of them in such a big run. However, given the cost of a run, I doubt that anyone has ever checked this. Instead, there is probably incessant tweaking of both the model and the initial conditions between runs. I doubt that this effect has ever even been considered.
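
Taking the comment’s figures at face value, the expected number of upsets in one run is a one-line calculation; note that it comes out nearer a million than thousands.

```python
flops_total    = 1e18    # floating point operations in a century run (as stated)
bit_error_rate = 1e-12   # errors per operation (as stated)

print(f"Expected upsets per run: {flops_total * bit_error_rate:.0e}")   # 1e+06
```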

Michael S. Kelly
Reply to  Michael S. Kelly
February 2, 2017 10:15 pm

Sorry, I meant to say a resounding “YES”. All machines running the same code with the same initial conditions, according to the experts, should produce exactly the same results. I don’t think that’s true.