Guest essay by Mike Jonas
Continued from Part 1
3. The Models Can Never Work
In an earlier post, Inside the Climate Computer Models, I explained how the climate computer models, as currently structured, could never work.
Put simply, they are not climate models, they are weather models, because they operate on small(ish) pieces of ocean or atmosphere over very short periods (sometimes as little as 20 minutes). By definition, conditions in a small place over a short time are weather.
Figure 2.1. Ocean and atmosphere subdivision in the models. From the IPCC here.
But because their resolution has been coarsened so that they can be run over long periods of time (a century or so), they are less accurate than the weather models used by the world’s weather bureaus. Those weather models can forecast conditions no more than a few days ahead. The climate models become inaccurate even sooner. All output from all climate computer models – as currently structured – is a work of fiction.
Confirmation of all of the above was recently provided by the (US) National Center for Atmospheric Research (NCAR). They performed 40 climate model runs covering the 180-year period 1920 to 2100. All of the runs were absolutely identical except that “With each simulation, the scientists modified the model’s starting conditions ever so slightly by adjusting the global atmospheric temperature by less than one-trillionth of one degree.”
The results from the 40 runs were staggeringly different. Predicted temperature changes for North America over the 1963-2012 period were shown, and they differ from each other by several degrees C over large areas. They even disagree on whether areas get warmer or cooler.
Figure 2.2. The last 8 runs by NCAR. Scale is deg C.
Think about it. By changing the model’s initial global temperature by a trillionth of a degree – ridiculously far below the accuracy to which the global temperature can be measured – and without any other changes, the model produced results for major regions that varied by several degrees C. The world’s weather stations will surely never be able to measure global temperature to anything like as small a margin as 0.000000000001 deg C, yet that one microscopic change alone causes a model’s results to change by several times as much as the whole of the 20th century global temperature change. And, of course, there are many other equally important parameters that cannot be established to anything like that kind of accuracy.
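This kind of extreme sensitivity to initial conditions is the hallmark of a chaotic system. A minimal illustrative sketch (the toy Lorenz-63 system, not a climate model) shows how two runs that start a trillionth apart in one variable end up bearing no resemblance to each other:

```python
# Illustrative only: the Lorenz-63 toy system, not a climate model.
# Two runs differing by one part in a trillion in a single initial
# value diverge until they are completely different.
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 equations."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return np.array([x + dt * dx, y + dt * dy, z + dt * dz])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-12, 0.0, 0.0])   # perturb one variable by a trillionth

for step in range(1, 5001):
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 1000 == 0:
        print(f"step {step:5d}  separation = {np.linalg.norm(a - b):.3e}")
# The separation grows by many orders of magnitude until the two runs
# occupy entirely different parts of the attractor.
```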
This NCAR report shows unequivocally that the climate models in their current form can never predict future climate.
4. The Tuning Disaster
4.1 How the Models were Tuned
In another earlier post, How reliable are the climate models?, I explained how the way in which the climate computer models were tuned led to major roles being assigned to the elements of climate that were least understood. Basically, when there was a discrepancy between observation and model results – and there were plenty of those – they fiddled the parameters till they got a match. [Yes, really!]. Anything that was well-understood couldn’t be fiddled with, so things they didn’t understand were used to fill the gaps.
The major problem that they had was that Earth’s climate had warmed much more over the ‘man-made CO2’ period (about 1950 onwards) than could be explained by CO2 alone, as shown in 2.5 (in Part 1). They manipulated the parameters for water vapour and clouds, without checking the physical realities, until they could match 20th-century temperatures. Both factors were portrayed as feedbacks to CO2. Bingo! The models showed all the late 20th-century temperature rise as being caused by CO2.
The modellers assumed that in the longer term cloud cover didn’t change naturally but changed only in reaction to – you guessed it – warming by CO2. So cloud cover as a natural process never participated in the model tuning, and all the warming of the ocean ended up being attributed to CO2. The only way that could happen was by the atmosphere warming the ocean.
No wonder that everything has gone pear-shaped since then.
4.2 Water Vapour Feedback
When the ocean warms – for any reason – there is more evaporation; about 7% more per degree C. Water vapour is a GHG, so that leads to more warming. That is all in the models, and it’s OK (apart from the reason for ocean warming).
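The roughly 7% per degree figure follows from the Clausius-Clapeyron (C-C) relation for saturation vapour pressure. A rough check, assuming standard values (latent heat of vaporisation L ≈ 2.5×10⁶ J/kg, water-vapour gas constant R_v ≈ 461 J/(kg·K), surface temperature T ≈ 288 K):

```latex
\frac{d\ln e_s}{dT} = \frac{L}{R_v T^2}
\approx \frac{2.5\times 10^{6}}{461 \times 288^{2}}
\approx 0.065\ \text{K}^{-1} \approx 6\text{--}7\%\ \text{per degree C}
```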
But the models only allow for 2-3% more precipitation. In 2.1.1 (in Part 1) I cited evidence that precipitation also increased at the higher rate. The fifth IPCC report also virtually admitted a higher rate: “the global mean surface downward longwave flux is about 10 W m–2 larger than the average in climate models, probably due to insufficient model-simulated cloud cover or lower tropospheric moisture []. This is consistent with a global-mean precipitation rate in the real world somewhat larger than current observational estimates.” The model tuning process has therefore assigned more warming to water vapour feedback than it should have (the water cycle is part of the water vapour feedback).
Figure 1.2 (in Part 1) shows 78 Wm-2 of latent heat transfer from ocean to atmosphere. Much of that process occurs in the tropics, where the latent heat is transferred to the tops of clouds by tropical storms: the warm moist air is convected upwards until it cools enough for the water vapour to condense, releasing the latent heat and forming clouds, and the moisture then falls back as precipitation. So the latent heat is released in the cloud-tops. 4% (the difference between the full C-C 7% and the models’ 3%) of 78 Wm-2 is 3.1 Wm-2. When energy is released in the cloud-tops, nearly all of it will radiate upwards or be reflected upwards, so nearly all of it is lost to space.
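The 3.1 Wm-2 figure is just that percentage difference applied to the 78 Wm-2 latent heat flux, on the author’s assumptions:

```latex
(7\% - 3\%) \times 78\ \mathrm{W\,m^{-2}} = 0.04 \times 78\ \mathrm{W\,m^{-2}} \approx 3.1\ \mathrm{W\,m^{-2}}
```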
From the water cycle alone, water vapour feedback is therefore overestimated in the models by something like 3 Wm-2. And it all comes from the way the models are tuned.
This is very significant: downward IR from a doubling of atmospheric CO2 is put at 3.7 Wm-2.
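For reference, the 3.7 Wm-2 figure comes from the widely used simplified forcing expression of Myhre et al. (1998), evaluated for a doubling of CO2 concentration from C₀ to C = 2C₀:

```latex
\Delta F = 5.35\,\ln\!\left(\frac{C}{C_0}\right)\ \mathrm{W\,m^{-2}}
\quad\Rightarrow\quad
\Delta F_{2\times\mathrm{CO_2}} = 5.35\,\ln 2 \approx 3.7\ \mathrm{W\,m^{-2}}
```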
4.3 Cloud Feedback
The IPCC assign even more positive feedback to clouds than they do to water vapour. They even assign more warming to cloud feedback than they assign to CO2 itself. None of it comes from physics, it comes only from the model tuning process where they still needed a lot more warming from CO2 to match the observed global warming.
As illustrated in Figure 1.1 (in Part 1), and as described in Richard Lindzen’s “Iris” hypothesis, cloud feedback to global warming is likely to be negative. [Figure 1.1 is actually empirical confirmation of the “Iris” hypothesis].
It is difficult to overstate the stupidity of tuning climate models without checking the underlying physics, or at least acknowledging the huge uncertainties. To continue to treat the models’ output as reliable predictions of future climate, despite the multiple lines of contrary evidence presented, is surely hubris of the first order. It is certainly unscientific.
5. The Non-Linear Climate
At all times, it is necessary to bear in mind that Earth’s climate is a non-linear system.
This does make it rather difficult to unravel, because we are all much more used to linear thinking.
It means that any search for a correlation and any extrapolation of any data is even more dodgy than usual: a pattern which is clearly visible today might disappear in future. On finding a GCR-cloud connection, Laken et al. (2010) comment: “However, [two other studies] may be inherently flawed, as they assume a first-order relationship (i.e. presuming that cloud changes consistently accompany GCR changes), when instead, a second-order relationship may be more likely (i.e. that cloud changes only occur with GCR changes if atmospheric conditions are suitable).”
As the IPCC itself said (Third Assessment Report, WG1): “we should recognise that we are dealing with a coupled nonlinear chaotic system, and therefore that the long-term prediction of future climate states is not possible.” I suggest Kip Hansen’s article on WUWT for further reading.
Continued in Part 3.
Mike Jonas (MA Maths Oxford UK) retired some years ago after nearly 40 years in I.T.
Abbreviations
AMO – Atlantic Multidecadal Oscillation
APS – American Physical Society
AR4 – Fourth IPCC Report
AR5 – Fifth IPCC Report
C – Centigrade or Celsius
C-C – Clausius-Clapeyron relation
CAGW – Catastrophic Anthropogenic Global Warming
CO2 – Carbon Dioxide
ENSO – El Niño Southern Oscillation
EUV – Extreme Ultra-Violet
GCR – Galactic Cosmic Ray
GHG – Greenhouse gas
IPCC – Intergovernmental Panel on Climate Change
IR – Infra-Red
ISCCP – International Satellite Cloud Climatology Project
ITO – Into The Ocean [Band of Wavelengths approx 200nm to 1000nm]
NCAR – (US) National Center for Atmospheric Research
nm – Nanometre
PDO – Pacific Decadal Oscillation
ppm – Parts Per Million
SCO – the Sun-Cloud-Ocean hypothesis
SW – Short Wave
THC – Thermohaline Circulation
TSI – Total Solar Irradiance
UAH – The University of Alabama in Huntsville
UV – Ultra-Violet
W/m2 or Wm-2 – Watts per Square Metre
What matters is whether the models have predictive capability or not. If they don’t, then unless you know why they don’t, the argument about how the models work is a bit pointless. They simply don’t have predictive capability – but the business cases for all the investments to prevent the warming do. They are real money. Thus there is an eternal disconnect: you can calculate the discount rate for the cost of the investments against the predicted or projected future temperatures and economic consequences, which you are practically unable to predict or project in the first place.
It is the thermal conductivity, i.e. the composite insulative properties, of that square column of air that warms the earth per Q = U * A * dT, the same as the walls of your house. The alleged 33 C difference between atmosphere/no atmosphere is flat bogus.
BTW NASA defines ToA as 100 km or 62 miles. It’s 68 miles between Colorado Springs and Denver. Contemplate that for a moment.
That’s not just thin, that’s ludicrous thin.
33 C refutation found at following link.
http://writerbeat.com/?search=schroeder&category=all&followers=all
An important omission of the 33C claim is that the system is incrementally reflecting significant power from clouds, ice and snow, which are all a direct response to the forcing. So while it’s warming by 33C, it’s also cooling by about half this amount. Without clouds, ice and snow, the average surface temperature would be close to about 270K and not 255K.
So what would the earth be like without an atmosphere?
The average solar constant is 1,368 W/m^2 with an S-B BB temperature of 390 K or 17 C higher than the boiling point of water under sea level atmospheric pressure, which would no longer exist. The oceans would boil away, removing the google tons of pressure that keep the molten core in place. The molten core would push through, flooding the surface with dark magma, changing both emissivity and albedo. With no atmosphere, a steady rain of meteorites would pulverize the surface to dust, same as the moon. The earth would be much like the moon, with a similar albedo (0.12) and large swings in surface temperature from lit to dark sides.
No clouds, no vegetation, no snow, no ice: a completely different albedo, certainly not the current 30%. No molecules means no convection, conduction or latent energy, and surface absorption/radiation would be anybody’s guess. Whatever the conditions of the earth would be without an atmosphere, it is most certainly NOT 240 W/m^2 and 255K per ACS et al.
Or for that matter – 270 K.
“The average solar constant is 1,368 W/m^2 with an S-B BB temperature of 390 K or 17 C higher than the boiling point of water under sea level atmospheric pressure, which would no longer exist. ”
No. This is already the case and the oceans are just fine. At the equator under clear skies, the noon time solar input is about equal to the solar constant.
To determine the planet-wide average, you need to divide 1368 by 4. Then subtract the roughly 12% corresponding to the albedo of the Moon, which is what the Earth would have without an atmosphere, and convert to an EQUIVALENT AVERAGE temperature with SB. The Moon has much higher daytime highs only because its day is about 28 Earth days long. If Earth rotated that slowly, noon-time temperatures would be far higher, although night-time temperatures would be a lot cooler.
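A quick check of the arithmetic in this exchange, applying the Stefan-Boltzmann law to the two albedos mentioned (0.12 moon-like, 0.30 as conventionally assumed):

```latex
T_{a=0.12} = \left(\frac{(1368/4)\,(1-0.12)}{5.67\times10^{-8}}\right)^{1/4} \approx 270\ \mathrm{K},
\qquad
T_{a=0.30} = \left(\frac{(1368/4)\,(1-0.30)}{5.67\times10^{-8}}\right)^{1/4} \approx 255\ \mathrm{K}
```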
“The results from the 40 runs were staggeringly different.”
I’ve done an awful lot of modelling of many kinds, and it’s been my experience that when you change the initial conditions and the answers do not converge to the same values, there is definitely something wrong with the model, more often than not an uninitialized variable. Varying initial conditions and verifying a consistent result is one of the first smoke tests I would run on any model.
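A minimal sketch of that kind of smoke test, using a toy (non-chaotic) relaxation model rather than a GCM; the function and numbers are hypothetical, purely to show the pattern:

```python
# Toy smoke test: perturb the initial condition and check that a
# stable, non-chaotic model relaxes to the same equilibrium.
def relax(temp0, forcing=240.0, feedback=3.2, steps=10_000, dt=0.01):
    """Toy energy-balance relaxation: dT/dt = forcing - feedback * T."""
    t = temp0
    for _ in range(steps):
        t += dt * (forcing - feedback * t)
    return t

base = relax(temp0=10.0)
perturbed = relax(temp0=10.0 + 1e-12)

# For a well-posed, non-chaotic model the two answers agree closely;
# a large difference here would point to a bug such as an
# uninitialized variable.
assert abs(base - perturbed) < 1e-6, "initial-condition sensitivity detected"
print(f"base = {base:.6f}, perturbed = {perturbed:.6f}")
```

For a genuinely chaotic system, of course, this is exactly the test that fails, which is what the NCAR ensemble illustrates.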
Regarding the climate as a coupled non-linear system, this really only affects the path from one equilibrium state to another, not what the next equilibrium state will be in consequence of some change. When comparing power in and power out at a macroscopic scale, the climate system is quite linear. You expect this since Joules are Joules and each can do an equivalent amount of work, and warming the planet takes work.
This is a plot of post-albedo power input vs power emitted by the planet. Each little dot is 1 month of data for each 2.5 degree slice of latitude, directly from or derived from ISCCP data produced by GISS. There are about 3 decades of data.
http://www.palisad.com/co2/sens/pi/po.png
The system becomes even more linear (especially towards the equator) when post albedo input power is plotted against the surface emissions equivalent to its temperature.
http://www.palisad.com/co2/sens/pi/se.png
More demonstrations of linearity are here along with the relationships between many different climate system variables:
http://www.palisad.com/co2/sens
I’ll bet anything that if you extract the same data from a GCM, few, if any, of these measured relationships will be met. The proper way to tune a GCM is to tune it to match these macroscopic relationships and not to tune it to match ‘expectations’.
Because its cooling profile will be more dominated by the slower rate of cooling.
micro6500,
“Because its cooling profile will be more dominated by the slower rate of cooling.”
Perhaps. My take on this is from a more macroscopic perspective, where if the emissions of the surface in equilibrium with the Sun are non-linear in the forcing from the Sun, then there will be larger changes in entropy as the system transitions from one equilibrium state to another. A natural system with sufficient degrees of freedom (in this case clouds, or more precisely the ratio of cloud area to cloud height) will self-organize to minimize the change in entropy when transitioning between states. This is basically a requirement of the Second Law.
George
This is what I have been showing you: the two states for clear-sky calm days, the transition between the two states (high cooling rate and low cooling rate), and why they do it.
And how it feeds into your results,
http://www.maxphoton.com/linear-thinking-cyclical-world/
Climate, in all its ways, is the statistics of weather over some period of time, mostly defined as 30 years.
So to define a climate, you have to measure the weather over this time scale. It is the same for predictions (projections) of the future: you need to start with the weather, then you can make the statistics.
Is it clear now that these projections are of very low value?
Anyway, it is all based on an assumed warming by more CO2, so why don’t they just figure that out with simple energy (power) calculations? Why involve the unpredictable weather? We have weather anywhere on the globe at any temperature, so let it be what it is, weather.
“This NCAR report shows unequivocally that the climate models in their current form can never predict future climate.”
Mike
The climate system is chaotic and inherently unpredictable. Climate models are not intended to predict future climate. The best they can do is ‘predict’ the statistics of climate – the probability distributions of some key climate variables. The key issue is whether CO2 can produce a statistically significant effect or whether its effect is indistinguishable from natural variability.
All models are wrong. Some models are useful.
Pre-Copernican models of the solar system with geocentricity are absurdly complex and, today, quite comical, with some planets moving at relativistic velocity. But the astrologers were adamant in their adjustments required to make the observations fit the dogma.
Bartemis – The number of “dimensions” can be confusing. The motion of the double pendulum can be described by two physical variables (the angles of each arm), but that isn’t the same as the number of degrees of freedom. In the Hamiltonian formulation of a problem, the number of degrees of freedom associated with N variables is 2N-1. The double pendulum thus has 3 degrees of freedom. A bigger puzzler would be the Duffing oscillator, which is chaotic with one degree of freedom (it’s just a little harder to determine the real number of degrees of freedom).
The author is effectively lying. His Figure 2.2 shows only 8 of 30 runs in North America for the boreal winter trend over 34 years; that is not even close to being the global average. Go here for the actual paper that data set comes from:
http://journals.ametsoc.org/doi/full/10.1175/BAMS-D-13-00255.1
Compare Figs. 4 and 5 and you will see that this region was cherry-picked because the NA Northwest was particularly sensitive to fluctuations, but the global picture is clear: warming was predicted in all runs of the model and warming has been observed as predicted. This is best illustrated by Fig. 2 of the same paper.
http://journals.ametsoc.org/na101/home/literatum/publisher/ams/journals/content/bams/2015/15200477-96.8/bams-d-13-00255.1/20150904/images/large/bams-d-13-00255.1-f2.jpeg
The grey lines show the range of ensemble runs, the red shows observations (before the recent spike in heat that returned us to near the ensemble mean). The CESM runs show that 1) all runs predict global warming, 2) year-to-year variability in any particular run will deviate from the ensemble mean and observations are in line with this, and 3) local climate will vary around the global mean; in high latitudes this can even mean cooling in a few regions, although on longer time scales this becomes less likely.
Predicting climate is like predicting a casino. No one can tell you with 100% certainty that a particular player will be up or down 30 years from now, but we can tell you that the house always makes money in the long run.
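The analogy can be made concrete with a toy simulation (the 1% house edge below is an arbitrary assumed number): individual players are as unpredictable as weather, while the aggregate behaves like climate statistics.

```python
# Toy casino: each player's outcome is noisy, the house edge is not.
import random

random.seed(0)
EDGE = 0.01               # assumed: house keeps 1% of each bet on average
N_PLAYERS, N_BETS = 1000, 10_000

house, winners = 0.0, 0
for _ in range(N_PLAYERS):
    player = 0.0
    for _ in range(N_BETS):
        # Fair 1-unit coin-flip bet, minus the house edge.
        player += (1.0 if random.random() < 0.5 else -1.0) - EDGE
    house -= player        # the house takes the other side of every bet
    winners += player > 0

print(f"players still ahead after {N_BETS} bets: {winners}/{N_PLAYERS}")
print(f"house profit: {house:,.0f} units (expected ~{EDGE * N_PLAYERS * N_BETS:,.0f})")
```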
Among other things, that adjustment of a trillionth of a degree seems to cause what looks like a propagation of floating point arithmetic errors in the simulations.
I’ve seen threads where someone asks if two different runs with identical initial conditions on the same machine would produce the same result. The answer has been a resounding “NO” from all of the experts, even skeptical experts. But it isn’t necessarily so. Though one would expect the same code running on the same machine with the same initial conditions to always produce the same results (and I’m not talking about Lorenz’ experience), the fact is that there are random bit errors in machines. Very rare, but in a calculation as large as a 100 year climate prediction, one has on the order of 1E18 floating point operations. A bit error rate of 1E-12 is difficult to achieve in any machine, but one having such huge memory as the supercomputers running climate models would be unable to avoid them. A single-event-upset anywhere along the line would give totally different results between two runs of the same model for the same initial conditions, and one would expect thousands of them in such a big run. However, given the cost of a run, I doubt if anyone has ever checked this. Instead, there is probably incessant tweaking of both the model and the initial conditions between runs. I doubt if this effect has ever even been considered.
Sorry, I meant to say a resounding “YES”. All machines running the same code with the same initial conditions, according to the experts, should produce exactly the same results. I don’t think that’s true.
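A toy illustration of the single-event-upset point: flipping one bit of a double-precision value can change it by a little or by orders of magnitude, depending on which bit is hit (purely illustrative, not taken from any GCM code):

```python
# Flip one bit of a float64 and see how much the value changes.
import struct

def flip_bit(x: float, bit: int) -> float:
    """Return x with the given bit (0 = least significant) flipped."""
    (as_int,) = struct.unpack("<Q", struct.pack("<d", x))
    (flipped,) = struct.unpack("<d", struct.pack("<Q", as_int ^ (1 << bit)))
    return flipped

temp = 288.15                       # a typical surface temperature in kelvin
for bit in (0, 30, 52, 62):         # low/high mantissa bits, exponent bits
    changed = flip_bit(temp, bit)
    print(f"bit {bit:2d}: {temp!r} -> {changed!r} (delta = {changed - temp:.3e})")
# A low mantissa bit barely matters; an exponent bit changes the value
# by orders of magnitude, so an undetected upset mid-run can send a
# long simulation somewhere completely different.
```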