Is Climate Science Settled? (Now Includes September Data)

Guest Post by Werner Brozek, Professor Robert Brown from Duke University and Just The Facts

[Cartoon: climate “control knobs”. Image Credit: Josh]

In order for climate science to be settled, there are many requirements. I will list four for now, although I am sure you can think of many more. Then I will expand on those.

1. We must know all variables that can affect climate.

2. We must know how all variables are changing over time.

3. We must know how each changing variable affects climate.

4. We must know about all non-linear changes that take place as a result of changes to variables.

As for the variables affecting climate, Just The Facts has done a superb job compiling many of them on WUWT’s Potential Climatic Variables Reference Page.

If you have an hour, there is lots of good reading here. For now, I will just give the main topics, but note that all main topics have an array of sub topics.

1. Earth’s Rotational Energy

2. Orbital Energy, Orbital Period, Orbital Spiral, Elliptical Orbits (Eccentricity), Tilt (Obliquity), Wobble (Axial precession) and Polar Motion

3. Gravitation

4. Solar Energy

5. Geothermal Energy

6. Outer Space/Cosmic/Galactic Effects

7. Earth’s Magnetic Field

8. Atmospheric Composition

9. Albedo

10. Biology

11. Chemical

12. Physics

13. Known Unknowns

14. Unknown Unknowns

If you know some more that should be added, please let us know.

The above covers my point 1. As for points 2 and 3, for each of the items listed above we need to know whether the changes, if any, are linear, exponential, logarithmic, sinusoidal, random, or follow some other pattern. For example, depending on whom you talk to and the interval you consider, our emissions of carbon dioxide could be growing exponentially, while the increase in atmospheric concentration could be linear, and the temperature effect could be logarithmic. Then there are asteroids, which could be totally random. As for point 4, the easiest example is a sealed ball of air at 30 °C and a relative humidity of 90%. When this is cooled, the gas molecules do not simply slow down indefinitely. At a certain point, the water molecules move so slowly that hydrogen bonds cause molecules to stick together after collisions, so liquid water or ice forms. Further cooling causes the various gases to condense to their liquid states and then freeze to their solid states.
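To make the three shapes in that CO2 example concrete, here is a minimal sketch in Python. The emission and concentration series are made-up illustrations, not real data; the logarithmic expression is the commonly used simplified form for CO2 forcing, delta-F = 5.35 ln(C/C0) W/m^2:

```python
import numpy as np

years = np.arange(1960, 2016)

# Hypothetical illustrative series (not real data):
emissions = 10.0 * 1.02 ** (years - 1960)     # exponential: ~2% growth per year
concentration = 315.0 + 1.5 * (years - 1960)  # linear: +1.5 ppm per year

# Commonly used simplified logarithmic forcing expression, in W/m^2:
forcing = 5.35 * np.log(concentration / concentration[0])

# Under a logarithm, every doubling adds the same increment:
print(5.35 * np.log(2.0))  # about 3.7 W/m^2 per doubling of CO2
```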

Further to this last point, Professor Brown offered a very interesting response to a question on a previous post. His comment is reproduced below and ends with his initials rgb:

rgbatduke

October 2, 2015 at 10:36 am

It’s not a law of nature, but outside of Le Chatelier’s principle, a more modern version (in case anyone is still reading this thread) is Prigogine’s Self-Organization of dissipative systems.

https://en.wikipedia.org/wiki/Self-organization

Self-organization as a concept preceded Prigogine, but he quantified it and moved it from the realm of philosophy and psychology and cybernetics to the realm of physics and the behavior of nonlinear non-equilibrium systems.

To put it into a contextual nutshell, an open, non-equilibrium system (such as a gas being heated on one side and cooled on the other) will tend to self-organize into structures that increase the dissipation of the system, that is, facilitate energy transport through the system. The classic contextual example of this is the advent of convective rolls in a fluid in a symmetry breaking gravitational field. Convection moves heat from the hot side to the cold side much, much faster than conduction or radiation does, but initially the gas has no motion but microscopic motions of the molecules and (if we presume symmetry and smoothness in the heated surface and boundaries) experiences only balanced, if unstable, forces. However, those microscopic motions contain small volumes that are not symmetric, that move up or down. These small fluctuations nucleate convection, at first irregular and disorganized, that then “discovers” the favored modes of dissipation, adjacent counter rotating turbulent rolls that have a size characteristic of the geometry of the volume and the thermal imbalance.

The point is that open fluid dynamical differentially heated and cooled systems spontaneously develop these sorts of structures, and they have some degree of stability or at least persistence in time. They can persist a long time — see e.g. the great red spot on Jupiter. The reason that this is essentially a physical, or better yet a mathematical, principle is evident from the wikipedia page above — Prigogine won the Nobel Prize because he showed that this sort of behavior has a universal character and will arise in many, if not most open systems of sufficient complexity. There is a deep connection between this theory and chaos — essentially that an open chaotic system with “noise” is constantly being bounced around in its phase space, so that it wanders around through the broad stretches of uninteresting critical points until it enters the basin of attraction of an interesting one, a strange attractor. At that point the same noise drives it diffusively into a constantly shifting ensemble of comparatively tightly bound orbits. At that point the system is “stable” in that it has temporally persistent behavior with gross physical structures with their own “pseudoparticle” physics and sometimes even thermodynamics. This is one of the things I studied pretty extensively back when I did work in open quantum optical systems.

There is absolutely no question that our climate is precisely a self-organized system of this sort. We have long since named the observed, temporally persistent self-organized structures — ENSO, the Monsoon, the NAO, the PDO. We can also observe more transient structures that appear or disappear such as the “polar vortex” or “The Blob” (warm patch in the ocean off of the Pacific Northwest) or a “blocking high”. Lately, we had “Hurricane Joaquin”. Anybody can play — at this point you can visit various websites and watch a tiny patch of clouds organize into a thunderstorm, then a numbered “disturbance with the potential for tropical development”, then a tropical depression, and finally into a named storm with considerable if highly variable and transient structure.

All of these structures tend to dissipate a huge amount of energy that would otherwise have to escape to space much more slowly. They are born out of energy in flow, and “evolve” so that the ones that move energy most efficiently survive and grow.

Once again, one has to bemoan the lack of serious math that has been done on the climate. This in some sense is understandable, as the math is insanely difficult even when it is limited to toy systems — simple iterated maps, simple ODE or PDE systems with simple boundary conditions. However, there are some principles to guide us. One is that in the case of self-organization in chaotic systems, the dynamical map itself has a structure of critical points and attractors. Once the system “discovers” a favorable attractor and diffuses into an orbit, it actually becomes rather immune to simple changes in the driving. Once a set of turbulent rolls is established, as it were, there is a barrier to be overcome before one can make the number of rolls change or fundamentally change their character — moderate changes in the thermal gradient just make the existing rolls roll faster or slower to maintain heat transport. However, in a sufficiently complex system there are usually neighboring attractors with some sort of barrier in between them, but this barrier is there only in an average sense. In many, many cases, the orbits of the system in phase space have a fractal, folded character where orbits from neighboring attractors can interpenetrate and overlap. If there is noise, there is a probability of switching attractors when one nears a non-equilibrium critical regime, so that the system can suddenly and dramatically change its character. Next, the attractors themselves are not really fixed. As one alters (parametrically for example) the forcing of the system or the boundary conditions or the degree of noise or… one expects the critical points and attractors themselves to move, to appear and disappear, to get pushed together or moved apart, to have the barriers between them rise or fall. Finally (as if this isn’t enough) the climate is not in any usual sense an iterated map. It is usually treated as one from the point of view of solving PDEs (which is usually done via an iterated map where the output of one time step is the input into the next with a fixed dynamics). This makes the solution a Markov Process — one that “forgets” its past history and evolves locally in time and space as an iterated map (usually with a transition “rule” with some randomness in it).

But the climate is almost certainly not Markovian, certainly not in practical terms. What it does today depends on the state today, to be sure, but because there are vast reservoirs where past dynamical evolution is “hidden” in precisely Prigogine’s self-organized structures, structures whose temporal coherence and behavior can only be meaningfully understood on the basis of their own physical description and not microscopically, it is completely, utterly senseless to try to advance a Markovian solution and expect it to actually work!

Two examples, and then I must clean my house and do other work. One is clearly the named structures themselves in the climate. The multidecadal oscillations have spatiotemporal persistence and organization with major spectral components out as far as sixty or seventy years (and may well have longer periods still to be discovered — we have crappy data and not much of it that extends into the increasingly distant past). Current models treat things like ENSO and the PDO and so on more like noise, and we see people constantly “removing the influence of ENSO” from a temperature record to try to reductively discern some underlying ENSO-less trend. But they aren’t noise. They are major features of the dynamics! They move huge amounts of energy around, and are key components of the efficiency of the open system as it transports incident solar energy to infinity, keeping a reservoir of it trapped within along the way. It is practically speaking impossible to integrate the PDEs of the climate models and reproduce any of the multidecadal behavior. Even if multidecadal structures emerge, they have the wrong shape and the wrong spectrum because the chaotic models have a completely different critical structure and attractors as they are iterated maps at the wrong resolution and with parameters that almost certainly move them into completely distinct operational regimes and quite different quasiparticle structures. This is instantly evident if one looks at the actual dynamical futures produced by the climate models. They have the wrong spectrum on pretty much all scales, fluctuating far more wildly than the actual climate does, with the wrong short time autocorrelation and spectral behavior (let alone the longer multidecadal behavior that we observe).

The second is me. I’m precisely a self-organized chaotic system. Here’s a metaphor. Climate models are performing the moral equivalent of trying to predict my behavior by simulating the flow of neural activity in my brain on a coarse-grained basis that chops my cortex up into (say) centimeter square chunks one layer thick and coming up with some sort of crude Markovian model. Since the modelers have no idea what I’m actually thinking, and cannot possibly actually measure the state of my brain outside of some even more crudely averaged surface electrical activity, they just roll dice to generate an initial state “like” what they think my initial state might be, and then trust their dynamics to eventually “forget” that initial state and move the model brain into what they imagine is an “ensemble” of my possible brain states so that after a few years, my behavior will no longer depend on the ignored details (you know, things like memories of my childhood or what I’ve learned in school). They run their model forward twenty years and announce to the world that unless I undergo electroshock therapy right now their models prove that I’m almost certainly destined to become an axe murderer or exhibit some other “extreme” behavior. Only if I am kept in a dark room, not overstimulated, and am fed regular doses of drugs that essentially destroy the resolution of my real brain until it approximates that of their model can they be certain that I won’t either bring about World Peace in one extreme or cause a Nuclear War in the other.

The problem is that this whole idea is just silly! Human behavior cannot be predicted by a microscopic physical model of the neurons at the quantum chemistry level! Humans are open non-Markovian information systems. We are strongly regulated by our past experience, our memory, as well as our instantaneous input, all folded through a noisy, defect-ridden, and unbelievably complex multilayer neural network that is chemically modulated by a few dozen things (hormones, bioavailable energy, diurnal phase, temperature, circulatory state, oxygenation…)

As a good friend of mine who was a World’s Greatest Expert (literally!) on complex systems used to say: “More is different”. Emergent self-organized behavior results in a cascade of structures. Microscopic physics starts with quarks and leptons and interaction particles/rules. The quarks organize into nucleons. The nucleons organize into nuclei. The electrons bond to the nuclei to form atoms. The physics and behavior of the nuclei are not easily understood in terms of bare quark dynamics! The physics and behavior of the atoms are not easily understood in terms of the bare quark plus lepton dynamics! The atoms interact and form molecules, more molecules, increasingly complex molecules. The molecules have behavior that is not easily understood in terms of the “bare” behavior of the isolated atoms that make them up. Some classes of molecular chemistry produce liquids, solids, gases, plasmas. Again, the behavior of these things is increasingly disconnected from the behavior of the specific molecules that make them up — new classes of universal behavior emerge at all steps, so that all fluids are alike in certain ways independent of the particular molecules that make them up, even as they inherit certain parametric behavior from the base molecules. Some molecules in some fluids become organic biomolecules, and there is suddenly a huge disconnect both from simple chemistry and from the several layers of underlying physics.

If more is different, how much is enough? There is a whole lot of more in the coupled Earth-Ocean-Atmosphere-Solar system. There is a whole lot less, heavily oversimplified and with the deliberate omission of the ill-understood quasiparticle structures that we can see dominating the weather and the climate, in climate models.

Could they work? Sure. But one really shouldn’t expect them to work; one should expect them to work no better than a simulated neural network “works” to simulate actual intelligence, which is to say, it can sometimes produce understandable behaviors “like” intelligence without ever properly resembling the intelligence of any intelligent thing and without the slightest ability to predict the behavior of an intelligent thing. The onus of proof is very much on the modelers that wish to assert that their models are useful for predicting long term climate, but this is a burden that so far they refuse to acknowledge, let alone accept! If they did, large numbers of climate models would have to be rejected because they do not work in the specific sense that they do not come particularly close to predicting the behavior of the actual climate from the instant they entered the regime where they were supposed to be predictive, instead of parametrically tuned and locked to match up well with a reference interval that just happened to be the one single stretch of 15-25 years where strong warming occurred in the last 85 years. There are so very, very many problems with this — training any model on a non-representative segment of the available data is obviously likely to lead to a poor model — but suffice it to say that so far, they aren’t working and nobody should be surprised.

rgb
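To see the kind of behavior Professor Brown describes in miniature, here is a minimal numerical sketch (a standard textbook exercise, nothing climate-specific) using the Lorenz 1963 system, itself a drastic truncation of the Rayleigh-Benard convection problem behind the “rolls” above. Two trajectories whose initial conditions differ by one part in a million diverge to entirely different states within a few dozen time units, while both remain confined to the same strange attractor:

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

t_eval = np.linspace(0.0, 40.0, 4001)
a = solve_ivp(lorenz, (0.0, 40.0), [1.0, 1.0, 1.0], t_eval=t_eval, rtol=1e-9)
b = solve_ivp(lorenz, (0.0, 40.0), [1.0, 1.0, 1.000001], t_eval=t_eval, rtol=1e-9)

# Separation between the two trajectories over time:
sep = np.linalg.norm(a.y - b.y, axis=0)
print(sep[0], sep[-1])  # starts at 1e-6, ends at the scale of the attractor
```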

In the sections below, as in previous posts, we will present you with the latest facts. The information will be presented in three sections and an appendix. The first section will show for how long there has been no warming on some data sets. At the moment, only the satellite data have flat periods of longer than a year. The second section will show for how long there has been no statistically significant warming on several data sets. The third section will show how 2015 so far compares with 2014 and the warmest years and months on record so far. For three of the data sets, 2014 also happens to be the warmest year. The appendix will illustrate sections 1 and 2 in a different way. Graphs and a table will be used to illustrate the data.

Section 1

This analysis uses the latest month for which data is available on WoodForTrees.com (WFT). All of the data on WFT is also available at the specific sources as outlined below. We start with the present date and go to the furthest month in the past where the slope is at least slightly negative on at least one calculation. So if the slope from September is 4 x 10^-4 but is -4 x 10^-4 from October, we give the time from October, so no one can accuse us of being less than honest if we say the slope is flat from a certain month.
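For readers who want to check these start dates themselves, here is a simplified sketch of the search (the anomaly series here is hypothetical noise; the real data come from the sources listed near the end of this post):

```python
import numpy as np

def earliest_flat_start(anoms):
    """Return the index of the furthest-back start month from which the
    least-squares slope to the end of the series is non-positive, or None.
    Periods under a year are ignored, as in the text."""
    t = np.arange(len(anoms))
    for start in range(len(anoms) - 12):
        slope = np.polyfit(t[start:], anoms[start:], 1)[0]
        if slope <= 0:
            return start
    return None

# Hypothetical example: 224 months of trendless, noisy anomalies
rng = np.random.default_rng(0)
print(earliest_flat_start(0.2 + 0.05 * rng.standard_normal(224)))
```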

1. For GISS, the slope is not flat for any period that is worth mentioning.

2. For Hadcrut4, the slope is not flat for any period that is worth mentioning.

3. For Hadsst3, the slope is not flat for any period that is worth mentioning.

4. For UAH, the slope is flat since May 1997 or 18 years and 5 months. (goes to September using version 6.0)

5. For RSS, the slope is flat since February 1997 or 18 years and 8 months. (goes to September)

The next graph shows just the lines to illustrate the above. Think of it as a sideways bar graph where the lengths of the lines indicate the relative times where the slope is 0. In addition, the upward sloping blue line at the top indicates that CO2 has steadily increased over this period.

Note that the UAH5.6 series from WFT needed a detrend applied so that it shows the zero slope of UAH6.0.

[Graph: WoodForTrees.org – Paul Clark]

When two quantities are plotted together as I have done, the left-hand scale only shows the temperature anomaly. The actual numbers are meaningless since the two slopes are essentially zero, and no numbers are given for CO2. Some have asked that the log of the CO2 concentration be plotted; however, WFT does not give this option. The upward-sloping CO2 line only shows that while CO2 has been going up over the last 18 years, the temperatures have been flat for varying periods on the two data sets.
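WFT may not offer a log(CO2) option, but the plot is easy to make locally. A minimal sketch; the file name and column layout are assumptions to be checked against whatever CO2 series you download (for example the Mauna Loa monthly means):

```python
import numpy as np
import matplotlib.pyplot as plt

# Assumed: a two-column text file of (decimal year, CO2 in ppm);
# adjust the path and column indices to match the actual download.
year, co2 = np.loadtxt("co2_monthly.txt", usecols=(0, 1), unpack=True)

plt.plot(year, np.log(co2 / co2[0]), label="ln(CO2 / CO2_start)")
plt.xlabel("Year")
plt.ylabel("ln of CO2 ratio")
plt.legend()
plt.show()
```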

Section 2

For this analysis, data was retrieved from Nick Stokes’ Trendviewer available on his website. This analysis indicates for how long there has not been statistically significant warming according to Nick’s criteria. Data go to their latest update for each set. In every case, note that the lower error bar is negative so a slope of 0 cannot be ruled out from the month indicated.

On several different data sets, there has been no statistically significant warming for between 11 and 22 years according to Nick’s criteria. Cl stands for the confidence limits, at the 95% level, on the trend in °C per century.
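Nick’s Trendviewer handles the autocorrelation of monthly data internally; as a rough local approximation only, the sketch below computes such an interval using the common AR(1) (Quenouille) adjustment to the effective sample size. It will not exactly reproduce his numbers:

```python
import numpy as np

def trend_ci(anoms):
    """OLS trend with an approximate 95% confidence interval, in degrees C
    per century, using an AR(1) adjustment for autocorrelated residuals."""
    n = len(anoms)
    t = np.arange(n)
    slope, intercept = np.polyfit(t, anoms, 1)
    resid = anoms - (slope * t + intercept)
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]  # lag-1 autocorrelation
    n_eff = n * (1.0 - r1) / (1.0 + r1)            # Quenouille adjustment
    var_slope = np.sum(resid ** 2) / (n_eff - 2.0) / np.sum((t - t.mean()) ** 2)
    half = 1.96 * np.sqrt(var_slope)
    # Convert from per-month to per-century (1200 months):
    return (slope - half) * 1200.0, (slope + half) * 1200.0
```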

The details for several sets are below.

For UAH6.0: Since December 1992: Cl from -0.009 to 1.688

This is 22 years and 10 months.

For RSS: Since March 1993: Cl from -0.014 to 1.597

This is 22 years and 7 months.

For Hadcrut4.4: Since January 2001: Cl from -0.048 to 1.334

This is 14 years and 9 months.

For Hadsst3: Since July 1995: Cl from -0.002 to 1.949

This is 20 years and 3 months.

For GISS: Since September 2004: Cl from -0.033 to 2.020

This is 11 years and 1 month.

Section 3

This section shows data about 2015 and other information in the form of a table. The five data sources appear along the top, and the source row is repeated at intervals so the headers remain visible at all times. The sources are UAH, RSS, Hadcrut4, Hadsst3, and GISS.

Down the columns are the following rows:

1. 14ra: This is the final ranking for 2014 on each data set.

2. 14a: Here I give the average anomaly for 2014.

3. year: This indicates the warmest year on record so far for that particular data set. Note that the satellite data sets have 1998 as the warmest year and the others have 2014 as the warmest year.

4. ano: This is the average of the monthly anomalies of the warmest year just above.

5. mon: This is the month where that particular data set showed the highest anomaly. The months are identified by the first three letters of the month and the last two numbers of the year.

6. ano: This is the anomaly of the month just above.

7. y/m: This is the longest period of time where the slope is not positive given in years/months. So 16/2 means that for 16 years and 2 months the slope is essentially 0. Periods of under a year are not counted and are shown as “0”.

8. sig: This is the first month for which warming is not statistically significant according to Nick’s criteria. The first three letters of the month are followed by the last two numbers of the year.

9. sy/m: This is the years and months for row 8. Depending on when the update was last done, the months may be off by one month.

10. Jan: This is the January 2015 anomaly for that particular data set.

11. Feb: This is the February 2015 anomaly for that particular data set, etc.

19. ave: This is the average anomaly of all months to date taken by adding all numbers and dividing by the number of months.

20. rnk: This is the rank that each particular data set would have for 2015, without regard to error bars and assuming no changes. Think of it as an update 45 minutes into a game. (A minimal sketch of how rows 19 and 20 are computed follows the table.)

Source   UAH    RSS    Had4   Sst3   GISS
1.14ra   5th    6th    1st    1st    1st
2.14a    0.188  0.255  0.564  0.479  0.75
3.year   1998   1998   2014   2014   2014
4.ano    0.482  0.55   0.564  0.479  0.75
5.mon    Apr98  Apr98  Jan07  Aug14  Jan07
6.ano    0.742  0.857  0.832  0.644  0.97
7.y/m    18/5   18/8   0      0      0
8.sig    Dec92  Mar93  Jan01  Jul95  Sep04
9.sy/m   22/10  22/7   14/9   20/3   11/1
Source   UAH    RSS    Had4   Sst3   GISS
10.Jan   0.276  0.365  0.688  0.440  0.82
11.Feb   0.174  0.326  0.660  0.406  0.88
12.Mar   0.164  0.255  0.681  0.424  0.90
13.Apr   0.086  0.172  0.656  0.557  0.74
14.May   0.284  0.309  0.696  0.593  0.79
15.Jun   0.332  0.391  0.730  0.575  0.77
16.Jul   0.182  0.288  0.696  0.637  0.73
17.Aug   0.275  0.389  0.740  0.665  0.81
18.Sep   0.253  0.382  0.786  0.729  0.81
Source   UAH    RSS    Had4   Sst3   GISS
19.ave   0.225  0.320  0.702  0.558  0.81
20.rnk   3rd    4th    1st    1st    1st
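Rows 19 and 20 are straightforward to reproduce. A minimal sketch for RSS, using the monthly values from the table; apart from the 0.55 for 1998 quoted in this post, the prior-year annual anomalies below are illustrative placeholders only:

```python
# Row 19: year-to-date average of the monthly 2015 anomalies for RSS
months_2015 = [0.365, 0.326, 0.255, 0.172, 0.309, 0.391, 0.288, 0.389, 0.382]
ave = sum(months_2015) / len(months_2015)
print(round(ave, 3))  # 0.320, matching row 19

# Row 20: provisional rank against past annual means (placeholder values,
# except 0.55 for 1998, which is quoted in this post)
previous_years = [0.55, 0.48, 0.33]
rank = 1 + sum(1 for v in previous_years if v > ave)
print(rank)  # 4, i.e. 4th, matching row 20
```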

If you wish to verify all of the latest anomalies, go to the following:

For UAH, version 6.0beta3 was used. Note that WFT uses version 5.6. So to verify the length of the pause on version 6.0, you need to use Nick’s program.

http://vortex.nsstc.uah.edu/data/msu/v6.0beta/tlt/tltglhmam_6.0beta3.txt

For RSS, see: ftp://ftp.ssmi.com/msu/monthly_time_series/rss_monthly_msu_amsu_channel_tlt_anomalies_land_and_ocean_v03_3.txt

For Hadcrut4, see: http://www.metoffice.gov.uk/hadobs/hadcrut4/data/current/time_series/HadCRUT.4.4.0.0.monthly_ns_avg.txt

For Hadsst3, see: http://www.cru.uea.ac.uk/cru/data/temperature/HadSST3-gl.dat

For GISS, see: http://data.giss.nasa.gov/gistemp/tabledata_v3/GLB.Ts+dSST.txt

To see all points since January 2015 in the form of a graph, see the WFT graph below. Note that UAH version 5.6 is shown. WFT does not show version 6.0 yet. Also note that Hadcrut4.3 is shown and not Hadcrut4.4, which is why the last few months are missing for Hadcrut.

[Graph: WoodForTrees.org – Paul Clark]

As you can see, all lines have been offset so they all start at the same place in January 2015. This makes it easy to compare January 2015 with the latest anomaly.

Appendix

In this part, we are summarizing data for each set separately.

RSS

The slope is flat since February 1997 or 18 years, 8 months. (goes to September)

For RSS: There is no statistically significant warming since March 1993: Cl from -0.014 to 1.597.

The RSS average anomaly so far for 2015 is 0.320. This would tie it for 4th place. 1998 was the warmest at 0.55. The highest ever monthly anomaly was in April of 1998 when it reached 0.857. The anomaly in 2014 was 0.255 and it was ranked 6th.

UAH6.0beta3

The slope is flat since May 1997 or 18 years and 5 months. (goes to September using version 6.0beta3)

For UAH: There is no statistically significant warming since December 1992: Cl from -0.009 to 1.688. (This is using version 6.0 according to Nick’s program.)

The UAH average anomaly so far for 2015 is 0.225. This would rank it as 3rd place. 1998 was the warmest at 0.483. The highest ever monthly anomaly was in April of 1998 when it reached 0.742. The anomaly in 2014 was 0.188 and it was ranked 5th.

Hadcrut4.4

The slope is not flat for any period that is worth mentioning.

For Hadcrut4: There is no statistically significant warming since January 2001: Cl from -0.048 to 1.334.

The Hadcrut4 average anomaly so far for 2015 is 0.702. This would set a new record if it stayed this way. The highest ever monthly anomaly was in January of 2007 when it reached 0.832. The anomaly in 2014 was 0.564 and this set a new record.

Hadsst3

For Hadsst3, the slope is not flat for any period that is worth mentioning. For Hadsst3: There is no statistically significant warming since July 1995: Cl from -0.002 to 1.949.

The Hadsst3 average anomaly so far for 2015 is 0.558. This would set a new record if it stayed this way. The highest monthly anomaly prior to 2015 was in August of 2014, when it reached 0.644. The anomaly in 2014 was 0.479 and this set a new record. The September 2015 anomaly of 0.729 sets a new monthly record.

GISS

The slope is not flat for any period that is worth mentioning.

For GISS: There is no statistically significant warming since September 2004: Cl from -0.033 to 2.020.

The GISS average anomaly so far for 2015 is 0.81. This would set a new record if it stayed this way. The highest ever monthly anomaly was in January of 2007 when it reached 0.97. The anomaly in 2014 was 0.75 and it set a new record.

Conclusion

After reading this article, do you think climate science is settled? If not, do you think it will be settled in your lifetime?


537 Comments
Björn from sweden
November 6, 2015 6:08 am

“Water Vapour” on the big control knob on the climate apparatus should not be written horizontally but aligned vertically with the indicator arrow, for greater visual cue-effect.

ferd berple
Reply to  Björn from sweden
November 6, 2015 10:05 am

4. Solar Energy
=========
the sun emits both energy and particles.

emsnews
Reply to  ferd berple
November 6, 2015 11:29 am

I see people disregarding the huge, huge effect our local star has on our existence. It looks rather small in our blue sky, a little round ball.
It is immense compared to our earth and utterly dwarfs Jupiter, the biggest ball circling this star. This star spits out immense amounts of this thing we call ‘energy’ in the form of various degrees of light, etc. This, in turn, heats up the various little tiny balls we call ‘planets’.
This vortex of energy is very old, and it seems to me that over the last two million years it has not been very consistent in its energy output. Small changes in energy translate into cold climates for all those tiny balls of various materials that are trapped in orbit of this particular yellow star.
Someday, it will cease to exist, most likely will explode. Nothing is forever. I was raised by grandparents and parents who were astronomers and we learned how to view the universe from them. You have to be strong willed to be optimistic knowing all this information that is frankly, scary for us creatures trapped on this little rock in outer space.

Stephen Richards
Reply to  ferd berple
November 6, 2015 1:32 pm

Or particles with energy?

Evan Jones
Editor
Reply to  ferd berple
November 6, 2015 4:17 pm

Bear in mind that while the sun produces virtually all the heat, it is only changes in the sun (not just TSI, of course) that are (or may be) relevant.

george e. smith
Reply to  Björn from sweden
November 6, 2015 12:32 pm

Well, so it is settled, from your list of four “must haves” for climate science to be settled.
It is clear that you cannot satisfy any one of those four requirements; ergo, climate science is not settled.
George

Monckton of Brenchley
Reply to  Björn from sweden
November 7, 2015 1:00 am

One should add the following to the list in the head posting:
Thermodynamics. The atmosphere where excitation/de-excitation collisions occur is bounded by two heat sinks, both substantial, one near-infinite. Global mean surface temperature has varied by no more than 3.5 degrees either side of the long-run median for 810,000 years – about the same tolerance as a home thermostat. We cannot much perturb that formidable thermostasis.
Quantum mechanics. Models use Lorentzian or Voigt equations to determine the line-shapes of the far wings of the principal absorption spectra of CO2, where most of the forcing occurs, but both equations, which were derived for purposes other than global warming research, assume for convenience that collisions are instantaneous. However, they occupy a few picoseconds, and that is enough to require a reduction of 40% in the current central estimate of the CO2 forcing, and hence of climate sensitivity to a CO2 doubling. This one is right up Professor Brown’s street.
Electronics. Two-thirds of climate sensitivity, according to the Party Line, comes from net-positive or amplifying temperature feedbacks, not one of which can be directly quantified by measurement. The Bode open-loop or system-gain equation that models the mutual amplification of feedbacks is, however, taken from electronics, where it works quite well, but is inapplicable in the climate, particularly where the IPCC’s strongly net-positive feedbacks are assumed. The equation mandates that, at a closed-loop gain >1, feedbacks should drive the output down rather than up. Sure enough, in an electronic circuit the output voltage reverses its sign at a loop gain >1, but in the climate the output temperature does not reverse its direction, not least because, unlike the output voltage in a circuit, the output temperature in the climate is the instrument of the climate system’s self-equilibration following the perturbation caused by a forcing. The Bode equation is thus incapable of modelling dynamical systems such as the climate, especially at high loop gains.
Cybernetics. The architecture of the general-circulation models exhibits a number of defects propagated throughout the models by intercomparison. Not the least of these, as Dr David Evans has pointed out in his fascinating series at Jo Nova’s site, is the models’ inbuilt but actually erroneous assumption that the feedbacks to a greenhouse-gas forcing will be identical to those from a solar or other exogenous forcing, when in reality most feedbacks to a solar forcing will not respond to endogenous perturbations such as greenhouse-gas forcings. Another such defect is the misuse of the Bode equation. Another is the failure to model synoptic variability.
Chaos. In the absence of highly-resolved data on initial conditions and of high understanding of the evolutionary processes of the climate object, reliable prediction for more than a week ahead is not available by any method. So much for “settled science”.
Logic. The fundamental postulate of logic is that, since objective truth exists, a proposition and its converse cannot simultaneously obtain. Two important corollaries: first, any true proposition is consistent with every true proposition and inconsistent with every false proposition; secondly, if the converse of a proposition be demonstrated to be false, the original proposition is necessarily true. Most of the claims of the believers may be definitively disposed of either through demonstration by contradiction, which can often overcome the difficulties inherent in demonstrating that a hypothesis is true, or by applying the dozen fundamental fallacies first described by Aristotle in the Sophistical Refutations 2350 years ago.
Economics. It is surprisingly easy to demonstrate definitively that mitigation today is orders of magnitude costlier than adaptation the day after tomorrow, even if, per impossibile, the wild exaggerations of the IPCC are accepted as true ad argumentum.
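For readers unfamiliar with the Bode relation invoked under “Electronics” above, the arithmetic is compact: with open-loop response lambda0 and feedback fraction f, the closed-loop response is lambda0 / (1 - f*lambda0). A minimal numerical sketch of the blow-up and sign change as the loop gain crosses 1 (purely illustrative, not taken from any climate paper):

```python
# Closed-loop response = lambda0 / (1 - g), where g = f * lambda0 is the
# loop gain; note the singularity and sign change at g = 1.
lambda0 = 1.0
for g in (0.0, 0.5, 0.9, 0.99, 1.01, 1.5):
    print(g, lambda0 / (1.0 - g))
# As g approaches 1 from below, the response grows without bound; for g > 1
# the formula returns a negative response, which in an electronic amplifier
# corresponds to the output reversing sign, as argued above.
```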

Werner Brozek
Reply to  Monckton of Brenchley
November 7, 2015 5:54 am

Thank you very much! Since it may be missed with the new format at WUWT, I may make this part of my next post.

Dahlquist
Reply to  Monckton of Brenchley
November 7, 2015 9:20 am

Werner
You list #8, Atmospheric Composition. What about the atmosphere itself? It has mass, layers (strata), etc., and I did not see much about these mentioned, although I only skimmed the page. It would be good to give the atmosphere itself a number in this list. Also, under #8 Atmospheric Composition I did not see any references to strata, mass, temperature decrease with elevation, boiling points at various pressures, etc.
Perhaps I am off target here, but thought I’d throw in my 2 cents.
Thanks

James Hein
Reply to  Monckton of Brenchley
November 8, 2015 3:55 pm

On “Electronics” more recent information strongly indicates the net feedback is under 1.

george e. smith
Reply to  Monckton of Brenchley
November 9, 2015 12:14 pm

Plus Lord Monckton, in an electronic feedback amplifier, the feedback (sample of the output) is fed back to the INPUT terminal not to some other node in the output.
Since TOA TSI is the input terminal, and an attenuated version of this is what gets stored in the big ocean heat storage depot, any feedbacks should address the changes in attenuation of TSI before it gets to the surface as solar spectrum radiant energy.
Losses in the atmosphere to Rayleigh and Mie scattering plus absorption by atmospheric gases including GHGs, such as H2O and O3 (CO2 as well), convert Solar Spectrum beam energy to isotropic LWIR radiant energy or isotropic heat energy, which preferentially conducts or convects upwards, rather than downwards. (That messy second law thing.)
So clouds is where the action is.
g

Editor
November 6, 2015 6:15 am

I think the use of the word “knobs” is a reference to both the controls on the machine and the “scientists” using them. In the UK, “What a knob” is a derogatory term, likening someone (usually male) to a part of the male anatomy!
Another good one Josh!!

Reply to  andrewmharding
November 6, 2015 7:07 am

“control knob” is a reference to the now famous paper by Andrew Lacis

Werner Brozek
Reply to  Chaam Jamal
November 6, 2015 7:17 am

I thought it was:
Richard Alley: “The Biggest Control Knob: Carbon Dioxide in Earth’s Climate History”

Reply to  Chaam Jamal
November 6, 2015 9:21 am

Chaam and Werner. It might have been a Freudian slip on my part. If Josh could clarify this point, I am sure we will be grateful!

November 6, 2015 6:16 am

Can’t see it being settled for many years yet as the political juggernaut will take a lot of turning around. A new RSS/UAH high from an El Nino will just drive that further away, even if 18 months later the temperature crashes back down again as it did in 1999.
Eventually the politicos will have other fish to fry, and all this will be forgotten as we panic about the next threat to all humanity.

Matt G
Reply to  millennia97
November 6, 2015 6:41 am

Beating the RSS or UAH yearly record by, for example, 0.1 °C every 15+ years only falsifies CAGW. For the temperature trend to be regarded as matching the warmist agenda, yearly records need to be broken every few years (3-4) to be anywhere near close. Breaking yearly records by 0.1 °C to 0.2 °C every 3-4 years would result in century warming between 3 °C and 6 °C. Clearly this has not been happening at all, so CAGW has already been falsified.

Reply to  Matt G
November 6, 2015 6:45 am

Good scientific arguments are of no use when fighting political protectionism. Remember, they actually change the data to suit their theories!

Evan Jones
Editor
Reply to  Matt G
November 6, 2015 4:28 pm

But even their changed data does not indicate catastrophe or even alarm, even using the Karl pausebuster metrics. And, anyway, certain data with known biases (TOBs, moves, equipment, what have you) requires changes or must be dropped entirely.
With the data/metadata-rich USHCN, we can drop them and still maintain adequate coverage. But the GHCN, not so much. (That is the issue Mosh has to contend with. It’s where he went awry, I think, but it’s not his fault. We just happen to have a better dataset.)
My beef is not that these dudes do changes, but that the changes are wrong. We’ll be suggesting a few of our own, down the line, I think.

michael hart
November 6, 2015 6:32 am

Well, yes. With a good climate model, you can jerk around with the control knobs to get the desired result.

Harry Passfield
November 6, 2015 6:44 am

We must know all variables that can affect climate.

Ahh, the known knowns. And then we mustn’t overlook the known unknowns and the unknown knowns, not to mention the unknown unknowns. (h/t Donald Rumsfeld – who coulda been a climate scientist)

rogerknights
Reply to  Harry Passfield
November 6, 2015 2:42 pm

And the not-seen knowns, aka Pink Flamingos.

Greg Cavanagh
Reply to  rogerknights
November 6, 2015 7:36 pm

And the impossible unknowns, aka Black Swans.

Kermit
November 6, 2015 6:49 am

THANK YOU for taking the time and expending the effort to write this. My background is physics & math – a long time ago now – but I have been using computer models for the last 25 years or so to quantify things that affect selected commodity markets. I have found that even among scientists, there is very little knowledge about using computer models to understand or make predictions in complex systems. There is very little understanding of the limitations of these models, even among scientists.
Again, I thank you for writing this. It is something that I look forward to reading several times.

rgbatduke
Reply to  Kermit
November 6, 2015 8:59 am

Yeah, I’ve done two companies (sadly, both failed at this point) doing predictive modeling using e.g. advanced neural networks that I wrote using a bunch of tricks stolen from physics/stat mech. The nets actually worked phenomenally well, but it turns out that founding a business and succeeding is really difficult (something that is reflected in the 10% success rate). One difficulty is that in business, the people you are selling to almost never understand statistics beyond the one course they might have taken and gotten a C in 30 years earlier in college. To them predictive modeling is black magic, and they manage to both doubt that it will work and expect it to do the impossible if it does. It requires a superhuman sales force to be able to explain both the marginal advantages and the limitations.
I advise students who are going on to careers in science and medicine, and know pretty accurately what statistics course(s) (if any) they are likely to end up having taken even at an elite institution like Duke. Sadly, for the most part this is that very same one course, which isn’t even a course at the “serious math” level, it is “practical statistics” and goes over the usual stuff about sampling, the central limit theorem, the error function, and stuff like t. The real goal of the class is to teach students to be able to critically assess statistical claims in papers. As is covered in Statistics Done Wrong: A Woefully Complete Guide:
http://www.statisticsdonewrong.com/
this minimal training fails on both sides of the fence — it does serve to make e.g. physicians a lot more skeptical of marginal results presented in medical journals, but equally obviously, not skeptical enough. But the bigger failure is in the lack of adequate understanding of statistics on the research side. Even the gold standard, a double blind placebo controlled test of some hypothesis, is often plagued by the simplest of problems, such as the fact that the population being studied is itself far from a random selection from the actual population, the fact that the sample size is typically pitifully small OR it is huge but many things are being studied simultaneously (and without Bonferroni correction) in a huge multivariate data dredge that concludes things like “Green Jelly Beans cause Acne”:
https://xkcd.com/882/
Occasionally I see a paper in climate science that does decent statistics, in particular one that openly acknowledges the enormous errors that are more typically minimized and/or openly misrepresented, especially in any material intended for “public” consumption. To be blunt, climate science in general makes claims of “confidence” across the board that cannot possibly be justified using axiomatic statistics. The worst instances of this abuse of terminology with a precise meaning in actual statistics in a context where the reader is deliberately invited to believe that that is the sense being used are in (for example) the summary for policy makers (SPM) of the various ARs from the IPCC, where the abuses are so egregious they have inspired a number of climate scientists to withdraw altogether from the process.
It is, for example, amusing to examine the changes and differences in the temperature anomalies version to version and between two different products that are supposedly measuring the same thing. Consider this plot:
http://www.woodfortrees.org/plot/hadcrut4gl/from:2010/to:2015/plot/gistemp/from:2010/to:2015
This is gistemp and hadcrut4, both global temperature anomalies with a supposedly common base, plotted side by side over just the last five years. As you can see, GISS considers the anomaly to be 0.2C higher than the CRU. If one downloads the HadCRUT4 data and tallies all sources of acknowledged error, the error year to year is claimed to be 0.1 C. This error cannot be a standard deviation, as it sums three or four distinct estimated contributions to a total error, so we have to assume that it is supposed to be a confidence interval, that it is 95% certain that the actual anomaly is within 0.1 C either way of the number they publish. However, let’s be generous and assume that it really is supposed to be a standard deviation or normal equivalent.
Either way it is amusing to note that a very simple way to interpret this graph is that it is 95% or better certain that GISStemp LOTI is wrong according to the CRU! If one plots BEST it is a lot more than 95% certain that BEST is wrong — the two differ by as much as a degree C. If one plots the BEST 95% confidence bound (which actually is available on WFT, amazingly) it is 99.99% certain that HadCRUT4 is badly, badly wrong, as is GISStemp LOTI. HadCRUT3 isn’t quite 95% certain to be wrong according to HadCRUT4, but it is close. One wonders what the error claims for HadCRUT3 were?
To paraphrase Wesley in The Princess Bride, “I do not think this `confidence interval’ means what you think it means…”
As I spend some tiny fraction of my tiny amount of free time (time when I’m not teaching, advising, or working on something else) on learning nonlinear dynamics and some of the “universal” properties of chaotic systems, it is becoming increasingly clear with working numerical examples that I’ve built on top of e.g. octave that the arithmetical average of the one-dimensional projection of a strange attractor is not a good predictor for the state of a chaotic system. Nor is the distribution of the projection of the specific timestep values anything vaguely resembling normal, not for most of the “interesting” systems that are still far less complex than any possible representation of the climate. Furthermore, in direct contradiction to the assertions of Nick Stokes that the climate models are probably as reliable as ordinary CFD codes 30 to 100 years out, the demonstration that chaos depends sensitively on things like integration stepsize and granularity is a textbook example in Sprott’s lovely book, one that I implemented (just for grins) in octave. To put it bluntly, even stable non-chaotic sets of ODEs can become chaotic if integrated with the right (wrong) stepsize, and changing the stepsize can “suddenly” shift the entire character of the solution to patterns that bear absolutely no resemblance to the “correct” solution obtained with a small and ideally adaptive stepsize. This is true for really boring sets of 2-3 coupled ODEs — forget about solutions to nonlinear coupled Navier-Stokes equations solved in an irregularly driven non-inertial reference frame with a highly complex surface structure on an integration granularity 30 orders of magnitude larger than the Kolmogorov scale for the problem.
I can only reiterate my conclusion quoted by Werner above. Could climate models work? Sure. They do work — for about a week out from a good initialization, where we call them weather models. But one really shouldn’t expect them to work out to 30+ years. We shouldn’t really expect them to work out to ten-plus years. Or five. Or even just one. After all, just one year out the climate depends strongly on things like just what is happening in a small patch of the Pacific Ocean where the named dissipation pattern called “ENSO” holds sway, and our ability to predict that one lousy year out just plain sucks. Will the current ENSO condition persist six more months? Maybe. Or maybe it will just blow apart. Or maybe it will get even stronger. Or perhaps it will remain about the same. We would be lucky to make a prediction that was right a bit over half of the time, given nothing but the conditions right now, and none of the decent predictions would be based on microdynamical simulations, they would be based on things like comparing what we see right now to what we’ve seen at similar points in ENSO episodes past, a completely different kind of modeling and prediction.
Then there is the enormously sketchy process of running a climate model (say) 200 times with small perturbations of initial conditions and parameters to get 200 enormously different possible future climate trajectories, some of which cool the planet, some of which warm the planet, all of them suffering from the stepsize problem indicated above and the fact that they cannot even resolve most of the short-time dissipative structures in the atmosphere or maintain detailed balance as they integrate. These trajectories are then averaged, and the average is claimed to be “the prediction” of the model, with the width of the distribution of trajectories used as an utterly (theoretically) unjustifiable estimate of its probable error. Then the ensemble of these average trajectories across all the different, but not independent, climate models is itself averaged, and its spread is used as an even less justifiable assertion of “confidence bounds”.
Nobody rational should expect this to work at all. The average of trajectories produced by a single climate model is going to be just as wrong as the climate model is wrong, and the usual order of science is to first propose a model, and then to test its predictive powers to see if the climate model is wrong. That is, any good model should be subjected to a hypothesis test as far as its ability to predict actual future climates is concerned, and to the extent that it does poorly, it should be at the very least viewed skeptically as a poor predictor.
But how, precisely, are we supposed to interpret the average of many average trajectories produced by many untested climate models, especially when that average is protected from rejection by actual comparison of its predictions to the real climate? The average of many broken models, especially broken models with an obvious bias relative to reality, is a broken model with an obvious bias relative to reality.
Or not, of course. There is a chance that the null hypothesis of the hypothesis test that isn’t being performed, “This is a perfect climate model” is correct, and that the climate we are experiencing in reality is merely unlikely given the inputs, and over time any given model will end up being correct as the system regresses to its true/more probable behavior. However, I repeat: the onus of proof is very much on the modelers that wish to assert that their models are useful for predicting long term climate, but this is a burden that so far they refuse to acknowledge, let alone accept.
I mean this literally. It’s almost a quote from Chapter 9 in AR5. They do, and present, these averages knowing that they haven’t been subjected to a hypothesis test, knowing that the multimodel mean most often presented as the claimed prediction of CMIP5 “warming” contains the results of many models that would be summarily rejected if they were subjected to a hypothesis test, and that aren’t remotely independent and identically distributed samples drawn from some sort of distribution of models.
Bad science, bad statistics. It makes me sad.
rgb
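Professor Brown’s stepsize remark is a standard textbook demonstration and easy to reproduce. Explicit Euler applied to the logistic growth equation dx/dt = r·x·(1 − x), whose exact solution simply relaxes to the fixed point x = 1, is algebraically a logistic map with parameter 1 + h·r, so the scheme turns chaotic once the step h is large enough. A minimal sketch:

```python
import numpy as np

def euler_logistic(h, r=1.0, x0=0.5, n=2000, tail=8):
    """Explicit Euler on dx/dt = r*x*(1-x); returns the last few iterates."""
    xs = [x0]
    for _ in range(n):
        xs.append(xs[-1] + h * r * xs[-1] * (1.0 - xs[-1]))
    return np.round(xs[-tail:], 4)

print(euler_logistic(h=0.1))  # small step: settles to the true answer, 1.0
print(euler_logistic(h=2.7))  # large step: equivalent map parameter
                              # 1 + h*r = 3.7 is chaotic; it never settles
```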

Werner Brozek
Reply to  rgbatduke
November 6, 2015 9:39 am

This is gistemp and hadcrut4, both global temperature anomalies with a supposedly common base

Thank you for this reply! However the base periods are actually different. GISS uses 1951 to 1980 and Hadcrut4 uses 1961 to 1990. So GISS would always read higher. Nevertheless, they can hugely vary month by month as the top graphic on this earlier post clearly shows:
http://wattsupwiththat.com/2013/12/22/hadcrut4-is-from-venus-giss-is-from-mars-now-includes-november-data/
So as you say, something seems to be very wrong with their error bars.

CC Reader
Reply to  rgbatduke
November 6, 2015 10:34 am

Dr. David Evans at science speak blog has a new model and concludes the following:
New Science 18: Finally climate sensitivity calculated at just one tenth of official estimates. Hence we conclude that:
The ECS might be almost zero, is likely less than 0.25 °C, and most likely less than 0.5 °C.
The fraction of global warming caused by CO2, μ, is likely less than 20%.
The CO2 sensitivity, λC, is likely less than 0.15 °C W−1 m2.
Given a descending WVEL, it is difficult to construct a scenario consistent with the observed data in which the influence of CO2 is greater than this.
DEFINITION: The equilibrium climate sensitivity (ECS) is the surface warming ΔTS when the CO2 concentration doubles and the other drivers are unchanged. Note that the effect of CO2 is logarithmic, so each doubling or fraction thereof has the same effect on surface warming.

emsnews
Reply to  rgbatduke
November 6, 2015 11:45 am

And then there is the issue, why did the most recent Little Ice Age happen? Not to mention, the real nasty Ice Ages.

Reply to  rgbatduke
November 6, 2015 12:14 pm

Dr. Brown,
Indeed – among the scientific and statistical mistakes you note above, there is also the mistake where massive computational systems are assumed to have skill.
I personally suspect this is a holdover from the hand calculator. Hand calculators are always right, therefore a really, really, really big hand calculator (i.e. a super computer) must be right even more!

Reply to  rgbatduke
November 6, 2015 1:14 pm

Robert Brown writes: “To paraphrase Wesley in The Princess Bride…” I hate to be one to cast doubt on an otherwise brilliant criticism of climate modeling, but feel obligated to point out that you’ve in fact paraphrased Inigo Montoya with this attribution rather than Wesley.
You know, criticizing climate modelers, especially those participating in the “Grand Average of Nonsense” party the IPCC is promoting with their Three Letter Acronym-5 program, is a lot like shooting ducks in a barrel. It’s not as if any one of the models has ever been remotely close to predictive; still the IPCC seems to think tossing them all in a bag and shaking them up will magically cause truth to appear.
It reminds me very much of the folks who have no idea at all about whatever “system” they’re trying to model, so they go out and collect whatever raw data might be available cheap, toss it all in a bin and shove the lot of it through a stepwise regression with the hope they’ll get lucky. It’s really pitiful.
That aside, your comment that the test of a hypothesis rests in the performance of the models based on it isn’t lost on me. I’ve used it so many times even I’m getting tired of hearing it, but it just doesn’t work. You’d think something that patently obvious and so broadly accepted within scientific communities of every discipline would be cause enough to show these people the door, but it’s not. Year after year they receive research funds that could be spent on projects that may not have immediate social value, but that at least haven’t been proven to be complete junk beyond any shadow of doubt for nearly half a century!
Personally, I despair. I’m sorry to admit to people who ask that I was once a scientist who worked for NASA and NOAA. That my specialty was statistics and design of experiments. I used to be proud of what I did, but what these people have done has destroyed the reputation of thousands of legitimate scientists. I have no idea what to do about it other than tell people I’m retired and now I restore cars for fun. I just don’t discuss my past anymore.

Gloateus Maximus
Reply to  rgbatduke
November 6, 2015 1:29 pm

emsnews
November 6, 2015 at 11:45 am
The Little Ice Age was the latest in a series of periodic, centennial-scale, cold spells, alternating with warmer cycles of similar intervals, characteristic of the Holocene and other interglacials.
The long-term trend for at least the past 3000 years, since the Minoan Warm Period, has been colder. That is, the peaks of warm periods are getting cooler and the depths of cool periods colder. The trend might date back to the end of the Holocene Climatic Optimum, c. 5000 years ago.
Peak heat of the Minoan Warm Period was higher than of the Roman WP, which was in turn toastier than the Medieval WP, which so far remains balmier than the current Modern WP. The intervening cold periods also appear to have been progressively chillier, ie the Greek Dark Ages CP less frosty than the Dark Ages, which in turn was probably less icy than the LIA.
They’re natural Bond cycles within interglacials, similar to but of less magnitude than D/O cycles within glacial phases. The onset of these real, big ice ages seems linked to earth’s orbital and rotational cycles.

george e. smith
Reply to  rgbatduke
November 6, 2015 3:20 pm

“””””…..
Reader
November 6, 2015 at 10:34 am
Dr. David Evans at science speak blog has a new model and concludes the following: …..””””
When you say ECS is:
“”.. DEFINITION: The equilibrium climate sensitivity (ECS) is the surface warming ΔTS when the CO2 concentration doubles and the other drivers are unchanged. Note that the effect of CO2 is logarithmic, so each doubling or fraction thereof has the same effect on surface warming. ..””
So this is true for CO2 going from 400 ppmm to 800 ppmm, or from 280 ppmm to 560 ppmm, or from 1 ppmm to 2 ppmm ; since that is exactly what logarithmic means.
So far since 1957/58 IGY, when Mauna Loa Data started up, we have gone from 315 ppmm to 400 ppmm or thereabouts, which is about 2^0.34, or about 1/3rd of a doubling.
So far the corresponding Temperature increase is not distinguishable from what a linear trend would imply.
I dare say a Taylor series can model the relationship from 315 to 400 ppm at least as well as the assumption of logarithmeticity, for which there is no foundation either in experiment (science) or theory (maths).
I dare say the experimental data can be fitted to the form:
y = exp(-1/x^2) just as well; and with x,y being CO2 and Temperature IN EITHER ORDER.
Just repeating that the relationship is logarithmic doesn’t make it so. A logarithm is a very specific mathematical function; not just some arbitrary non linearity.
g
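George’s one-third-of-a-doubling point is easy to check numerically: over 315 to 400 ppm, a least-squares straight line in concentration reproduces the logarithmic curve to within a couple of percent of its total range, far inside the scatter of the temperature data. A quick sketch:

```python
import numpy as np

c = np.linspace(315.0, 400.0, 500)  # ppm, roughly the Mauna Loa era range
log_curve = np.log(c / c[0])        # the logarithmic shape in question

# Best least-squares straight line in concentration over the same interval:
linear_fit = np.polyval(np.polyfit(c, log_curve, 1), c)

max_err = np.max(np.abs(log_curve - linear_fit))
print(max_err / (log_curve[-1] - log_curve[0]))
# A couple of percent of the total range: over this interval a logarithm
# and a straight line cannot be told apart in noisy data.
```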

Werner Brozek
Reply to  rgbatduke
November 6, 2015 3:26 pm

Just repeating that the relationship is logarithmic doesn’t make it so.

This is very true! As a matter of fact, for the last 18 years and 9 months on RSS, the effect seems to be zero.

Auto
Reply to  rgbatduke
November 6, 2015 3:54 pm

rgb
Thanks – and plus some hundreds.
I have always believed that – since my time as weather forecaster for Mrs Elam’s class in – probably – 1962, when Upstill Nancekevill, Dixon and me more-or-less said ‘Tomorrow will be the same as today’.
And we edged the Met Office (then).
Today – living in the bit of the UK south of London, I feel 24 hour forecasts are pretty good – better than ours were in ’62, for sure.
[Hah – school kids with a chalk board. Ha!]
48/72 hours – take with a pinch of salt.
Five days – well, b*’^^ing optimistic.
Weeks/months/beyond – I trust they’re entered for Nobel Prizes for Literature [Fiction]!
Did the Mann get one of these?
Only asking.
Honest.
Ten years ahead – sorry – don’t make me >0m1T.
Sixty Years – Yeah Right. F+’^I>g aye Right.
Can’t get a fortnight right better than one in three . . . . . . .
Auto

rgbatduke
Reply to  rgbatduke
November 10, 2015 1:19 pm

First, yes, it was Inigo Montoya. Second, Werner, if you look over the length and breadth of the two on WFT, you will find that over a substantial fraction of the two plots they are offset by less than 0.1 C. For example, for much of the first half of the 20th century, they are almost on top of one another with GISS rarely coming up with a patch 0.1 C or so higher. They almost precisely match in a substantial part of their overlapping reference periods. They only start to substantially split in the 1970 to 1990 range (which contains much of the latter 20th century warming). By the 21st century this split has grown to around 0.2 C, and is remarkably consistent. Let’s examine this in some detail:
We can start with a very simple graph that shows the divergence over the last century:
http://www.woodfortrees.org/plot/hadcrut4gl/from:1915/to:2015/trend/plot/gistemp/from:1915/to:2015/trend
The two graphs have a widening divergence in the temperatures they obtain. If the two measures were in mutual agreement, one would expect the linear trends to be in good agreement — the anomaly of the anomaly, as it were. They should, after all, be offset by only the difference in mean temperatures in their reference periods, which should be a constant offset if they are both measuring the correct anomalies from the same mean temperatures.
Obviously, they do not. There is a growing rift between the two and, as I noted, they are split by more than the 95% confidence that HadCRUT4, at least, claims even relative to an imagined split in means over their reference periods. There are, very likely, nonlinear terms in the models used to compute the anomalies that are growing and will continue to systematically diverge, simply because they very likely have different algorithms for infilling and kriging and so on, in spite of them very probably having substantial overlap in their input data.
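A minimal sketch of this kind of trend comparison, in Python with NumPy (the series here are synthetic stand-ins; the real check would load the HadCRUT4 and GISS monthly anomalies from WFT or the source archives):

import numpy as np

# synthetic stand-ins for two monthly anomaly series on a common time axis
t = np.arange(1915, 2015, 1.0 / 12.0)
rng = np.random.default_rng(0)
hadcrut = 0.007 * (t - 1915) + rng.normal(0, 0.1, t.size)
giss    = 0.008 * (t - 1915) + rng.normal(0, 0.1, t.size)

# linear trend (C per year) of each anomaly
slope_h = np.polyfit(t, hadcrut, 1)[0]
slope_g = np.polyfit(t, giss, 1)[0]

# if both measured the same thing, the slopes would agree and the difference
# series would be a constant offset plus noise, not a growing rift
diff = giss - hadcrut
print(slope_h, slope_g, np.polyfit(t, diff, 1)[0])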
In contrast, BEST and GISS do indeed have similar linear trends in the way expected, with a nearly constant offset. One presumes that this means that they use very similar methods to compute their anomalies (again, from data sets that very likely overlap substantially as well). The two of them look like they want to vote HadCRUT4 off of the island, 2 to 1:
http://www.woodfortrees.org/plot/hadcrut4gl/from:1915/to:2005/trend/plot/gistemp/from:1915/to:2005/trend/plot/best/from:1915/to:2005/trend
Until, of course, one adds the trends of UAH and RSS:
http://www.woodfortrees.org/plot/hadcrut4gl/from:1979/to:2015/trend/plot/gistemp/from:1979/to:2015/trend/plot/best/from:1979/to:2005/trend/plot/rss/from:1979/to:2015/trend/plot/uah/from:1979/to:2015/trend
All of a sudden consistency emerges, with some surprises. GISS, HadCRUT4 and UAH suddenly show almost exactly the same linear trend across the satellite era, with a constant offset of around 0.5 C. RSS is substantially lower. BEST cannot honestly be compared, as it only runs to 2005ish.
One is then very, very tempted to make anomalies out of our anomalies, and project them backwards in time to see how well they agree on hindcasts of past data. Let’s use the reference period shown and subtract around 0.5 C from GISS and 0.3 C from HadCRUT4 to try to get them to line up with UAH in 2015 (why not, good as any):
http://www.woodfortrees.org/plot/hadcrut4gl/from:1979/to:2015/offset:-0.32/trend/plot/gistemp/from:1979/to:2015/offset:-0.465/trend/plot/uah/from:1979/to:2015/trend
We check to see if these offsets do make the anomalies match over the last 36 years, the most accurately measured stretch (within reason):
http://www.woodfortrees.org/plot/hadcrut4gl/from:1979/to:2015/offset:-0.32/plot/gistemp/from:1979/to:2015/offset:-0.465/plot/uah/from:1979/to:2015
and see that they do. NOW we can compare the anomalies as they project into the indefinite past. Obviously UAH does have a slightly slower linear trend over this “re-reference period” and it doesn’t GO any further back, so we’ll drop it, and go back to 1880 to see how the two remaining anomalies on a common base look:
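The re-referencing step itself is nothing more than subtracting the mean difference over the chosen window; a sketch under that assumption (toy arrays standing in for the real monthly series):

import numpy as np

def rereference(series, reference):
    # shift 'series' so that its mean matches 'reference' over the window
    offset = float(np.mean(series - reference))
    return series - offset, offset

# toy demonstration: a series sitting 0.465 C above its reference, plus noise
rng = np.random.default_rng(1)
uah_like  = rng.normal(0.0, 0.10, 432)                  # 36 years of months
giss_like = uah_like + 0.465 + rng.normal(0, 0.02, 432)
aligned, offset = rereference(giss_like, uah_like)
print(round(offset, 3))                                 # recovers ~0.465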
http://www.woodfortrees.org/plot/hadcrut4gl/from:1880/to:2015/offset:-0.32/plot/gistemp/from:1880/to:2015/offset:-0.465
We now might be surprised to note that HadCRUT4 is well above GISS LOTI across most of its range. Back in the 19th century splits aren’t very important because they both have error bars back there that can forgive any difference, but there is a substantial difference across the entire stretch from 1920 to 1960:
http://www.woodfortrees.org/plot/hadcrut4gl/from:1920/to:1960/offset:-0.32/plot/gistemp/from:1920/to:1960/offset:-0.465/plot/hadcrut4gl/from:1920/to:1960/offset:-0.32/trend/plot/gistemp/from:1920/to:1960/offset:-0.465/trend
This reveals a robust and asymmetric split between HadCRUT4 and GISS LOTI that cannot be written off to any difference in offsets, as I renormalized the offsets to match them across what has to be presumed to be the most precise and accurately known part of their mutual ranges, a stretch of 36 years where in fact their linear trends are almost precisely the same so that the two anomalies differ only BY an offset of 0.145 C with more or less random deviations relative to one another.
We find that except for a short patch right in the middle of World War II, HadCRUT4 is consistently 0.1 to 0.2 C higher than GISStemp. This split cannot be repaired — if one matches it up across the interval from 1920 to 1960 (pushing GISStemp roughly 0.145 HIGHER than HadCRUT4 in the middle of WW II) then one splits it well outside of the 95% confidence interval in the present.
Unfortunately, while it is quite all right to have an occasional point higher or lower between them — as long as the “occasions” are randomly and reasonably symmetrically split — this is not an occasional point. It is a clearly resolved, asymmetric offset in matching linear trends. To make life even more interesting, the linear trends do (again) have a more or less matching slope, across the range 1920 to 1960 just like they do across 1979 through 2015 but with completely different offsets. The entire offset difference was accumulated from 1960 to 1979.
Just for grins, one last plot:
http://www.woodfortrees.org/plot/hadcrut4gl/from:1880/to:1920/offset:-0.365/plot/gistemp/from:1880/to:1920/offset:-0.465/plot/hadcrut4gl/from:1880/to:1920/offset:-0.365/trend/plot/gistemp/from:1880/to:1920/offset:-0.465/trend
Now we have a second, extremely interesting problem. Note that the offset between the linear trends here has shrunk to around half of what it was across the bulk of the early 20th century, with HadCRUT4 still warmer, but now only warmer by maybe 0.045 C. This is in a region where the acknowledged 95% confidence range is on the order of 0.2 to 0.3 C. When I subtract appropriate offsets to make the linear trends almost precisely match in the middle, we get excellent agreement between the two anomalies.
Too excellent. By far. All of the data is within the mutual 95% confidence interval! This is, believe it or not, a really, really bad thing if one is testing a null hypothesis such as “the statistics we are publishing with our data have some meaning”.
We now have a bit of a paradox. Sure, the two data sets that these anomalies are built from very likely have substantial overlap, so the two anomalies themselves cannot properly be viewed as random samples drawn from a box filled with independent and identically distributed but correctly computed anomalies. But their super-agreement across the range from 1880 to 1920 and 1920 to 1960 (with a different offset) and across the range from 1979 to 2015 (but with yet another offset) means serious trouble for the underlying methods. This is absolutely conclusive evidence, in my opinion, that “According to HadCRUT4, it is well over 99% certain GISStemp is an incorrect computation of the anomaly” and vice versa. Furthermore, the differences between the two cannot be explained by the fact that they draw on partially independent data sources — if this were the case, the coincidences between the two across piecewise blocking of the data would be too strong to explain — obviously the independent data is not sufficient to generate a symmetric and believable distribution of mutual excursions with errors that are anywhere near as large as they have to be, given that both HadCRUT4 and GISStemp if anything underestimate probable errors in the 19th century.
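One way to quantify “too excellent” is to compare the spread of the offset-removed differences against what the published error bars would imply for independent estimates; a sketch, with the 0.1 C per-series standard error purely illustrative:

import numpy as np

def consistency_ratio(a, b, se_a, se_b):
    # variance of the offset-removed difference, relative to the variance
    # expected if a and b were independent with the stated standard errors
    d = (a - b) - np.mean(a - b)
    return float(np.var(d)) / (se_a**2 + se_b**2)

# toy difference series with a 0.02 C scatter against claimed 0.1 C errors
rng = np.random.default_rng(2)
a = rng.normal(0.0, 0.02, 480)
print(consistency_ratio(a, np.zeros_like(a), 0.1, 0.1))  # ~0.02, far below 1

# ratio >> 1: the series disagree by more than their error bars allow
# ratio << 1: agreement is "too excellent"; shared data or overstated errors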
Where is the problem? Well, as I noted, a lot of it happens right here:
http://www.woodfortrees.org/plot/hadcrut4gl/from:1960/to:1979/offset:-0.32/plot/gistemp/from:1960/to:1979/offset:-0.465/plot/hadcrut4gl/from:1960/to:1979/offset:-0.32/trend/plot/gistemp/from:1960/to:1979/offset:-0.465/trend
The two anomalies match up almost perfectly from the right hand edge to the present. They do not match up well from 1920 to 1960, except for a brief stretch of four years or so in early World War II, but for most of this interval they maintain a fairly constant, and identical, slope to their (offset) linear trend! They match up better (too well!) — with again a very similar linear trend but yet another offset — across the range from 1880 to 1920. But across the range from 1960 to 1979, Ouch! That’s gotta hurt. Across 20 years, HadCRUT4 cools Earth by around 0.08 C, while GISS warms it by around 0.07 C.
So what’s going on? This is a stretch in the modern era, after all. Thermometers are at this point pretty accurate. World history seems to agree with HadCRUT4, since in the early 70’s there was all sorts of sound and fury about possible ice ages and global cooling, not global warming. One would expect both anomalies to be drawing on very similar data sets with similar precision and with similar global coverage. Yet in this stretch of the modern era, with modern instrumentation and (one has to believe) very similar coverage, the two major anomalies don’t even agree in the sign of the linear trend slope, and they more or less symmetrically split as one goes back to 1960, a split that actually goes all the way back to 1943, then splits again all the way back to 1920, then slowly “heals” as one goes back to 1880.
As I said, there is simply no chance that HadCRUT4 and GISS are both correct outside of the satellite era. Within the satellite era their agreement is very good, but they split badly over the 20 years preceding it in spite of the data overlap and quality of instrumentation. This split persists over pretty much the rest of the mutual range of the two anomalies except for a very short period of agreement in mid-WWII, where one might have been forgiven for a maximum disagreement given the chaotic nature of the world at war. One must conclude, based on either one, that it is 99% certain that the other one is incorrect.
Or, of course, that they are both incorrect. Further, one has to wonder about the nature of the errors that result in a split that is so clearly resolved once one puts them on an equal footing across the stretch where one can best believe that they are accurate. Clearly it is an error that is a smooth function of time, not an error that is in any sense due to accuracy of coverage of the (obviously strongly overlapping) data.
This result just makes me itch to get my hands on the data sets and code involved. For example, suppose that one feeds the same data into the two algorithms. What does one get then? Suppose one keeps only the set of sites that are present in 1880 when the two have mutually overlapping application (or better, from 1850 to the present) and runs the algorithm on them. How much do the results split from a) each other; and b) the result obtained from using all of the available sites in the present? One would expect the latter, in particular, to be a much better estimator of the probable method error in the remote past — if one uses only those sites to determine the current anomaly and it differs by (say) 0.5 C from what one gets using all sites, that would be a very interesting thing in and of itself.
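In outline, that experiment might look like the sketch below. Everything here (load_stations, the Station record, and the two toy anomaly functions) is a hypothetical stand-in, since neither code base actually exposes such an interface; the point is only the shape of the test:

import numpy as np
from dataclasses import dataclass

@dataclass
class Station:
    first_year: int
    temps: np.ndarray      # a station's monthly series on a common axis

def load_stations():
    # stub: ten fake stations with progressively later start dates
    rng = np.random.default_rng(3)
    return [Station(1850 + 10 * i, rng.normal(0, 0.5, 120)) for i in range(10)]

# toy rivals standing in for the real HadCRUT and GISS algorithms
def anomaly_a(stations):
    return np.mean([s.temps for s in stations], axis=0)

def anomaly_b(stations):
    return np.median([s.temps for s in stations], axis=0)

stations_all  = load_stations()
stations_1880 = [s for s in stations_all if s.first_year <= 1880]

for algo in (anomaly_a, anomaly_b):
    full   = algo(stations_all)     # anomaly from every available site
    frozen = algo(stations_1880)    # anomaly from 1880-era sites only
    # the present-day split between the two runs directly estimates the
    # method error incurred when only the sparse old network is available
    print(algo.__name__, float(np.abs(full - frozen).max()))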
Finally, there is the ongoing problem with using anomalies in the first place rather than computing global average temperatures. Somewhere in there, one has to perform a subtraction. The number you subtract is in some sense arbitrary, but any particular number you subtract comes with an error estimate of its own. And here is the rub:
The place where the two global anomalies develop their irreducible split is square inside the mutually overlapping part of their reference periods!
That is, the one place they most need to be in agreement (at least in the sense that they reproduce the same linear trends, that is, the same anomalies) is the very place where they most greatly differ. Indeed, their agreement is suspiciously good, as far as linear trend is concerned, everywhere else: in particular in the most recent present, where one has to presume that the anomaly is most accurately being computed, and in the most remote past, where one expects to get very different linear trends but instead gets almost identical ones!
I doubt that anybody is still reading this thread to see this — but they should.
Finally, to George E. Smith:

So far the corresponding Temperature increase is not distinguishable from what a linear trend would imply.

(along with more about how it is not obviously logarithmic). I’ve posted, many times at this point, the figure where one can fit HadCRUT4 with a log function from 1850 to the present with rather excellent agreement, to within an apparent oscillatory correction of around 0.1 C plus some noise, and I know you have seen it, so you are simply pretending you haven’t in your reply. Over the last 30 years, sure, it isn’t distinguishable from a linear trend given natural variability and probable climate sensitivity. Over the last 165 years, it fits pretty well with very reasonable numbers for both. Over the last 2000 years, it probably doesn’t fit too well. Or rather (as the discussion above makes clear), we cannot say how well it does or does not fit, because one thing that is clear is that the two global anomaly computations examined in some detail in this very reply are mutually inconsistent in a way that makes them very dubious in their extension back into the early 20th and 19th centuries and utterly inexplicable across the overlapping part of their anomaly reference periods. One has to conclude that the error in the anomaly reference temperatures themselves is at least 0.2 C, and is quite probably several times that given only two samples. IMO the probable error scaling across the reference overlap into the past, allowing for data overlap, makes the probable mutual method error in the 19th and early 20th century closer to 0.5 to 0.6 C than 0.3 C, and I wouldn’t be surprised if it were a full 1 C.
If it is anywhere near this, then we literally have no idea how much the Earth has warmed from the mid-19th century to the present. It could be anywhere from 1.5 C to 0.2 C. In this case we have no idea what a “good fit” to the anomaly might look like, and the apparent logarithmic fit to CO2 concentration could end up being either an accident or being adequately well fit with a TCS as small as 0.5 C per doubling. And the sad thing is that there is no point in even trying to fit TCS across an interval outside of the satellite era, where UAH, HadCRUT4 and GISS have linear trends in close agreement with each other, where all three produce a linear trend of roughly 0.5 C over 36 years, or around 0.14 C/decade. Across this interval, it is pointless to even try a log fit, and it is quite impossible to resolve the effects of natural things like ENSO from some underlying CO2-driven fraction of the trend, since we don’t know how to compute or predict either one and the interval is far too short, and noisy, to have any hope of success if we did.
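For concreteness, the log fit in question amounts to regressing temperature on log2 of the CO2 ratio, with the slope being the implied warming per doubling; a minimal sketch with synthetic inputs (the real exercise would use HadCRUT4 and the ice-core/Mauna Loa CO2 record):

import numpy as np

def fit_per_doubling(temp, co2):
    # least-squares fit of T = s * log2(co2 / co2[0]) + b;
    # s is the implied warming per doubling of CO2
    x = np.log2(co2 / co2[0])
    s, b = np.polyfit(x, temp, 1)
    return s

# synthetic data generated with 1.8 C per doubling plus 0.1 C noise
rng = np.random.default_rng(4)
co2  = np.linspace(285.0, 400.0, 166)          # 1850-2015, ppm
temp = 1.8 * np.log2(co2 / co2[0]) + rng.normal(0, 0.1, co2.size)
print(round(fit_per_doubling(temp, co2), 2))   # close to 1.8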
rgb

dbstealey
Reply to  rgbatduke
November 10, 2015 1:28 pm

[chart of US Climate Reference Network data]

Anthony Watts
Reply to  dbstealey
November 10, 2015 1:38 pm

dbstealey, that’s not quite correct. That is my graph of US Climate Reference Network data, which, while not hidden, is not used in public press releases. They prefer the old messed-up network that requires adjustments to make their public pronouncements.

dbstealey
Reply to  Anthony Watts
November 10, 2015 7:08 pm

My misteak, then. I got that chart and the info from another site, which it seems was copied from here. I’ll check to see if they gave WUWT credit for the chart.

Reply to  dbstealey
November 10, 2015 7:11 pm

In checking, the chart (which I missed the first time around) was on a site (FLM) that got it from another blog: Poor Richard’s News, which in turn got it from the Powerline blog.
WUWT is getting widely read!

Werner Brozek
Reply to  rgbatduke
November 10, 2015 5:05 pm

I doubt that anybody is still reading this thread to see this — but they should.

Perhaps I can do something about this.☺
Thank you very much!

Latitude
November 6, 2015 6:55 am

Oceans…..they can’t just create heat…..at least not the amount claimed
….just move it around….
So where does the cold go…when they claim all this heat is from an El Nino?

gymnosperm
Reply to  Latitude
November 6, 2015 7:53 am

The nino takes warm water that has been stacked to some depth by mechanical action of wind and spreads it out over the ocean surface so it transfers enthalpy more efficiently to the atmosphere. That’s easy. More difficult is to explain why ninos are associated with a rise in global mean sea level.
That warm water stacked to depth before the nino and frittering away its time in an inefficient surface area to volume conduction to surrounding sea water is displacing colder water. The colder water returns when the warm water resumes its rightful place on the surface of the ocean.
Give or take a seemingly small effect of pressure at depth on thermal displacement (compared with the volume of the oceans), why should ninos cause GMSL to rise?

Latitude
Reply to  gymnosperm
November 6, 2015 11:15 am

That warm water ………. is displacing colder water.
Exactly my question….neither warm nor cold water is invented…..they are both there, just moved around.
…excellent point on GMSL too

Matt G
Reply to  Latitude
November 6, 2015 3:23 pm

Oceans…..they can’t just create heat…..at least not the amount claimed
….just move it around….
So where does the cold go…when they claim all this heat is from an El Nino?

The oceans don’t create heat; they only transfer energy between the atmosphere and the liquid water. The energy comes from solar energy, and the change in the trade winds decides whether this excess solar energy is distributed within the upper ocean or accumulated across the ocean surface. When it is distributed in the upper ocean by upwelling, it is this colder water from lower depths that reaches the ocean surface.
When accumulated across the ocean surface, it quickly affects global atmospheric temperatures and is eventually released to space much more quickly. When stored in the upper ocean it remains, and is not released to space, or is released only very slowly to the atmosphere. It moves in the ocean currents away from the Tropics around the world, eventually via the AMOC. Hence, it is eventually transferred between the Pacific Ocean and the Atlantic Ocean. One observed warmer ocean current, part of the AMOC in the Arctic depths, lost only 0.5 C over 7 years. That was while the warmer ocean current was in possibly the coldest place on the planet, too.

That warm water ………. is displacing colder water.
Exactly my question….neither warm nor cold water is invented…..they are both there, just moved around.
…excellent point on GMSL too

Cold upwelling water in the Tropics is warmed by solar energy, but is moved too quickly by strong trade winds to warm much at the surface. How much the waters move around determines how much they are actually warmed by solar energy. The more the water in the Tropics is warmed, the more it expands, and this depends on the amount of cloud cover. These changes in the trade winds are the reason El Ninos cause sea levels to rise.

Jon
November 6, 2015 7:02 am

I think if you just defund the scientists who are being funded to “prove” the politically based UNFCCC scientifically, the problem will then just go away?

Werner Brozek
November 6, 2015 7:03 am

October Updates:
RSS: It has a 10 month average of 0.33, tying it for third with 2005. There is no way it will get above third place in 2015. As for 2016? Who knows? October 2015, with an anomaly of 0.440, was the second warmest October behind October 1998 which had an anomaly of 0.461. Its pause extends by one month to 18 years and 9 months from February 1997 to October 2015.
UAH6.0beta3: It had the warmest October ever in 2015. It ranks third at present and as with RSS, there is no way that it will finish higher than third.
Hadsst3: It also had its warmest October ever. It is guaranteed to set a new record this year.
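The pause figure quoted above is simply the longest stretch, ending at the latest month, over which the least-squares trend is not positive. A sketch of that calculation (the toy series below just has a rise followed by a flat tail; real use would load the RSS monthly anomalies from the link given elsewhere in this thread):

import numpy as np

def pause_length(anomalies):
    # longest run of months, ending at the present, whose least-squares
    # trend is not positive; returns the length in months (0 if none)
    n = anomalies.size
    for start in range(n - 24):              # insist on at least ~2 years
        window = anomalies[start:]
        slope = np.polyfit(np.arange(window.size), window, 1)[0]
        if slope <= 0:
            return window.size
    return 0

# toy series: 216 rising months, then 225 essentially flat ones
toy = np.concatenate([np.linspace(0.0, 0.39, 216),
                      0.4 - 1e-6 * np.arange(225)])
print(pause_length(toy))                     # 225 months = 18 years 9 months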

AndyG55
Reply to  Werner Brozek
November 6, 2015 8:11 am

Hadxxxx was guaranteed to set new records, right from the beginning of the year. 😉
Just like GISS..
A directive !

Werner Brozek
Reply to  AndyG55
November 6, 2015 8:34 am

So far, GISS is “only” 0.06 above its 2014 record, but Hadcrut4 is 0.14 above its 2014 record! So if we assume an error bar of 0.10, then Hadcrut4 can even say they are 97% certain that 2015 is a new record.

AndyG55
Reply to  Werner Brozek
November 6, 2015 8:20 am

I must have slightly different RSS numbers to you as well.
Here is what I get on a “Year to end of October” basis.
1998 0.609
2010 0.511
2005 0.349
2015 0.332
2002 0.331
2003 0.304
2007 0.287
2014 0.254
2001 0.235
2013 0.232
Looks like I’m going to have to check for any minor changes since whenever I last grabbed RSS.
I’ve just been adding the new value each month.

Werner Brozek
Reply to  AndyG55
November 6, 2015 8:43 am

I must have slightly different RSS numbers to you as well.
Here is what I get on a “Year to end of October” basis.

We are talking about two different things. I am comparing the present average of 0.33 after 10 months to the 12 month average of all other years. So while it may have been 0.609 to the end of October in 1998, it was 0.55 when counting all 12 months.
1 {1998, 0.550},
2 {2010, 0.472},
3 {2005, 0.330},
4 {2003, 0.320},
5 {2002, 0.315},
6 {2014, 0.255}
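The two rankings differ only in the averaging window. A sketch of the comparison, assuming the monthly anomalies sit in a dict keyed by year (the numbers below are toy values, not the actual RSS file):

# toy monthly anomalies; real use would parse the RSS text file
monthly = {
    1998: [0.550] * 12,      # completed years contribute 12 months each
    2010: [0.472] * 12,
    2005: [0.330] * 12,
    2015: [0.332] * 10,      # the current year has only 10 months so far
}

def rank(years):
    # average whatever months exist for each year, then sort warmest first
    means = {y: sum(v) / len(v) for y, v in years.items()}
    return sorted(means.items(), key=lambda kv: kv[1], reverse=True)

for place, (year, mean) in enumerate(rank(monthly), start=1):
    print(place, year, round(mean, 3))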

AndyG55
Reply to  AndyG55
November 6, 2015 9:09 am

“I am comparing the present average of 0.33 after 10 months to the 12 month average of all other years”
hmmm. not sure that is a valid comparison.. but , ok. ! 🙂

Werner Brozek
Reply to  AndyG55
November 6, 2015 9:58 am

not sure that is a valid comparison

In my opinion, there are different valid ways to present data. The reason I do it my way is that it is easiest for me and the last line of the table always gives the rank if the average anomaly given in the row above stays that way for the rest of the year.

Simon
Reply to  AndyG55
November 6, 2015 10:22 am
Simon
Reply to  AndyG55
November 6, 2015 10:25 am

Actually when you think about it they are all pretty similar looking.
http://www.ncdc.noaa.gov/sotc/service/global/glob/201501-201509.gif

Stephen Richards
Reply to  AndyG55
November 6, 2015 1:40 pm

NCEP real-time figures put 2015 at 0.23 so far, with October at 0.495.
Thanks to WeatherBell and Ryan Maue.

AndyG55
Reply to  AndyG55
November 7, 2015 1:54 am

Slimon.. do you have a coherent comment to make ?

Simon
Reply to  AndyG55
November 7, 2015 10:39 am

AndyG55
Do you have a coherent point to make, simple simon?
Assuming you can read graphs… yes I do. It’s getting warmer and all the land based datasets show it clearly. I could put a few more up if you want. Hell, even Roy’s UAH just recorded the hottest October ever. But you probably believe they are all manipulated to ensure the left wing evil plan of robbing money from the rich can go ahead. Dang clever fraud if they get away with it. Particularly with all the sharp eyed bloggers here watching.

Simon
Reply to  Werner Brozek
November 6, 2015 10:20 am

Just look at the graph, it’s easier.
http://www.cru.uea.ac.uk/cru/data/temperature/HadCRUT4.png

Gloateus Maximus
Reply to  Simon
November 6, 2015 1:51 pm

HadCRUT is a work of anti-science fiction, but even with its cooked-to-a-crisp books, the slope and duration of the late 20th century warming still resemble those of the early 20th century warming, separated by the mid-century cooling interval, which is in the process of being air-brushed out of existence, as in photos of Soviet leaders.
The early 18th century warming, rebounding from the Maunder Minimum depths of the LIA, was even more pronounced than the two cycles of the 20th century.
Thus the null hypothesis can’t be rejected and there is no human GHG “footprint”.

Mark T
Reply to  Simon
November 6, 2015 7:01 pm

Nice references. Did you draw that last one with your new crayon set? Why don’t you go to Dr.Curry’s blog for a lesson on the differences in all the pretty colored graphs, hmmm? Or better yet go home to your uneducated ilk at Hot Wopper.

AndyG55
Reply to  Simon
November 7, 2015 1:55 am

Do you have a coherent point to make, simple simon?

Bartleby
Reply to  Werner Brozek
November 6, 2015 4:43 pm

Werner Brozek writes: “We are talking about two different things. I am comparing the present average of 0.33 after 10 months to the 12 month average of all other years. So while it may have been 0.609 to the end of October in 1998, it was 0.55 when counting all 12 months.”
Maybe it’s just me, but doesn’t this demonstrate that the science is not settled?
I won’t even ask which RSS data set you’re using, though I would like to know that and also why you’ve chosen that set. I can say there seems to be some debate over which RSS data set is the correct one to use, and even more debate about whether the RSS data should be used at all.
I was recently confronted by an alarmist who cited quotes of a Dr. Carl Mears, a VP of RSS, who apparently believes the data provided by RSS is pure junk, unsuitable for use as fish wrap. I was a little surprised by this until finding he was still working on his high school diploma while I was doing medium-altitude atmosphere research for NOAA in 1979.
He seems dedicated to the idea that ground-based thermometer data is superior to satellite data, in the face of all the criticisms of GISS etc. that justified RSS in the first place. He wasn’t an original member of the teams who flew the MSU/AMSU instruments, but seems to have made a career of undermining them since joining RSS in 1998. I don’t understand his motives or why RSS keeps him on the payroll. Is his job to justify continued funding by trying to demonstrate that the most expensive, most accurate atmospheric monitoring system ever invented in the history of the human race just isn’t good enough? What is this guy’s problem?

Werner Brozek
Reply to  Bartleby
November 6, 2015 7:30 pm

Maybe it’s just me, but doesn’t this demonstrate that the science is not settled?
I won’t even ask which RSS data set you’re using, though I would like to know that and also why you’ve chosen that set. I was recently confronted by an alarmist who cited quotes of a Dr. Carl Mears, a VP of RSS, who apparently believes the data provided by RSS is pure junk unsuitable for use as fish wrap.

As for my method of comparing the year to date rank, there are different ways of looking at it and either way is OK in my opinion. After 12 months, they have to agree anyway so I do not see an issue here.
As far as I know, there is only one RSS data set and I give it in the report. Here it is again:
ftp://ftp.ssmi.com/msu/monthly_time_series/rss_monthly_msu_amsu_channel_tlt_anomalies_land_and_ocean_v03_3.txt
As for Dr. Mears’ reaction, I believe he wants to believe in CAGW and is embarrassed that his own data set does not support that view. He should be respected for not trying to fudge things so RSS agrees with his view. But regardless of what he thinks, UAH6.0beta3 agrees with his data set, so it seems as if both are right. They also agree very well with the old Hadcrut3, which is now obsolete.

TRM
November 6, 2015 7:06 am

“If you are still reading this” – ha ha. Made me laugh. I will still be reading this come Monday morning. I’m sure this will take several reads to start to understand. But hey the weather outside isn’t conducive to hiking or cycling where I live so thanks for a good weekend read.

Werner Brozek
Reply to  TRM
November 6, 2015 10:02 am

If you are still reading this

☺ Keep in mind this is a reprint of a comment he made later on in an earlier post.

NZ Willy
Reply to  Werner Brozek
November 6, 2015 5:23 pm

Well, RGB’s comment in this post is another worth re-reading and preserving. His point (which he wrote in boldface) that the climate models have never been demonstrated to have predictive value is a major point.

November 6, 2015 7:26 am

It’s not that complicated.
1. Mankind’s contribution to the earth’s CO2 balance is trivial.
2. CO2’s contribution to earth’s heat balance is trivial.
3. GCMs don’t work.

MCourtney
November 6, 2015 7:34 am

WUWT is back!
•A cartoon that illustrates the failings of the climate models.
•A good discussion of the chaotic nature of the climate
•Some actual real world data.
•And a thought provoking question.
Best post for ages.
I’ve often made the point that the human brain can’t be modelled, so why should the climate, with so many different parts, be any simpler? But the control knob answer is always used… that CO2 will dominate all others.
There’s just no evidence for that though.

Michael C. Roberts
Reply to  MCourtney
November 6, 2015 9:50 am

MCourtney – Agreed, and Seconded!
Dr. Brown – Reading your re-posted dissertation (that I sadly missed the first time – somehow), I am struck by your ability to present a complex set of thoughts and processes in a manner that any student (such as most readers here really are) may begin to understand. I feel I am still in the classroom, with you up there providing the lecture! A thought-provoking, in-depth, and educational experience on this end. This is the type of discourse I crave as I come to WUWT again and again. It never ceases to amaze me that any thoughtful and intelligent person continues to believe that CO2 is the ‘control knob’, the be-all and end-all of all changes in our climate, let alone transient weather patterns. Let’s not even mention the hijacking of climate science by do-gooders such as the UN, and the infiltration (resurrection?) of the socialist movements in the same.
Science Uber Alles!
MCR

Werner Brozek
Reply to  Michael C. Roberts
November 6, 2015 10:11 am

Reading your re-posted dissertation (that I sadly missed the first time – somehow)

I certainly identify with this comment. When I leave a post temporarily, I copy and paste the last URL and continue from there at a later time. So unless I am really interested in a topic, I will not go back through over 300 comments to see if there was a new reply to someone upthread. For what it is worth, I liked the old system better.

Reply to  Michael C. Roberts
November 6, 2015 11:58 am

The old system was better.
I’ve taken to doing a CTRL+F on the recent dates to pick out the recent comments (e.g. November 5 for a 2nd November post).

Werner Brozek
Reply to  Michael C. Roberts
November 6, 2015 12:36 pm

I’ve taken to doing a CTRL+F

That works well for many cases. For people not familiar with it: if I press Ctrl and F and type in “brozek”, I might get 20 cases and can scroll through them one by one. However, if I go for lunch and then want to see what came in between 1:00 PM and 1:59 PM, and I punch in “november 6, 2015 at 1”, I could get 80 responses covering everything on November 6 from 10:00 AM to 1:59 PM, of which only 7 might be from between 1 and 2.
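The collision happens because “at 1” is also a prefix of “at 10”, “at 11” and “at 12”. Anchoring on the colon avoids it; a minimal sketch using the timestamp format of this thread:

import re

stamps = ["November 6, 2015 at 1:29 pm", "November 6, 2015 at 10:34 am",
          "November 6, 2015 at 11:45 am", "November 6, 2015 at 12:36 pm"]

# match only the one o'clock PM hour: "at 1:" then minutes, then "pm"
pattern = re.compile(r"at 1:\d{2} pm", re.IGNORECASE)
print([s for s in stamps if pattern.search(s)])   # only the 1:29 pm entry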

Marcus
Reply to  Michael C. Roberts
November 6, 2015 3:10 pm

To W. Brozek below.. I find it is easiest for me to just open each article in a new tab, so I usually have 10 tabs open at the same time with various other pages and just check back and forth as I get time !! Simple, but then, so am I !! LOL

November 6, 2015 7:34 am

To point 5, Geothermal Energy, please add: exchange of gases into/out of solution for subsurface flows of water. The amount of water flow that is part of the hydrothermal vent system must be enormous, and given its temperature the amount of gas exchange must be monumental. This hydrology seems to be unstudied.

Neil Jordan
November 6, 2015 7:35 am

Good one. This climate model water vapor control knob goes to 11.
https://youtu.be/KOO5S4vxi0o

November 6, 2015 7:36 am

” They are born out of energy in flow, and “evolve” so that the ones that move energy most efficiently survive and grow.”
Why? What selection “pressure” favors thermodynamic efficiency? What cares about efficiency?
Great post.

Steve Case
November 6, 2015 7:36 am

After reading this article, do you think climate science is settled? If not, do you think it will be settled in your lifetime?
There was a period (maybe I should say era) after “Climategate” in 2009 where there was a lot of navel gazing and questioning in the media from climate change promoters asking why their message wasn’t getting out. In the last several months, or maybe a year or so, that seems to have disappeared. It’s as if word came down from on high to “Stop that, get with the program,” and, oh by the way, start a scorched-earth policy with regard to those who aren’t on board. So the whining has stopped, Exxon is being investigated, Philippe Verdier fired, Willie Soon vilified, a letter with 20 signatures requesting a RICO investigation of probably even me, and so on.
So I was optimistic after 2009 that the whole thing would collapse of its own weight. Now I am much more pessimistic. The phrase “too big to fail” seems to apply. The Climate Change Business Journal, which seems to be a legitimate organization, reports that climate change has a $1.5 trillion economic footprint. These people aren’t going to go quietly into the night, ever.
An absolute explosion needs to occur. Climate scientists lying under oath, and the public knows it, might have an effect. I doubt that the actual climate not cooperating will do anything, it hasn’t so far. A real war courtesy of the Islamic Crazies, and “Climate Change” will be forgotten, but I’m not wishing for that.
I think we are headed for a long period of draconian environmentalism, a “You haven’t seen nuthin’ yet” sort of thing. Sorry to be so pessimistic.
So rgb, thanks for asking the question, allowing me to vent a bit.

Marcus
Reply to  Steve Case
November 6, 2015 3:16 pm

If another liberal gets into the White House in 2017, America is doomed !!!

jacob
Reply to  Marcus
November 7, 2015 9:47 pm

Unfortunately, I think ONLY liberals are running for office, on both sides of the ticket.

GoatGuy
November 6, 2015 7:40 am

The list of variables doesn’t have a specific bullet for The Moon … our nearest neighbor and clearly one of the more influential gravitational modulators. Tides… anyone? While one might dismiss tides as having influence over climate, if you look inward, aren’t the tidal influences at least as numerically significant as say “Earth’s ephemeris parameters” (ellipticity, precession, nutation, and so forth…)
You asked for extra bullets, so there you be. The Moon.
Let’s see another one. Aerosols … they’re a sufficiently ‘compact group’ that taken as a whole subject, they probably warrant a bullet. Hugely influential on the genesis of clouds (nucleation), and themselves having both provenance and persistence that is influenced heavily by insolation, temperature, local air pressure, and time … well, they’re important. All by themselves.
Unfortunately, (10) thru (13) cover “almost everything” if you want to take them at face value. Aerosols, The Moon, and most of the prior part of the list too. A little too big to be useful (if (1) thru (9) are there!)
GoatGuy

Werner Brozek
Reply to  GoatGuy
November 6, 2015 8:14 am

You asked for extra bullets, so there you be. The Moon. Let’s see another one. Aerosols

I really did not mean extra bullets, but things that affect climate that were not covered. And the moon and aerosols are certainly there, for example:
2. Orbital Energy, Orbital Period, Orbital Spiral, Elliptical Orbits (Eccentricity), Tilt (Obliquity), Wobble (Axial precession) and Polar Motion;
This Tidal Force is influenced by variations in Lunar Orbit;
http://en.wikipedia.org/wiki/Orbit_of_the_Moon
as seen in the Lunar Phases;
http://en.wikipedia.org/wiki/Lunar_phase
Lunar Precession;
http://en.wikipedia.org/wiki/Lunar_precession
Lunar Node;
http://en.wikipedia.org/wiki/Lunar_node
8. Atmospheric Composition
Aerosols;
http://en.wikipedia.org/wiki/Aerosol
that “act as cloud condensation nuclei, they alter albedo (both directly and indirectly via clouds) and hence Earth’s radiation budget, and they serve as catalysts of or sites for atmospheric chemistry reactions.”
“Aerosols play a critical role in the formation of clouds;
http://en.wikipedia.org/wiki/Clouds

GoatGuy
Reply to  Werner Brozek
November 6, 2015 10:03 am

Sorry old bean… I was just reading the bullet list from the top of this blog post. I didn’t have “the hour” or two to invest in the whole article. My apologies.
GoatGuy

Werner Brozek
Reply to  Werner Brozek
November 6, 2015 10:18 am

I didn’t have “the hour” or two to invest in the whole article.

I know this is a very heavy post, especially if you are a new reader and did not see that post the first time. I was actually wondering if I should have asked Just The Facts to reprint his post again a week before mine.

Paul Westhaver
November 6, 2015 7:45 am

Werner Brozek and Robert Brown,
This column is a little dense, so it is going to take me a little longer than usual to read and understand it all. I do like your proposition of items 13 & 14 as variables, and I think, since the computer models are so daft, the known unknowns and the unknown unknowns may have more impact…. sun… water…
now I have to continue the reading….

David L. Hagen
November 6, 2015 7:50 am

Add Hurst-Kolmogorov dynamics (aka climate persistence), where real systems show standard deviations about twice those found by assuming random stochastic processes.
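The factor of two falls straight out of the Hurst scaling: for a persistent process the standard deviation of an n-sample mean falls off as n^(H-1) rather than the classical n^(-1/2). A sketch of the inflation factor (the H = 0.75 value is illustrative; coefficients in that range are commonly reported for climate series):

def hk_inflation(n, hurst):
    # ratio of the Hurst-Kolmogorov standard error of the mean to the
    # classical i.i.d. value: n**(hurst - 1) / n**(-0.5) = n**(hurst - 0.5)
    return n ** (hurst - 0.5)

print(hk_inflation(16, 0.75))    # 2.0: "about twice" at 16 samples
print(hk_inflation(100, 0.75))   # ~3.16: the gap grows with record length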

rgbatduke
Reply to  David L. Hagen
November 6, 2015 9:05 am

Yup. Again, climate statistics is a lot more difficult than many of the papers addressing the subject allow for. This isn’t always true – there are some good papers out there. But there isn’t an elephant in this particular room — there is a herd of the damn things, all being ignored in order to maintain the assertions of high confidence that support the trillion dollar hysteria.
Climate science isn’t about science or statistics. It is about money. All about money.
rgb

Stephen Richards
Reply to  rgbatduke
November 6, 2015 1:42 pm

Dr. B, I love your work. Your clarity must come from many years of teaching and being challenged by students.
Many thanks

Rob Morrow
November 6, 2015 7:56 am

Call me a cynic, but I don’t believe climate science will be “settled” in my lifetime. Western citizens are too happy and distracted, fat and wealthy, and willfully ignorant to overcome the decades of green-brainwashing. People with no basis in the physical sciences have Faith that climate scientists are telling it straight, and these people represent the vast majority. “Climate change” has replaced original sin. Ignorant well-meaning citizens will continue to vote for politicians who push the green agenda because they Believe it is the moral thing to do. Most of these citizens are also totally ignorant of economics and they swallow every spoonful of green energy tripe.
Recently elected King Trudeau II changed the name of Canada’s ministry of environment to the ministry of environment and CLIMATE CHANGE. The public’s reaction? PRAISE! Ontario electricity prices have tripled in the last decade, now the Liberals can roll out green BS across the country! Huzzah!!!
If Obama isn’t able to destroy the U.S. economy, future Queen Hillary will continue his legacy.
Many commenters on this site believe the tide of public perception is turning in favour of reason and skepticism. I think that is wishful thinking which hasn’t been confirmed by observation.

Edmonton Al
Reply to  Rob Morrow
November 6, 2015 10:10 am

AND.. the socialists/Marxists have recently been voted in in Alberta.
The ministers have ultra-left-wing chiefs-of-staff.
All anti-oil, anti-pipeline, anti-everything re fossil fuels.

Edmonton Al
Reply to  Edmonton Al
November 6, 2015 10:11 am

Oh, I forgot: they’re not anti-carbon-tax, and not anti-CCS.

Werner Brozek
Reply to  Edmonton Al
November 6, 2015 10:24 am

and anti-pipeline

While the Alberta premier is against Keystone, Trudeau is for it. And ironically, they may get it sooner than the conservatives because the new provincial NDP and the new federal liberals are so green.

Rob Morrow
Reply to  Edmonton Al
November 6, 2015 12:16 pm

I doubt “social license” would be the true basis for Obama’s Keystone decision. It’s about money and power. The U.S. has built 12,000 miles of new pipeline in the last 5 years while elevating Keystone to pariah status. Obama’s rejection of Keystone (today) is a political move, whereby he has license to add a feather to his climate cap for preventing further spread of the dirty evil tar-sands.

Rob Morrow
Reply to  Edmonton Al
November 6, 2015 10:33 pm

@Werner
P.S. Thank you for your article, and thank you for being so active in the comments! This is the sort of practical integrity that is so rarely seen (from non-skeptics).

Werner Brozek
Reply to  Edmonton Al
November 7, 2015 5:42 am

thank you for being so active in the comments!

You are welcome! If possible, I believe all writers should be available to answer all questions and be available to correct errors if they are pointed out.

NZ Willy
Reply to  Rob Morrow
November 6, 2015 5:30 pm

Agreed, the hysteria is uncontested in the NZ media. The skeptical viewpoint isn’t even mentioned any more, except in readers’ comments.

Rob Morrow
Reply to  NZ Willy
November 6, 2015 11:09 pm

NZ Willy
I observe the same thing in most Canadian press.
As far as (most) politicians are concerned, CAGW skeptics are a bigoted special interest group because they don’t toe the “progressive” line. Whether it be politicians becoming more pragmatic or the dim electorate gaining the power of objective thought, I will not be holding my breath for society to see “climate change” for what it is: an unsolved, highly politicized question. It has become politicized to the point where objective, falsifiable scientific data is irrelevant, because the scientifically illiterate democratic majority has already been greenly convinced, and they will continue to vote for their green champions until a real pragmatic leader/statesman emerges who is able to convince them otherwise. I doubt such a leader could gain or maintain popularity for very long, e.g. the Aussie PM ousted. Public belief in CAGW is rampant. We need the next Martin Luther to help these lost “souls”. I hope such a champion will emerge.
Winston Churchill is credited with the quote that “democracy is the worst form of government, except for all the others”. History suggests this is true in politics and totally backwards for science. Consensus based science = backward thinking.

kcrucible
Reply to  Rob Morrow
November 7, 2015 7:05 am

“While the Alberta premier is against Keystone, Trudeau is for it.”
Is that actually true, or just what Trudeau says because he needs to say such things to get elected? There is a huge difference. If he really wants it gone, but it’s not politically possible for him to do so, day one could be a call with Obama saying “I’d have no objections if you kill this thing once and for all.”

Werner Brozek
Reply to  kcrucible
November 7, 2015 7:52 am

“While the Alberta premier is against Keystone, Trudeau is for it.”
Is that actually true

As far as I know, this is true. Even now, he expresses disappointment in Obama’s decision although Trudeau respects Obama’s right to make that decision. But as for the new Alberta premier, the day after the election she said she will work with Trudeau on pipelines so it does not look as if she wants to get in his way. But she will not go to Washington to push it, unlike the previous conservative premier.

Science or Fiction
November 6, 2015 7:59 am

Is the science settled? I am not even sure it is settled that it is a valid scientific theory! Which falsifiable predictions have been made? Which observations would falsify the theory?
Everything seems to be allowed by this theory: more rain and less rain, more wind and less wind, more ice and less ice, and the oceans are rising anyhow. A theory which allows everything explains nothing. Rising temperature, non-rising temperature; I guess they could even make an ad hoc excuse for falling temperatures:
“Ocean warming dominates the total energy change inventory, accounting for roughly 93% on average from 1971 to 2010. The upper ocean (0-700 m) accounts for about 64% of the total energy change inventory. Melting ice (including Arctic sea ice, ice sheets and glaciers) accounts for 3% of the total, and warming of the continents 3%. Warming of the atmosphere makes up the remaining 1%.”
(Ref: Contribution from Working group I; On the scientific basis; to the fifth assessment report by IPCC)
The heat capacity of the oceans is about 1000 times the heat capacity of the atmosphere. This means that an amount of energy which would be sufficient to warm the atmosphere by 1 K would only be sufficient to warm the oceans by 0.001 K.
This further means that any lack of warming of the atmosphere can be excused by claiming a minuscule change in the temperature of the oceans, a change so minuscule that it cannot be measured. If we add to it that there does not exist a reliable historical temperature record of the oceans, it becomes clear that the global warming theory put forward by the United Nations isn’t falsifiable.
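That 1000-to-1 ratio follows directly from the masses and specific heats; a quick check with round textbook values (treat the constants as approximations):

m_atm, cp_atm = 5.1e18, 1005.0   # atmosphere: mass in kg, specific heat in J/(kg K)
m_oce, cp_oce = 1.4e21, 4186.0   # global ocean: mass in kg, specific heat in J/(kg K)

energy_1K_atm = m_atm * cp_atm   # joules needed to warm the atmosphere by 1 K
dT_ocean = energy_1K_atm / (m_oce * cp_oce)
print(dT_ocean)                  # ~0.0009 K: the 0.001 K quoted above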
It is then time to turn to Karl Popper for a take on scientific theories. Karl Popper was the mastermind behind the modern scientific method, Popper’s empirical method. Quotes are from his book “The Logic of Scientific Discovery”:
http://strangebeautiful.com/other-texts/popper-logic-scientific-discovery.pdf
(First 26 pages should do, easy reading, and soothing – from a master mind)
“But I shall certainly admit a system as empirical or scientific only if it is capable of being tested by experience. These considerations suggest that not the verifiability but the falsifiability of a system is to be taken as a criterion of demarcation. In other words: I shall not require of a scientific system that it shall be capable of being singled out, once and for all, in a positive sense; but I shall require that its logical form shall be such that it can be singled out, by means of empirical tests, in a negative sense: it must be possible for an empirical scientific system to be refuted by experience.”
In short – if it isn´t falsifiable, if no testable and falsifiable predictions are made, it isn´t scientific.

Gloateus Maximus
Reply to  Science or Fiction
November 6, 2015 5:43 pm

The hypothesis of man-made, GHG-driven catastrophic global warming does make predictions, which have repeatedly been shown false.
The air has not warmed more than, and before, the surface, as the hypothesis requires.
No tropical tropospheric hot spot, as per models.
No global warming for the first 32 years after WWII, despite prediction of warming from rapidly rising CO2 levels.
No global warming since the ’90s, despite even more rapidly rising CO2 levels.
For starters.
Massive fail. Falsified. Big time.

Gloateus Maximus
Reply to  Gloateus Maximus
November 6, 2015 5:44 pm

PS:
Major parts of the globe cooling, such as Antarctica, despite supposedly well-mixed CO2.

Science or Fiction
Reply to  Gloateus Maximus
November 7, 2015 12:49 am

I think the main problem is the failure of the Intergovernmental Panel on Climate Change to search for and acknowledge falsifying experiences.
Again some words from Karl Popper, the mastermind behind the modern scientific method, the hypothetico-deductive method (Popper called it the empirical method):
“… it is always possible to find some way of evading falsification, for example by introducing ad hoc an auxiliary hypothesis, or by changing ad hoc a definition. It is even possible without logical inconsistency to adopt the position of simply refusing to acknowledge any falsifying experience whatsoever. Admittedly, scientists do not usually proceed in this way, but logically such procedure is possible”
“the empirical method shall be characterized as a method that excludes precisely those ways of evading falsification which … are logically possible. According to my proposal, what characterizes the empirical method is its manner of exposing to falsification, in every conceivable way, the system to be tested. Its aim is not to save the lives of untenable systems but … exposing them all to the fiercest struggle for survival.”
Has the United Nations IPCC added hypotheses in ad hoc manners?
Oh yes:
Kevin Trenberth introduced the ad hoc hypothesis that the expected warming of the atmosphere went into the oceans:
“Well, I have my own article on where the heck is global warming?…The fact is that we can’t account for the lack of warming at the moment and it is a travesty that we can’t.”
– Kevin E. Trenberth
(Dr. Kevin Trenberth was a lead author of the IPCC’s 2nd, 3rd and 4th Assessment Reports.)
In his paper, Trenberth and collaborators argue that the ‘missing’ heat is sequestered in the ocean, below 700 m. The paper was called “Distinctive climate signals in reanalysis of global ocean heat content” (Geophysical Research Letters, first published 10 May 2013).
I think the phrase “scientists do not usually proceed in this way” is key to understanding the misconduct by the IPCC. The United Nations did not create a scientific body; the United Nations created a biased beast, the Intergovernmental Panel on Climate Change, on which the United Nations enforced:
– a mission
– the unscientific principle of consensus
– an approval process and organization principle which, by its nature, diminishes dissenting views.
Ref: http://www.ipcc.ch/pdf/ipcc-principles/ipcc-principles.pdf
We are not dealing with scientists; we are dealing with justificationists and inductivists.
Justificationists and inductivists will not look for falsifying experiences, and if a falsifying experience is presented to them they will start looking for ad hoc excuses.
We can’t solve problems by using the same kind of thinking we used when we created them.
– Albert Einstein

Science or Fiction
Reply to  Gloateus Maximus
November 7, 2015 2:15 am

Correction: the paper I referred to was not the original paper where the ad hoc excuse was introduced, but its Section 1 (Introduction) gives a good overview of how the issue was approached:
http://www.cgd.ucar.edu/cas/Trenberth/website-archive/trenberth.papers-moved/Balmaseda_Trenberth_Kallen_grl_13.pdf
“increasing greenhouse gases should have led to increasing warming. However, sea surface temperature increases stalled in the 2000s and this is also reflected in upper ocean heat content for the top 700 m in several analyses. Although the energy imbalance from 1993 to 2003 could be accounted for, it was not possible to explain the energy imbalance from 2004–2008. This led to the concept of “missing energy”.” (References and acronyms are removed for clarity)
Clearly – the falsifying experience lead to a search for excuses and ad hoc hypothesis.
So by the United Nations’ theory, energy is supposed to:
– be trapped by CO2 in the atmosphere, yet fail to warm it
– pass through the upper 700 meters of the oceans without warming them
– hide in the deep oceans below 700 meters
(where we don’t have proper data and where it cannot be measured)
“Alice laughed: “There’s no use trying,” she said; “one can’t believe impossible things.”
“I daresay you haven’t had much practice,” said the Queen. “When I was younger, I always did it for half an hour a day. Why, sometimes I’ve believed as many as six impossible things before breakfast.”
– Alice in Wonderland.

T-Braun
November 6, 2015 8:06 am

“More is different”
Also,
“Less is more” (Ludwig Mies Van Der Rohe (1886-1969))
Therefore,
“Less is different” (more or less)

AndyG55
November 6, 2015 8:08 am

Werner, you say….
“The UAH average anomaly so far for 2015 is 0.225. This would rank it as 3rd place. 1998 was the warmest at 0.483. The highest ever monthly anomaly was in April of 1998 when it reached 0.742. The anomaly in 2014 was 0.188 and it was ranked 5th.”
I have 0.245 for UAH 2015 to date (end of October) 3rd place
On a “year to end of October” basis, which is the only comparison one should really make..
I get 1998 anomaly as 0.543, and 2014 as 0.180 in 7th place.
Just wondering why our values differ?
Here’s what I get on a “Year to end October” basis from UAH global
1998 0.543
2010 0.386
2015 0.245
2002 0.223
2005 0.214
2007 0.195
2014 0.180
2003 0.169
2013 0.150
2006 0.115

Werner Brozek
Reply to  AndyG55
November 6, 2015 9:06 am

The UAH average anomaly so far for 2015 is 0.225.

This was to the end of September.

I have 0.245 for UAH 2015 to date (end of October) 3rd place

I agree. (Hadcrut4 is very slow! That is why RSS is out of date.)
As for my other numbers, they are all 12 month averages:
UAH6.0beta3
1st 1998 0.482
2nd 2010 0.344
3rd 2002 0.213
4th 2005 0.200
5th 2014 0.188
6th 2003 0.184
7th 2007 0.162
8th 2013 0.140
9th 2006 0.115
10th 2001 0.115
11th 2009 0.102

AndyG55
Reply to  Werner Brozek
November 6, 2015 9:18 am

Ok, our UAH V6b3 whole year numbers match reasonably well. 🙂
1998 0.483
2010 0.343
2002 0.213
2005 0.201
2014 0.187
2003 0.184
2007 0.161
2013 0.139
2001 0.116
2006 0.116
2009 0.100

emsnews
Reply to  Werner Brozek
November 6, 2015 11:54 am

And what a TINY sample this is! Which makes the entire thing rather silly.

November 6, 2015 8:12 am

I think a good example of Self-organization in chaos is the Hilsch vortex tube and I have always wondered if something similar was going on in the earths climate.
“The hilsch vortex tube, cools and heats air at the SAME time with no moving parts, and NO electricity. cool huh? it’s quite simple, and only a matter of getting the dimensions right! Not to mention the ability to produce EXTREME temperatures! all that’s needed is compressed air!”
http://www.instructables.com/id/The-Hilsch-vortex-tube/

knr
November 6, 2015 8:12 am

I would add another one: we must be able to measure accurately that upon which we are making judgement.
You simply cannot understand the nature of any change if you cannot measure that change in a manner that has scientific validity; you can only ‘guess’ it.
Science 101: no matter how fancy your model or theory is, it comes down to your ability to see and know things.
