No Consensus: Earth’s Top of Atmosphere Energy Imbalance in CMIP5-Archived (IPCC AR5) Climate Models

Guest Post by Bob Tisdale

This post provides an initial look at climate model simulations of the top of the atmosphere (TOA) energy budget and its three components. It includes the outputs of the climate models stored in the CMIP5 archive (used by the IPCC for the 5th Assessment Report).

There are astonishing differences in the modeled estimates of the past, present and future imbalances and of the three components that make up the top of the atmosphere (TOA) energy budget. That is, the models do not agree on the magnitude of Earth’s TOA energy imbalance, and they disagree even more widely on the calculated components that make up that budget, on how those components evolved in the past, and on how they may evolve in the future. All of which suggests there is little agreement among the models on the modeled processes and physics that contribute to global warming now, contributed to it in the past, and might contribute to it in the future.


For those new to this discussion, the Earth is said to have an Energy Budget. Trenberth et al. (2009) Earth’s Global Energy Budget provided a reasonably easy-to-understand discussion of the factors that impact that budget at the top of the atmosphere. My Figure 1 is Figure 1 from Trenberth et al. (2009). Focus your attention on the values of the three components at the top of the atmosphere. Those factors balance. That is, the energy from the sun (incoming solar radiation, a.k.a. Incident Shortwave Radiation) is equal to the sum of the sunlight reflected back to space (reflected solar radiation, a.k.a. Outgoing Shortwave Radiation) and the infrared radiation the Earth emits to space (Outgoing Longwave Radiation). The hypothesis of human-induced global warming says that manmade greenhouse gases cause an imbalance in that budget.

Figure 1

Note that the value for the incoming solar radiation (341.3 watts/m^2) is much less than the value you are used to seeing for Total Solar Irradiance (TSI) at the top of the atmosphere, which is about 1366 watts/m^2. Why the lower number in the energy budget? The sun only shines on half the Earth at one time and, because the Earth is spherical, sunlight is not distributed evenly across its surface. Geometrically, the Earth intercepts sunlight over its cross-sectional disc (area πr²), but that energy is spread over its full spherical surface (area 4πr²), so the average incident shortwave (solar) radiation is ¼ of the TSI value. See the NASA Earth Observatory webpage Incoming Sunlight for a further discussion.
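That disc-to-sphere geometry is easy to verify with a few lines of arithmetic. A minimal sketch (the 1366 watts/m^2 TSI value is the approximate figure quoted above; the small difference from the 341.3 watts/m^2 in Figure 1 reflects the TSI estimate used by Trenberth et al.):

```python
# Why the energy-budget value is TSI / 4: the Earth intercepts sunlight over
# its cross-sectional disc (pi * r^2) but that energy is averaged over the
# full spherical surface (4 * pi * r^2). The ratio is exactly 1/4.
TSI = 1366.0  # W/m^2, approximate total solar irradiance at top of atmosphere

disc_to_sphere_ratio = 1.0 / 4.0  # (pi r^2) / (4 pi r^2)
incident_shortwave = TSI * disc_to_sphere_ratio

print(round(incident_shortwave, 1))  # 341.5 W/m^2, close to Figure 1's 341.3
```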

We’ll be discussing and illustrating the input values and the climate-model-created values of the components at the top of the atmosphere, and their difference, known as the Energy Imbalance.

Trenberth et al. (2014) Earth’s Energy Imbalance is one of a series of papers that present and discuss the imbalance in Earth’s energy budget. They begin their introduction with:

With increasing greenhouse gases in the atmosphere, there is an imbalance in energy flows in and out of the earth system at the top of the atmosphere (TOA): the greenhouse gases increasingly trap more radiation and hence create warming (Solomon et al. 2007; Trenberth et al. 2009). ‘‘Warming’’ really means heating and extra energy, and hence it can be manifested in many ways. Rising surface temperatures are just one manifestation. Melting Arctic sea ice is another. Increasing the water cycle and altering storms is yet another way that the overall energy imbalance can be perturbed by changing clouds and albedo. However, most of the excess energy goes into the ocean (Bindoff et al. 2007; Trenberth 2009). Can we monitor the energy imbalance with direct measurements, and can we track where the energy goes? Certainly we need to be able to answer these questions if we are to properly track how climate change is manifested and quantify implications for the future.

And we certainly need to look at how climate models attempt to answer those questions.

If you’re new to this discussion, you might be thinking the energy imbalance is a great big number. Sorry to disappoint you. Compared to the amount of sunlight reaching the top of the atmosphere, the imbalance is tiny…really tiny. Trenberth et al. (2014) provide a rough estimate in their Abstract:

All estimates (OHC and TOA) show that over the past decade the energy imbalance ranges between about 0.5 and 1 Wm-2.

The estimates of 0.5 to 1 watts/m^2 (watts per square meter, referenced to Earth’s surface area) are only 0.15% to 0.29% of the estimated 341 watts/m^2 of sunlight at the top of the atmosphere shown in Figure 1.

As you will see in this post, the climate-modeled energy imbalance spans a much larger range, about 10 times the 0.5 watts/m^2 range mentioned by Trenberth et al. (2014). And there is no agreement on how the imbalance was created in the past, or might be created in the future.


The climate models used in this post are from the Coupled Model Intercomparison Project, Phase 5 (CMIP5) archive. The source of the model outputs is the KNMI Climate Explorer, specifically from the Radiation variables on the Monthly CMIP5 scenario runs webpage. The TOA Incident Shortwave Radiation (incoming downward solar radiation) is identified as rsdt on that KNMI webpage, the TOA Outgoing Shortwave Radiation (reflected solar radiation) as rsut, and the TOA Outgoing Longwave Radiation (emitted infrared radiation) as rlut. I’ve used the higher of the middle-of-the-road scenarios, RCP6.0, and downloaded the outputs individually for each model.

I’ve excluded three models: CESM-CAM5 and two IPSL models. There were shifts at 2006 in the TOA Outgoing Longwave Radiation outputs of all three runs of the CESM-CAM5 model (one with a monstrous shift), which skewed the multi-model mean of that metric for that scenario. (I notified KNMI of the problem, and NCAR has since corrected the outputs.) I excluded the two IPSL models because their TOA Incident Shortwave Radiation contains a volcanic aerosol component, while the other models’ does not; the other models account for volcanic aerosols in the Outgoing Shortwave Radiation.

That leaves 21 models, including BCC-CSM1-1, BCC-CSM1-1-M, CCSM4 (6 runs), CSIRO-MK3-6-0 (10 runs), FIO-ESM (3 runs), GFDL-CM3, GFDL-ESM2G, GISS-E2-H p1, GISS-E2-H p2, GISS-E2-H p3, GISS-E2-R p1, GISS-E2-R p2, GISS-E2-R p3, HadGEM2-AO, HadGEM2-ES (3 runs), MIROC5 (3 runs), MIROC-ESM, MIROC-ESM-CHEM, MRI-CGCM3, NorESM1-M, and NorESM1-ME.

For those models with multiple runs, the ensemble members are averaged before being presented in this post and before being included in the multi-model mean.
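That averaging order can be sketched in a few lines. This is an illustration of the procedure described above, not actual CMIP5 output; the model names and numbers are placeholders:

```python
from statistics import mean

# Hypothetical two-model example: runs are averaged into one series per model
# first, and only then do the per-model series enter the multi-model mean,
# so a model with many runs does not dominate the mean.
runs_by_model = {
    "model_a": [[1.0, 2.0], [3.0, 4.0]],  # two runs of a hypothetical model
    "model_b": [[5.0, 6.0]],              # single run
}

def average_runs(runs):
    # element-wise mean across a model's ensemble members
    return [mean(vals) for vals in zip(*runs)]

per_model = {m: average_runs(r) for m, r in runs_by_model.items()}
multi_model_mean = average_runs(list(per_model.values()))
print(multi_model_mean)  # [3.5, 4.5]: each model counts once, however many runs it has
```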


The multi-model mean of the radiative imbalance at the top of the atmosphere is shown in the top graph (Cell a) of Figure 2. The graphs run from 1861 to 2100. The multi-model means of the individual components (Incident Shortwave Radiation, Outgoing Shortwave Radiation and Outgoing Longwave Radiation) are shown in Cells b to d. Listed on each of the graphs is the average value for the period of 1996 to 2015. (Explanation of those years: the historic portions of the simulations run from 1861 to 2005, while the forecasts based on projected RCP6.0 forcings run from 2006 to 2100, so the period of 1996 to 2015 includes the last 10 years of hindcast and the first 10 years of forecast. Those years will serve as our base period for anomalies in later graphs.) The averages are close to the values shown in Figure 1 from Trenberth et al. (2009).

Figure 2

Note the increase in TOA Incident Shortwave Radiation from 1900 to the early 1950s (Cell b). That indicates the modelers are still trying to explain part of the warming in the first half of the 20th Century with a notable increase in the output of the sun, which may or may not have happened. Also notice the decrease in the amplitude of the solar cycle during the 21st Century. Explanation: the modelers use different lengths for the solar cycle in future decades, so the cycles drift farther out of sync with time and their average decreases in amplitude.

Cells c and d present some interesting information about many of the models. The model mean for the outgoing shortwave radiation (Cell c) increases during the hindcast, which indicates, from 1861 to the turn of the century, clouds and volcanic aerosols allowed less sunlight to pass through the TOA to the Earth’s surface. And, even though modeled surface temperatures warmed from 1861 to 2000, outgoing longwave radiation decreased (Cell d). However, also based on the model means, the trends of both metrics reverse during the 21st Century. That is, during the 21st Century, outgoing longwave radiation increases as global surfaces warm. And some of that warming is caused by an assumed future increase in sunlight reaching the Earth’s surface. (That is, if less sunlight is being reflected to space as the future progresses, and if there is no assumed increase in the amount of sunlight reaching the top of the atmosphere, then the sunlight reaching the Earth’s surface is increasing.)

But the multi-model means are not the focus of this post. Our interests are the wide ranges in the model simulations of Earth’s Energy Imbalance at the top of the atmosphere and its components. Let’s start with the incident shortwave radiation.


The incident shortwave radiation at the top of the atmosphere is based on a climate model forcing. The CMIP5 – Modeling Info – Forcing Data webpage provides a link to the SOLARIS website for the recommended solar forcing data. The recommendations can be found here. They recommend a total solar irradiance reconstruction by Judith Lean, and provide very clear instructions for solar cycles in the future…repeat Solar Cycle 23.

Apparently there were different interpretations of those recommendations. See Figure 3, which presents the model outputs of TOA incident shortwave radiation in absolute form. There are three primary groupings: two models from one modeling group start at about 338.5 watts/m^2; a middle grouping of 5 models starts around 340.25 watts/m^2; and the remaining 14 models start at about 341.5 watts/m^2.

Figure 3

You’ll note that I’ve listed the average, maximum and minimum values for the base period of 1996 to 2015. There is almost a 3 watts/m^2 spread in the TOA incident shortwave radiation among the models during our base period.

In Figure 4, the climate model outputs of TOA incident shortwave radiation are presented as anomalies, with the base years of 1996-2015. With the model outputs in anomaly form, we can better see the similarities and differences in the curves. Two models from one modeling group really stand out: they’re the models that started with the lowest absolute incident shortwave radiation. Those models show a much stronger increase in solar forcing during the hindcast and a curious decrease in incoming sunlight from the early 2000s to 2100. The other models tend to agree with one another during the hindcast (1861-2005), but then run out of sync during the projections.
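Converting an absolute series to anomalies is a one-step operation: subtract the average over the base years from every value. A minimal sketch with a made-up series (the values here are placeholders, not model output):

```python
from statistics import mean

# Hypothetical absolute series: ~340 W/m^2 with a small linear drift.
years = list(range(1990, 2021))
series = [340.0 + 0.01 * (y - 1990) for y in years]

# Base-period average over 1996-2015, the base years used in this post.
base = mean(v for y, v in zip(years, series) if 1996 <= y <= 2015)
anomalies = [v - base for v in series]

# The anomalies now average ~0 over 1996-2015, so models with very different
# absolute levels can be compared on a common footing.
```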

Figure 4

Not illustrated: At least one of the modeling groups appears to include a Solar Cycle 24 of lower amplitude, and then repeat Solar Cycles 22, 23 and 24 into the future.


As a reminder, the outgoing shortwave radiation represents a portion of the incoming sunlight that’s reflected back to space, primarily by clouds and volcanic aerosols. Where incident shortwave radiation is basically an input, outgoing shortwave radiation is a model-calculated value.

Figure 5 presents in absolute terms the TOA outgoing shortwave radiation of the 21 individual CMIP5-archived models with historic and RCP6.0 forcings. The thing that stands out most is the wide range of model-manufactured values. Based on the 1996-2015 averages, there’s about a 10 watts/m^2 span from the model with the minimum average to the model with the maximum, while there was only a 3 watts/m^2 span in the amount of incoming sunlight (Figure 3).

Figure 5

The upward spikes show the impacts of volcanic aerosols on the outgoing shortwave radiation.

Figure 6 presents the long-term hindcasts and projections of outgoing shortwave radiation, but this time in anomaly form, referenced to the 1996-2015 base years. The use of anomalies allows a better visual comparison of the modeled changes before and after the transition from hindcast to forecast. Obviously, there are very wide ranges in the hindcasted and forecasted trends in model-simulated outgoing shortwave radiation. Again, note how the model mean shows increasing outgoing shortwave radiation in the past and a decrease in the future.

Figure 6

While the model mean of the outgoing shortwave radiation increases at a rate of about +0.08 watts/m^2/decade from 1861 to 2005, some models show a sizeable increase and others show little to no trend. Only one model shows a decline during the hindcast, and its trend is relatively flat at about -0.01 watts/m^2/decade. The model with the fastest increase from 1861-2005 has a trend of about +0.16 watts/m^2/decade. In other words, there’s about a 0.17 watts/m^2/decade spread in the trends of hindcast outgoing shortwave radiation.

Looking at the projections for 2006 to 2100, the model mean of outgoing shortwave radiation has a negative trend of about -0.21 watts/m^2/decade. But some models show outgoing shortwave radiation increasing slightly in the future, while others show it decreasing at a much greater rate. The greatest negative trend is about -0.48 watts/m^2/decade. At the other end of the wide spectrum is a model with a relatively slight positive trend of +0.02 watts/m^2/decade. Bottom line: there’s about a 0.5 watts/m^2/decade spread in the projected future outgoing shortwave radiation, with some models showing little change and others a sizeable decrease.
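The watts/m^2-per-decade figures quoted above are ordinary least-squares slopes, scaled from per-year to per-decade. A sketch of that calculation on a synthetic series constructed to have a known slope (the data are placeholders, not model output):

```python
# Ordinary least-squares slope of values vs. years, expressed per decade.
def trend_per_decade(years, values):
    n = len(years)
    ybar = sum(years) / n
    vbar = sum(values) / n
    slope_per_year = sum((y - ybar) * (v - vbar) for y, v in zip(years, values)) \
        / sum((y - ybar) ** 2 for y in years)
    return slope_per_year * 10.0  # W/m^2 per year -> W/m^2 per decade

# Synthetic projection-period series declining at -0.021 W/m^2 per year.
years = list(range(2006, 2101))
values = [100.0 - 0.021 * (y - 2006) for y in years]

print(round(trend_per_decade(years, values), 2))  # -0.21, matching the model-mean figure above
```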

In Figure 7, I’ve smoothed the model outputs of outgoing shortwave radiation with 5-year running-mean filters to help show the differences in the shapes of the individual model curves.

Figure 7
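The 5-year running-mean filter used here (and in later figures) replaces each point with the centered average of itself and its neighbors. A minimal sketch, assuming annual data and leaving the two endpoints on each side unsmoothed:

```python
# Centered 5-point running mean; endpoints are passed through unchanged.
def running_mean_5yr(values):
    half = 2  # two points on each side of the center
    out = list(values)
    for i in range(half, len(values) - half):
        window = values[i - half:i + half + 1]
        out[i] = sum(window) / len(window)
    return out

# Interior points become the average of their 5-point window;
# the first and last two values are untouched.
print(running_mean_5yr([1, 2, 3, 4, 5, 6, 7]))  # [1, 2, 3.0, 4.0, 5.0, 6, 7]
```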

One might conclude that a lot of different and contradicting assumptions go into the simulations of Earth’s climate. There certainly is little agreement among modeling groups about how sunlight impacted the energy imbalance in the past or might impact it in the future.


Reminder: the TOA outgoing longwave radiation component of the Earth’s budget represents the infrared radiation emitted to space. Like outgoing shortwave radiation, outgoing longwave radiation is a model-calculated value, not an input like incident shortwave radiation.

The model simulations of outgoing longwave radiation in absolute form are shown in Figure 8. They too show a massive spread in simulated values. The 10 watts/m^2 difference during our base period indicates there is little agreement among the models on how much infrared radiation is presently being emitted by Earth to space.

Figure 8

The model simulations of outgoing longwave radiation are presented as anomalies (referenced to the base years of 1996 to 2015) in Figure 9. As noted earlier, the model mean shows outgoing longwave radiation decreasing during the hindcast but increasing during the forecast.

Figure 9

For the hindcast period of 1861 to 2005, the model mean of the outgoing longwave radiation declines at a rate of about -0.1 watts/m^2/decade. The model with the slowest decline during the hindcast has a trend of about -0.02 watts/m^2/decade, while the model with the fastest decline from 1861-2005 has a trend of about -0.17 watts/m^2/decade. That is, there’s a spread of about 0.15 watts/m^2/decade during the hindcast.

Looking at the projections for 2006 to 2100, the model mean of outgoing longwave radiation has a positive trend of about 0.1 watts/m^2/decade. But some models show outgoing longwave radiation decreasing slightly in the future, while others show it increasing. The greatest negative trend is about -0.1 watts/m^2/decade. At the other end of the wide spectrum is a model with a positive trend of 0.35 watts/m^2/decade. Bottom line: there’s an approximate 0.45 watts/m^2/decade spread in the trends of projected future outgoing longwave radiation, with some models showing an increase and others a decrease.

Figure 10 presents the modeled outgoing longwave radiation anomalies smoothed with 5-year filters, to provide a clearer view of the differences in the model simulations.

Figure 10


If you’re thinking that the wide ranges in the hindcast trends and, similarly, in the forecast trends of outgoing longwave and shortwave radiation have to do with the modeled representations of clouds, you’re likely correct.

From Dolinar et al. (2014): Evaluation of CMIP5 simulated clouds and TOA radiation budgets using NASA satellite observations. Their abstract begins:

A large degree of uncertainty in global climate models (GCMs) can be attributed to the representation of clouds and how they interact with incoming solar and outgoing longwave radiation. In this study, the simulated total cloud fraction (CF), cloud water path (CWP), top of the atmosphere (TOA) radiation budgets and cloud radiative forcings (CRFs) from 28 CMIP5 AMIP models are evaluated and compared with multiple satellite observations from CERES, MODIS, ISCCP, CloudSat, and CALIPSO.

They then go on to describe the results of their study of AMIP models, which may help future CMIP models. Dolinar et al. (2014) end the abstract with (my brackets):

Through a comprehensive error analysis, we found that CF [total cloud fraction] is a primary modulator of warming (or cooling) in the atmosphere. The comparisons and statistical results from this study may provide helpful insight for improving GCM simulations of clouds and TOA radiation budgets in future versions of CMIP.

Basically, Dolinar et al. acknowledge that a large source of the uncertainties in outgoing longwave and shortwave radiation in GCMs is clouds.


If you were to scroll up to Figures 9, 6 and 4, you’d note that the scales of the anomaly graphs are very different for the three components of the top-of-the-atmosphere energy imbalance. That is, the differences in the simulated TOA outgoing shortwave radiation are so great that the y-axis on Figure 6 spans 12 watts/m^2, while the y-axis for TOA incident shortwave radiation anomalies in Figure 4 only spans 0.7 watts/m^2. To put those metrics into perspective, for Animation 1, I’ve used a common scale for the spaghetti graphs of the model outputs. And to help minimize the model noise and show the differences between the models, I’ve smoothed them all with 5-year running-average filters.

Animation 1

That leads us to…


As you’ll recall, Earth’s energy imbalance is determined by subtracting the outgoing shortwave radiation (reflected sunlight) and the outgoing longwave radiation (emitted infrared radiation) from the incident shortwave radiation (incoming sunlight). In other words, for the climate models, we’re basically subtracting two computer-calculated values from a computer input.
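That subtraction, written with the KNMI variable names introduced earlier, takes one line. The numbers below are the rounded values from Figure 1 of Trenberth et al. (2009), not model output:

```python
# TOA energy imbalance = incoming solar - reflected solar - outgoing longwave,
# using the KNMI/CMIP5 variable names: rsdt, rsut, rlut.
rsdt = 341.3  # incident shortwave, W/m^2 (Trenberth et al. 2009, Figure 1)
rsut = 101.9  # outgoing (reflected) shortwave, W/m^2
rlut = 238.5  # outgoing longwave, W/m^2

imbalance = rsdt - rsut - rlut
print(round(imbalance, 1))  # 0.9 W/m^2 retained by the climate system
```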

We’ll start with the energy imbalance in anomaly form. We presented the outgoing shortwave radiation (Figure 6) and outgoing longwave radiation (Figure 9) as anomalies to show the differences between the individual models. For the energy imbalance in Figure 11, however, the anomalies show how similar the curves are. That is, there were amazing differences in the basic curve shapes and trends of the individual model simulations of outgoing longwave and shortwave radiation, but, remarkably though not unexpectedly, the curves of the modeled TOA energy imbalance are much more similar in shape.

Figure 11

Figure 12 presents the TOA energy imbalance anomalies smoothed with 5-year filters.

Figure 12

Animation 2 is the same as the “perspective animation” (Animation 1) but it also includes the energy imbalance anomalies smoothed with 5-year filters.

Animation 2

Again, we presented the modeled energy imbalances to show the similarities in the curves, but our primary focus is the modeled TOA energy imbalances in absolute form.

And now the punchline:


Figure 13 presents the simulated energy imbalance in absolute form. There is a 5 watts/m^2 span between models for the base period energy imbalances. Four of the models’ energy imbalances for the base period are negative.

Figure 13

That range in modeled energy imbalances was so great that I not only double-checked all of the spreadsheets and downloads, but also cross-checked the extremes. The extremes in the modeled energy imbalances come from two modeling groups: the two highs from one and the two lows from the other. To cross-check the results, I downloaded the outputs for the 4 model runs from those two groups (2 each), but this time using the outputs of the historic/RCP8.5 (worst-case) scenario. See Figure 14. The spread is a tick higher with the RCP8.5 scenario. As one would expect, the RCP8.5 scenario also causes the energy imbalances to rise faster in the future than with RCP6.0.

Figure 14

Figures 13 and 14 reminded me of two things:

First is a statement in Hansen et al. (2011) The Earth’s Energy Imbalance and Implications. James Hansen, as you’ll recall, is the former (retired) director of GISS. In that paper, they discussed a problem with the satellite-measured energy imbalance at the top of the atmosphere and how the climate science community worked around it (my boldface):

The precision achieved by the most advanced generation of radiation budget satellites is indicated by the planetary energy imbalance measured by the ongoing CERES (Clouds and the Earth’s Radiant Energy System) instrument (Loeb et al., 2009), which finds a measured 5-yr-mean imbalance of 6.5Wm−2 (Loeb et al., 2009). Because this result is implausible, instrumentation calibration factors were introduced to reduce the imbalance to the imbalance suggested by climate models, 0.85Wm−2 (Loeb et al., 2009).

Phrased differently, because the satellites were inaccurate, climate scientists had to rely on the outputs of climate models and assume they were correct.

If a 6.5 watts/m^2 energy imbalance is considered “implausible”, what about model-simulated energy imbalances of -2.2 watts/m^2 and +2.8 watts/m^2? Are those values implausible as well? If so, why are those models used by the IPCC? Didn’t they bother to check whether the models presented plausible simulations of Earth’s energy imbalance?

What about the four models that show a negative imbalance during our base period of 1996-2015? If those models are correct, then the hypothesis of human-induced global warming has a very big problem. A negative imbalance indicates that presently more energy is being reflected and emitted by the planet than is being received from the sun…and that our emissions of greenhouse gases are returning the Earth to a balanced energy budget.

On the other hand, recall that Trenberth et al. (2014) gave us an approximate range for the energy imbalance of 0.5 to 1.0 watts/m^2. There are 5 models that produce an energy imbalance greater than 1.2 watts/m^2 for the base period of 1996-2015. If they’re right, then even more heat is unaccounted for than presently thought. They’ll have to call out more search parties to look for all of that missing heat.

The second thing the large range of modeled energy imbalances reminded me of: there is a similar large range in the modeled global surface temperatures in absolute form. See the graph here from the post On the Elusive Absolute Global Mean Surface Temperature – A Model-Data Comparison. The span of the modeled surface temperatures in 2010 was more than 3 deg C, and that’s about 3 times greater than Earth’s surfaces have warmed since pre-industrial times according to the (much-fiddled-with) observations-based data.

Not long after I wrote the “elusive” post, Gavin Schmidt (current director of GISS) made a couple of curious statements in his blog post Absolute temperatures and relative anomalies at RealClimate. We discussed them in the post Interesting Post at RealClimate about Modeled Absolute Global Surface Temperatures. The statement by Gavin Schmidt that bears on this discussion (my boldface):

Second, the absolute value of the global mean temperature in a free-running coupled climate model is an emergent property of the simulation. It therefore has a spread of values across the multi-model ensemble. Showing the models’ anomalies then makes the coherence of the transient responses clearer. However, the variations in the averages of the model GMT values are quite wide, and indeed, are larger than the changes seen over the last century, and so whether this matters needs to be assessed.

“…needs to be assessed” indicates they hadn’t bothered to do it by then…and likely still haven’t.

Climate models have been used by the political entity called the IPCC for almost 2 ½ decades to support a political agenda known as the UNFCCC. In those 2 ½ decades, apparently the climate science community hasn’t bothered to check to see whether it matters that the spread in absolute modeled global mean temperature is three times greater than the warming that’s taken place since pre-industrial times.

Now, let’s consider the absolute values of the radiative imbalance shown in Figure 13. The model mean shows an average energy imbalance of 0.79 watts/m^2 for the base period of 1996-2015, while Earth’s energy imbalance (based on the model mean) was about 0.10 watts/m^2 for the 20 pre-industrial years of 1861-1880. So, based on the model mean, Earth’s energy imbalance has increased roughly 0.69 watts/m^2 since pre-industrial times. But the range of the modeled energy imbalance (based on the CMIP5-archived models with historic and RCP6.0 forcings) during the base period spans about 5 watts/m^2. That’s more than 7 times greater than the 0.69 watts/m^2 modeled increase. Can we presume that the climate science community has not bothered to assess whether that matters, too? Maybe they’ve been avoiding it.

We’ve already mentioned a few reasons why it does matter. And what matters even more is that there is…


Much of this post discussed and illustrated that there was no agreement among climate models about what caused the Earth’s assumed energy imbalance and how those factors might change in the future. To help drive that point home, I’ve prepared Animation 3. It includes the energy balance and its three components in anomaly form for each of the models included in this post. All outputs have been smoothed with 5-year filters to minimize the noise inherent in the models.

Animation 3


Let’s return to the quote from the introduction of Trenberth et al. (2014) Earth’s Energy Imbalance:

With increasing greenhouse gases in the atmosphere, there is an imbalance in energy flows in and out of the earth system at the top of the atmosphere (TOA): the greenhouse gases increasingly trap more radiation and hence create warming (Solomon et al. 2007; Trenberth et al. 2009). ‘‘Warming’’ really means heating and extra energy, and hence it can be manifested in many ways. Rising surface temperatures are just one manifestation.

Climate models have been programmed to raise modeled surface temperatures and subsurface ocean temperatures as the simulated energy imbalance increases in response to the numerical representations of manmade greenhouse gases and other climate forcings. Climate models have also been programmed to create number-crunched changes in numerous other metrics so that they show rising sea levels, decreasing sea ice, ice sheet and glacier mass, increasing precipitation, and so on as the energy imbalance increases. As a result, there is a general agreement among the models that, as the energy imbalance increases in value, the Earth will gain heat and extra energy…and that the heat and extra energy will be “manifested in many ways”.


As we’ve illustrated and discussed in this post, looking at the three factors that make up the TOA energy imbalance, there is no agreement among the climate models on the values of past, present and future outgoing shortwave and longwave radiation. As a result, there is no agreement about:

  • what enhanced the warming we’ve experienced to date,
  • what will enhance any future warming, and
  • what the absolute values of the energy imbalance were in the past, are presently and will be in the future.

Climate models have been programmed to show global warming and all of its manifestations in response to rising energy imbalance values. But modeling groups go through very different gyrations (by manipulating clouds?) with the two computer-calculated components of the Earth’s energy budget at the top of the atmosphere in order to achieve that warming…which indicates there is no consensus on how Earth’s atmosphere and oceans have responded in the past, are responding now, and will respond in the future to manmade greenhouse gases. No consensus whatsoever.


My thanks to Judith Curry for suggesting papers that helped me better understand this topic and to Willis Eschenbach for taking a look at a draft of this post.

August 11, 2015 2:29 am

“No consensus whatsoever” In other words, biased speculation.

Reply to  andrewmharding
August 11, 2015 11:09 am

I disagree. If one looks at the 26 CMIP5 coupled climate models, one finds 10 physical climate system models and 16 Earth system models. The physical models do not include a coupled carbon cycle.
One also finds various model resolutions, as the 26 are from 18 institutions in 11 countries. Model outputs require interpolation to a regular grid before actual analysis.
Consensus isn’t the issue; the issue is how each model and its interpolation reflects the observed, and the combined bias error.
Physical climate system models outperform Earth system models, possibly because they lack the coupled carbon cycle.
Physical climate models consist of interactions between ocean, land, atmosphere, and sea ice.
There is consensus over aspects in each model, and models can share the same code, yet none accurately reflects the observed.
Eliminating the carbon cycle until the models get closer to the physics would be logical?

Bernard Lodge
Reply to  andrewmharding
August 11, 2015 9:41 pm

Air molecules emit IR radiation *isotropically*, in other words, equally in all directions. This means that for any given ‘back radiation’ of IR, there will be equal ‘outgoing radiation’ into space. Two points:
1. Trenberth’s global energy balance diagram is wrong because it shows back radiation of 333 W/m^2 but outgoing radiation of only 239 W/m^2.
2. Since CO2 molecules radiate isotropically, both downwards to earth and upwards to space, any increase in CO2 will obviously cause an increase in IR radiation in both directions. Any increase in upward radiation means a loss to the earth as a whole. In other words, an increase in CO2 will cool the earth.

Reply to  Bernard Lodge
August 12, 2015 1:26 pm

Neither of your “points” is relevant.
1. The net energy flux transferred upwards by LWR is only 396 - 333 = 63 W/m^2, so nothing’s wrong with that.
2. It’s not that simple. What’s radiated upwards at a given altitude H doesn’t simply escape into space, since there is “saturation” and opacity due to the CO2 present at upper levels, which readily reabsorbs that radiation. Only at the effective emission altitude, somewhere near the top of the troposphere, does the radiation escape to space. And since this effective emission altitude necessarily increases with increasing CO2, and both temperature and the number of molecules per unit volume in the atmosphere (CO2 as others) decrease with altitude, the overall LWR into space does not increase but indeed decreases, in agreement with calculations.
What’s wrong with CAGW alarmism is not to be found here.

Reply to  Bernard Lodge
August 12, 2015 2:54 pm

Air molecules (at least the N2 and O2) emit nothing into space or back to the surface. Virtually all atmospheric emissions (up or down) come from either GHG molecules re-emitting a photon or the BB radiation from the non gaseous water in clouds.. Note that GHG molecules re-emit photons either upon collision (most likely) or upon absorption of another photon (less likely). O2 and N2 have no significant absorption lines in the LWIR spectrum relevant to surface and planet emissions, nor are there any GHG molecule state transitions that convert vibratinal energy into the linear kinetic energy of a colliding molecule, at least within the constraints of the energies involved, thus O2 and N2 are irrelevant to the radiant energy balance of the planet.
The preoccupation with the temperature of the atmosphere is an unnecessary complication when you consider that if the Earth had only an O2/N2 atmosphere, it would absorb or emit no photons, still have a lapse rate and vertical temperature profile, yet would have an effective emissivity of 0 and contribute nothing to the energy leaving the planet.
A collection of molecules becomes a black body quantified by a non zero emissivity only in the liquid or solid state as extreme collisional broadening morphs absorption/emission lines into a continuum owing to far more degrees of freedom for the allowed electron states. Other than the spectral properties and relevance of an emissivity, line absorption and emission follows the same basic rules as black body radiation, for example, COE and the ratio between the area of the absorbing surface and emitting surface must be accounted for.
Trenberth’s radiant energy balance is incorrect because he includes non-radiant energy in the analysis in order to provide the wiggle room to support the impossible. That is, he conflates energy transported by photons with energy transported by matter. While joules are joules, only radiant energy can leave the planet; thus, in the final analysis, the non-radiant energy entering the atmosphere (latent heat, thermals, etc.) can only be returned to the surface. If you calculate the radiant surface energy that does not pass through the transparent window of the atmosphere (including blocking by clouds), you will find that about half of this absorption is required to make up the difference in what is required to leave the planet, and the remaining half makes up the difference between the 240 W/m^2 of post-albedo input power and the radiant emissions of the surface at its average temperature. The average 50/50 split is expected since energy enters the atmosphere across half the area (the bottom) over which it’s emitted (top and bottom). Note that Trenberth also underestimates the power passing through the transparent window by about a factor of 2 and, when pressed to justify his value, he cannot.
The planet is almost always out of balance. During winter months, each hemisphere is emitting more than it’s receiving, and during summer months it is emitting less, while asymmetries between the hemispheres prevent these effects from completely cancelling out, although the planet is in balance twice per year.

Reply to  Bernard Lodge
August 13, 2015 1:09 am

CO2isnotevil says:
…only radiant energy can leave the planet, thus in the final analysis, the non radiant energy entering the atmosphere (latent heat, thermals, etc.) can only be returned to the surface.
Not true.
Convection (and latent heat) must of course be treated on the same level as radiative transport, because both transport heat upwards from the earth’s surface and therefore heat the upper troposphere.
Thus both heat the upper layers of the troposphere that can effectively radiate into space (with 15 micrometre photons via CO2, for instance), because only at these altitudes can those photons escape into space, no longer being absorbed by CO2 molecules present at higher altitudes.
And since the number of those photons radiated into space increases with the temperature of the effective emission layer in the upper troposphere, the latter actually radiates into space all the heat brought up from the lower troposphere, whether that transport is achieved by means of convection, latent heat or radiation.

Reply to  Bernard Lodge
August 13, 2015 12:10 pm

Non-radiant energy can contribute to the emissions of the planet if and only if it heats dust or non-gaseous water in the atmosphere, which can then emit it as BB radiation (O2 and N2 emit nothing). However, to the extent that photons leaving the planet originated as non-radiant energy, they just replace photons with a radiant origin that physical laws dictate must be emitted; they are not simply added on top of the planet’s emissions.
The data demonstrates this conclusively and balance is achieved by simply applying physical laws to only the radiant components and ignoring the non radiant energy. The fact that this balances can only mean that an amount of energy equivalent to the non radiant energy entering the atmosphere must be returned to the surface and only to the surface.
Most of the non-radiant energy is latent heat, which cools the surface water it evaporates from and warms the water droplets it condenses upon, ultimately returning this energy to the surface as rain which is warmer than it would have been otherwise. Convection is a circular path and what goes up must come down. Wherever a convective updraft exists, there is an equal and opposite downdraft somewhere else. The point of all this is that there’s not much opportunity for non-radiant energy to participate in the radiative balance of the planet, especially when LTE is considered. For example, in LTE, atmospheric dust and water must be emitting the same amount of energy they are absorbing.
I should point out that the 50/50 split up/down of the radiant energy absorbed by the atmosphere is modulated by clouds above and below nominal, but the long term average is 50/50 as radiative physics requires.

Reply to  Bernard Lodge
August 14, 2015 5:40 am

co2isnotevil says
Convection is a circular path and what goes up must come down.
The matter, namely the gas molecules, must come down again, that’s for sure.
But not the heat, of course… otherwise convection would not be effective in heating the upper troposphere!
You seem to overlook that convection heats not only N2 and O2 but also the CO2 in the upper layers, by means of collisions that establish local thermodynamic equilibrium. Hence the CO2 (and H2O) radiate more into space than they would without convection; in other words, convection does evacuate into deep space the heat it transports upwards.
That said, I agree with your pseudonym…

Reply to  gammacrux
August 14, 2015 9:17 am

Hot air rises and cold air sinks. The upper atmosphere warms in one place as the lower atmosphere cools in another, and there is no net increase in energy added to the atmosphere; hence no net heating, and only the distribution of heat has changed. Any effect of heat leaving the surface to heat the atmosphere and cool the surface (or vice versa) is already accounted for by the average temperature of the surface, thus there is no need to account for this energy again as incremental emissions on top of the BB radiation of the surface at its average temperature.
This is like saying that the effect from doubling CO2 on the system is equivalent to 3.7 W/m^2 of incremental solar power (forcing) and then applying this incremental forcing to a system that has also changed, thus counting the forcing twice. BTW, this is exactly what the consensus is doing.
While the kinetic energy of molecules in motion, including CO2, does manifest temperature, it does not manifest radiant emissions. The only radiant emissions from atmospheric gas molecules come from GHG molecules as they return to a lower energy state and emit a photon. Upper atmosphere CO2 will emit energy as photons only to the extent that it absorbs photons. Since kinetically heated gas molecules do not emit BB radiation, the kinetic temperature of atmospheric gases is irrelevant to the radiant energy balance of the planet.
Naively conflating the kinetic temperature of a gas with radiant energy indicative of the temperature of a distant object is the basic problem. Sure, joules are joules, but there are very strict rules on how energy can be converted between these two forms, and it’s definitely not arbitrary as Trenberth seems to suggest.

Reply to  gammacrux
August 14, 2015 9:31 am

Maybe these concepts will help you understand what I’m talking about.
If not for the dust in galactic clouds of gas, or the photons passing through them, those clouds would be invisible to us. Only the dust within radiates BB radiation; the gases only absorb photons passing through.
If the atmosphere contained only N2 and O2, it would still have a lapse rate associated with its kinetic temperature, but those gases are completely transparent to the LWIR leaving the planet and thus will have no effect on the temperature of the surface or the emissions of the planet.

Reply to  Bernard Lodge
August 14, 2015 10:21 am

co2isnotevil says
Upper atmosphere CO2 will emit energy as photons only to the extent that it absorbs photons
I definitely disagree. That’s not true.
CO2 (or any gas) at temperature T emits 15 micrometre photons, for instance (or its other characteristic photons), whose number increases with temperature according to Planck’s law.
That’s even the basis of radiative thermometry. For instance that’s how Dr. Christy and Dr. Spencer at UAH obtain their tropospheric temperature data by means of satellite measurements of microwave emission from O2 and of course nobody shines microwaves on O2 to make it emit microwaves.
In contrast to what you seem to believe, CO2 molecules in the ground state can be, and of course permanently are, excited into higher vibrational states by collisions with surrounding molecules; they then emit 15 micrometre photons and in this way evacuate heat into space.
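The temperature dependence described here is just Planck’s law. A minimal numerical check (my own sketch; the wavelength and temperatures are assumed round values):

```python
import math

# Planck spectral radiance B(lambda, T): a warmer emitting layer produces
# more 15 u photons, with no incident photons required.
H = 6.626e-34   # Planck constant, J s
C = 2.998e8     # speed of light, m/s
K = 1.381e-23   # Boltzmann constant, J/K

def planck(wavelength_m, t_kelvin):
    """Spectral radiance in W per (m^2 sr m)."""
    a = 2.0 * H * C ** 2 / wavelength_m ** 5
    b = math.expm1(H * C / (wavelength_m * K * t_kelvin))
    return a / b

LAM = 15e-6  # CO2 bending-mode band
for t in (220.0, 250.0, 288.0):
    print(t, planck(LAM, t))  # radiance rises monotonically with temperature
```

The same principle underlies the UAH microwave soundings mentioned above: the O2 emission measured from orbit increases with the temperature of the emitting layer.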

Reply to  gammacrux
August 14, 2015 12:50 pm

Like many on both sides of this issue, you fail to understand that GHG absorption/emission must obey the laws of Quantum Mechanics as well as bulk thermodynamic laws. The pedantic macroscopic behavior you talk about only accounts for the bulk thermodynamic laws.
Where do you think the energy of emitted 15u photons comes from? It can only come from the prior absorption of another 15u photon. If it came from the linear kinetic energy of a CO2 molecule in motion, the resulting velocity would be near or even less than zero (compare the energy of 6u-15u photons with the linear kinetic energy of a typical CO2 molecule in motion). The kinetic energy of two colliding air molecules is far from enough to cause the CO2 molecule to enter an excited state by converting linear kinetic energy into the energy of a state change. When an excited CO2 molecule collides with N2/O2, there is enough energy to result in a finite and relatively large probability that the excited molecule will emit a photon and return to the ground state, but a nearly zero probability that the state energy of the CO2 molecule will increase, even if it started out in the ground state.
The Quantum Mechanical restrictions are that what’s possible is probabilistic and that energy comes in quanta (i.e. the energy of a 15u photon is one quantum), and energy can only be absorbed and emitted a whole quantum at a time. If a 15.1u photon is emitted, the CO2 molecule may speed up a little, or if a 14.9u photon is emitted the particle will slow down a little (assuming it absorbed a 15u photon in the first place), but on average these effects cancel out; moreover, only a small fraction of energy can be exchanged between linear kinetic energy and the energy of a state transition in a single transaction.
You also seem to be missing the fact that the temperature measured by thermometers is the combination of collisions of molecules with the sensor and the absorption by the sensor of photons emitted by distant matter. These two manifestations of temperature are quite different, and conversion between them must follow the rules of Quantum Mechanics. From a bulk thermodynamic point of view, the emission of photons by energized molecules upon collision is indistinguishable from converting that energy into the kinetic energy of molecular translational motion upon collisions. The difference is that while both represent achieving LTE, the energy of photons converted into linear kinetic energy is no longer able to directly contribute to the radiative balance of the planet, and the spectral data precludes this possibility. This is a crucial point that many do not get.

george e. smith
Reply to  Bernard Lodge
August 14, 2015 10:33 am

So gammacrux, you believe that energy that is NOT any kind of EM radiation can actually leave planet earth (in not totally insignificant quantities) and you cite convection as one such mechanism.
So just what is it that is convecting to and from earth to space ??

Reply to  Bernard Lodge
August 15, 2015 2:24 am

co2isnotevil says:
If it came from the linear kinetic energy of a CO2 molecule in motion, the resulting velocity would be near or even less than zero (compare the energy of 6u-15u photons and the linear kinetic energy of a typical CO2 molecule in motion).
Wrong, wrong, wrong….
Definitively wrong.
The kinetic energies of the molecules in a gas at temperature T are broadly distributed according to the Maxwell-Boltzmann distribution. This means that there are always a lot of molecules whose kinetic energy is much larger (or smaller) than the mean energy.
Since the mean translational energy at ambient temperature is 3/2 kT ≈ 1/27 eV (k being Boltzmann’s constant), which is within a factor of a few of the 15 u photon energy (≈ 1/12 eV), there are plenty of molecules in the high-energy tail that move fast enough to excite the relevant vibrational mode of CO2 and so transfer part of their kinetic energy into CO2 vibrations, and subsequently into IR radiation that escapes to space.
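A back-of-envelope version of that estimate (my own figures: I use the 15 u CO2 bending-mode photon, about 0.083 eV, so the numbers differ a little from those quoted in the thread):

```python
import math

# Boltzmann-tail estimate: what fraction of thermal collisions carry enough
# energy to excite the 15 u (bending-mode) vibration of CO2?
K_EV = 8.617e-5               # Boltzmann constant, eV/K
T = 288.0                     # assumed near-surface temperature, K
E_PHOTON_EV = 1.2398 / 15.0   # E = hc/lambda for a 15 u photon, ~0.083 eV

kT = K_EV * T                                  # ~0.025 eV
boltzmann_factor = math.exp(-E_PHOTON_EV / kT)

print(round(kT, 4))                # 0.0248 eV
print(round(boltzmann_factor, 3))  # ~0.036: a few percent of collisions suffice
```

A Boltzmann factor of a few percent, multiplied by the enormous collision rate at tropospheric densities, leaves no shortage of collisionally excited CO2 molecules.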
I suggest you take a good course on statistical mechanics and learn a bit about the basic concepts of the kinetic theory of gases, evolution towards thermodynamic equilibrium, etc.
For instance I suggest Chap. 39-43 in this one

Reply to  gammacrux
August 15, 2015 9:22 am

Sure, there’s always a distribution of energies, but statistically most of the particles do not have the energy required. As I pointed out, Quantum Mechanics is statistical, and the probability of what you claim is happening, while finite, is close to zero. Again, you are blindly applying the kinetic theory of gases (which applies to ideal, rarefied gases) without consideration for the Quantum Mechanics that governs line absorption/emission. Do you understand the basic concept of quantization? It seems that you do not.
The majority of the relevant LWIR energy is between 5u and 20u, whose photons have an energy of between 1E-20 and 4E-20 joules. The majority of the molecules in the atmosphere are travelling between about 350 and 1400 m/sec, which for average N2/O2 corresponds to energies of between 6E-21 joules and 2.4E-20 joules.
If all 2E-20 joules of an average photon (consider 10u ozone absorption) were converted into the kinetic energy of an average N2/O2 molecule, its kinetic energy would have to increase from 1.2E-20 to 3.2E-20 joules, which is well above the high end of molecular energies and very unlikely. Similarly, an average O2/N2 molecule with 1.2E-20 joules would need to give up more energy than it has to supply the 2E-20 joules required to energize a GHG molecule. Again, extremely unlikely.
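The orders of magnitude quoted are easy to check (my own quick sketch; the N2 molecular mass and the 500 m/s thermal speed are assumed round values):

```python
# Order-of-magnitude check of the photon and molecular kinetic energies above.
H = 6.626e-34     # Planck constant, J s
C = 2.998e8       # speed of light, m/s
M_N2 = 4.65e-26   # mass of one N2 molecule, kg

def photon_energy(wavelength_m):
    """Photon energy E = hc/lambda, in joules."""
    return H * C / wavelength_m

def kinetic_energy(speed_m_s):
    """Translational kinetic energy of an N2 molecule, in joules."""
    return 0.5 * M_N2 * speed_m_s ** 2

print(photon_energy(5e-6))    # ~4.0E-20 J, the short-wavelength end quoted
print(photon_energy(20e-6))   # ~1.0E-20 J, the long-wavelength end quoted
print(kinetic_energy(500.0))  # ~5.8E-21 J, a typical thermal KE near 288 K
```

The mean kinetic energy is indeed a few times smaller than a mid-LWIR photon energy; whether the Maxwell-Boltzmann tail bridges that gap often enough is the point in dispute above.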

Reply to  gammacrux
August 15, 2015 10:28 am

Perhaps it would help if you understood the physical mechanism by which rotational/vibrational energy can be converted into the energy of translational motion.
The average air molecule has a diameter of about 3 Angstroms (3E-10 m) and is travelling at roughly 500 m/s, i.e. about 1 Angstrom every 0.2 ps. Most of the molecular collision occurs over about 10 molecular diameters, or about 30 A, so each collision involves a few picoseconds of E-field interaction. A CO2 molecule excited by a 15u photon is vibrating at about a 2E13 Hz rate, for a period of about 5E-14 sec. During the collision, the molecule will have vibrated on the order of a hundred times, but the average effect on the interacting electric fields seen by the colliding molecule is close to zero. It’s not exactly zero, and only the difference between zero and the actual average is what gets converted into linear kinetic energy during any single collision.
This is how collisional broadening works. Collisions add or remove small amounts of energy, and when a photon is emitted its frequency will be a little above or below the frequency of the nominal line. Similarly, a collision coincident with absorption will skew the resonance and permit absorption of photons on either side of the nominal line, but the average frequency will always be that of the specific line; moreover, the effect on velocity is both positive and negative, so its average is zero.

Reply to  Bernard Lodge
August 15, 2015 2:35 am

george e. smith says:
So gammacrux, you believe that energy that is NOT any kind of EM radiation can actually leave planet earth (in not totally insignificant quantities) and you cite convection as one such mechanism.
So, george e. smith, you apparently never learned to read?

Reply to  Bernard Lodge
August 15, 2015 10:34 am

You’re absolutely mistaken: the transfers of energy between translation, rotation and vibration I talked about are essential, and are actually the very mechanism that establishes local thermodynamic equilibrium in any system.
Obviously you never heard about how thermodynamic equilibrium is brought about by the collisions in a real gas, nor about the equipartition of energy, which means that all the degrees of freedom of a molecule (translation, rotation, vibration) must have the same mean energy as soon as kT becomes comparable to or larger than the relevant quanta involved, a condition that is quite well satisfied in the atmosphere at ambient temperature.
Only at much, much lower temperatures would the CO2 vibrations be frozen out and CO2 stay in its vibrational ground state.
As already pointed out, it’s an experimental fact that CO2 gas maintained at room temperature continuously emits IR at 15 u.
This is precisely because collisions involving the fastest molecules continuously excite new molecules from the ground state into excited vibrational states. Can’t you grasp that otherwise IR emission would quickly stop, once all the excited CO2 molecules had emitted their photons?
At any rate, whether you do or not, no further discussion needed.

Reply to  Bernard Lodge
August 15, 2015 10:43 am

And just for your information, co2isnotevil.
I’m a physicist, so quantum mechanics is my daily bread.

Reply to  gammacrux
August 15, 2015 11:25 am

I’m also a physicist, and among my expertise is modelling the boundary between quantum mechanics and bulk behavior as it relates to solid state physics. I’ve also written a MODTRAN-like program driven by HITRAN line data that gets the same results as MODTRAN and does so much faster, using some innovative techniques I developed to speed up the processing. It was also far easier to roll my own than it would have been to integrate MODTRAN into the general purpose climate modelling and analysis tool I’ve also written.
If you don’t acknowledge my explanation for how molecular E-fields interact during a collision to convert only tiny amounts of rotational/vibrational energy into the energy of translational motion, apply your expertise in physics to explain it in a way that supports your conclusion that a single collision will convert all of it into linear kinetic energy.
BTW, I never claimed that CO2 isn’t a continuous radiator of 15u photons: to maintain it at room temperature it must be continuously absorbing them as well. Nor have I said that CO2 molecules are mostly in the ground state. I’ve only said that a collision has a finite probability of returning an energized GHG molecule to the ground state by emitting a photon, and that this probability is far, far larger than the probability of a significant amount of vibrational or rotational energy being converted into linear kinetic energy. The flux of 15u photons from the surface and from re-emissions upon collisions is high enough that most CO2 molecules will be in an energized state.
The distribution of kinetic energy among the molecules of an ideal gas is pretty much the same as the distribution of photons emitted by a BB at that same temperature. Why don’t you think that this is already in LTE, and that additional energy from photons needs to be shared with the molecules in motion? The equipartition theorem applies to bulk properties, i.e. the average of all molecules, and not to individual molecules. This is where I think the disconnect is.

Reply to  Bernard Lodge
August 16, 2015 12:31 am

I’m also a physicist
No, sorry, co2isnotevil, I don’t buy it. Your comments clearly demonstrate that this is not true.
I don’t think you ever got any PhD in physics, and obviously you’re not even familiar with the most basic concepts of statistical mechanics.
Please take some of your time to learn a little bit about them rather than further waste it here.
End of debate.

Reply to  gammacrux
August 17, 2015 8:31 am

You are clearly no scientist if the best you can do is insult me when I asked you to explain how the E-field interaction of a collision can convert a quantum of state energy into linear momentum. Quantum Mechanics can certainly be counter-intuitive, but magic isn’t part of its strangeness.
I should also point out that the only place I’ve ever seen references to the arbitrary conversion of the energy of a state change into linear momentum is in the climate literature. Texts on radiative physics do not make this over-generalized claim. This is but one of the many errors endemic to consensus climate pseudo-science, where a kernel of truth is misinterpreted to provide the wiggle room necessary to support an otherwise impossible hypothesis.

Reply to  gammacrux
August 17, 2015 8:59 am

Your logical disconnect arises from arbitrarily conflating EM degrees of freedom with kinetic degrees of freedom. This is also the rationalization behind Trenberth’s obfuscation: he unnecessarily conflates EM energy with non-EM energy, which is unnecessary because the planet is demonstrably in RADIANT balance without consideration of the non-EM energy entering the atmosphere; therefore all non-RADIANT energy entering the atmosphere can only be returned to the surface and thus has no NET influence on the RADIANT balance of the planet. Note that I’ve emphasized NET and RADIANT. I suggest you review what these terms mean.
Vibrational and rotational modes consequential to degrees of freedom constrained by standing-wave resonances in the molecule’s EM fields are being conflated with kinetic rotational states whose possible energies are not so tightly constrained. For example, CO2 has no relevant EM rotational modes owing to its linear symmetry, but it’s not spherically symmetric and may physically rotate in any orientation at any rate. The difference between a kinetic rotation and an EM rotation is that a kinetic rotation rotates the whole molecule, while an EM rotation behaves more like a bump rotating around the E-field of a molecule (dipole moments make this more complicated); moreover, EM rotation is highly constrained in both orientation and rate while kinetic rotation is not. Even those GHG molecules that have rotational EM resonances can also have an additional kinetic rotation with an arbitrary orientation and rate. Note that nothing prohibits a kinetic rotation from aligning with an EM rotational mode and becoming ‘captured’ as a standing EM wave in the electron shells (or even vice versa), although these generally involve very low energy states equivalent to microwave (> 1000u) photons.
Certainly the kinetic velocity and rotation of all molecules, GHGs included, will be shared per the kinetic theory of gases; moreover, the EM components will be equalized among the populations of GHG molecules, at least subject to the quantized nature of absorption and emission. Because the bulk of the atmosphere is transparent to the LWIR comprising the EM components, the coupling between the EM and non-EM components is nearly zero. Please note the difference between zero and nearly zero.
My model of the atmosphere is similar to a transmission line between surface emissions (source) and emissions into space (load). If there are no GHGs or clouds, it is perfectly matched at both ends and everything that enters from the source is delivered to the load at the speed of light. GHGs degrade the VSWR at specific wavelengths so that at least half (depending on species concentration) of what enters from the source is delivered to the load and the rest is reflected back to the source. Clouds do pretty much the same thing, except on a broadband basis, and can vary a bit on either side of the nominal 50/50 split.

Reply to  Bernard Lodge
August 17, 2015 11:31 am

Hi CO2isnotevil,
I agree with your points above, and was wondering if you have a blog post somewhere distilling your climate theory?
If not, I’d be happy to publish a summary at the HS blog; just email me hockeyschtick at gmail dot com. Thanks!

Reply to  Bernard Lodge
August 23, 2015 3:26 am

Trenberth’s model is wrong because it assumes a flat earth. Flat it ain’t.
See J Postma’s model of a rotating earth. No energy problems at all.

August 11, 2015 2:51 am

Thanks for your hard work Bob, another great essay.
I hold to a different theory of how our climate works than that of the “consensus” — one that was the consensus in the 50s to the 80s. But my theory or that of the warmists or luke-warmists matters little versus observation. With that in mind I have a question.
Why is it that we can spend trillions of dollars on climate “research” and on space exploration, and yet we can’t put up an array of devices in orbit to measure the incoming and outgoing energy budget directly? Circle the globe day and night, from the tropics to the poles, with purpose-built devices, and see what is coming in and going out. How could that cost more than what we are wasting now?
If we are trying to “save humanity” (we can’t save the planet; it will survive us), then surely we have the technology to directly observe and measure the energy budget at the top of the atmosphere. Do we fail to do the measurements because we know that it will show that CO2 does not do what they say it does?
~ Mark

Walt D.
Reply to  markstoval
August 11, 2015 4:07 am

I think you are spot on. As Jack Nicholson said in “A Few Good Men”: “You want the truth? You can’t handle the truth!”

Reply to  markstoval
August 11, 2015 7:56 am

You are wrongly assuming that they WANT a clear answer to this question. Not knowing exactly, and having to wave hands and adjust models allows them to cobble up the answer they desire for their political money sources.

Reply to  markstoval
August 12, 2015 3:45 pm

GISS commissioned the ISCCP project, one purpose of which was to directly measure and observe the planet’s energy balance. It shows conclusively that the planet behaves like an ideal grey body with an emissivity of 0.615; but that being the case, the sensitivity must be close to 0.3C per W/m^2 and not the 0.8C +/- 0.4C required to support the IPCC’s preordained conclusions used to justify their existence.
Note that hundreds of billions of remotely sensed measurements went into this plot, and these results are insensitive to the known problems with the ISCCP data. The magenta line is an ideal grey body with an emissivity of 0.6, and the ratio of surface temperature to planet emissions very closely follows this curve.
Note that around 273K, feedback from water and melting ice increases and there is a transient increase in sensitivity (the slope of the mean of the dots) as the prior W/m^2 of forcing are affected by the emerging feedback effects. Estimates of a high sensitivity often extrapolate this to the entire surface, when in reality, as the planet warms, an ever smaller fraction of the surface is subject to these incremental effects, and at higher temperatures the slope decreases, indicating that decreasing feedback dominates.
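The claimed 0.3C per W/m^2 follows directly from differentiating the grey-body relation P = eps*sigma*T^4 (my own sketch of the arithmetic; the emissivity and temperature are the round values quoted, not ISCCP output):

```python
# Grey-body sensitivity: P = eps * sigma * T^4  =>  dT/dP = 1 / (4*eps*sigma*T^3)
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/m^2/K^4
EPS = 0.615        # claimed effective emissivity
T_SURF = 288.0     # assumed mean surface temperature, K

emission = EPS * SIGMA * T_SURF ** 4                   # W/m^2 leaving the planet
sensitivity = 1.0 / (4.0 * EPS * SIGMA * T_SURF ** 3)  # K per W/m^2

print(round(emission, 1))     # ~239.9 W/m^2, matching the post-albedo input
print(round(sensitivity, 2))  # ~0.3 K per W/m^2
```

Whether 0.615 is the right effective emissivity is the contested part; the arithmetic from emissivity to sensitivity is not.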

August 11, 2015 3:18 am

Bob, this is fantastic! I don’t know whether to laugh or cry, but the post is fantastic!
I would love to see this put in front of a leader of the modelling community and read their response, in particular on whether they see problems which ought to be and can be addressed.
Like agreeing on the Solar forcing, maybe….

Reply to  Bob Tisdale
August 11, 2015 7:30 am

Maybe I didn’t make myself clear: I don’t know whether to roll around on the floor laughing at the state of the climate model ensemble, or weep at the state of the world where those models are leading public opinion. BTW I think weeping at the way rank politicised propaganda owns the public agenda is a very reasonable reaction, as long as the weeping doesn’t diminish the struggle to speak the truth.

george e. smith
August 11, 2015 3:21 am

Lots of work there, Bob.
I’m puzzled by the graphs, particularly cells a, b, c, d.
Why do three of them have spikes that disappear after about 1990?
The incoming sunshine, cell b, doesn’t have any such features, so where do the three get their spikes from, and why do they stop?
I assume that a, c, d are all model outputs, so why do the variations all smooth out after 1990?
That is a quarter of a century ago; well, right around the time that Hansen did his number in front of Congress.
Is your Figure 1 a literal copy of Trenberth’s Figure 1, or did you change something?
This figure describes it as an energy budget, whereas its units are all areal power density units.
I consider the difference to be very significant, since energy is an accumulated, integral quantity (I believe it says over four years; well, it varies in leap-year cycles, I presume).
But areal power density is an instantaneous variable, and it is anything but constant over time or space, and that makes a big difference.
The sun inputs at a relatively constant rate, 24/7 (there’s the orbital cycle), but any point on earth at TOA sees the input varying from zero to 1366 W/m^2 in a 24 hour cycle. I thought a recent NASA/NOAA release gave a 1362-ish figure instead, but who’s counting.
Then the surface LWIR radiation output (power) of 396 W/m^2, which corresponds to a roughly 288 K black body number, actually varies over about a 12 to 1 range from point to point on the surface over that leap-year cycle.
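A quick check of the averaged figures above (my own arithmetic, using round values):

```python
# The disc a sphere presents to the sun has 1/4 the sphere's surface area,
# hence the factor of 4 between TSI and the mean TOA insolation.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/m^2/K^4

TSI = 1366.0                  # total solar irradiance, W/m^2
mean_insolation = TSI / 4.0   # averaged over the whole rotating sphere

surface_emission = SIGMA * 288.0 ** 4  # blackbody at the ~288 K mean surface temp

print(round(mean_insolation, 1))   # 341.5, vs the 341.3 in Trenberth's Figure 1
print(round(surface_emission, 1))  # ~390.1; the 396 in the figure implies ~289 K
```

Of course, averaging T before taking the fourth power is exactly the isothermal-earth shortcut being objected to here; the check only shows where the headline numbers come from.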
OOoops !!
I guess the TOA imbalance is only 0.5 to 1.0 W/m^2, which is a 2:1 unknown range; so what does it matter that we don’t know if TSI is 1366 or 1362?
Well we used to use 1353, when I was in school; but that was way back.
I’m surprised that these people have the gall to claim these imbalance numbers, given that the uncertainty of TSI is already maybe twice that amount.
And that figures that ALL of the uncertainty and imbalance is due to the TSI unknown range, and that the gozouta from the surface at 396 is rock solid and exactly known.
I’ve always considered Trenberth’s isothermal, infinite-thermal-conductivity earth to be silly, but actually it is totally laughable.
ALL of their claimed imbalance is nothing more than measured natural variance in just the one basic input value (real TSI), which is known to a much better error limit than any of the numbers that go into even the isothermal earth model, let alone the actual real physical earth, with its 150 deg. C range of temperature extremes.
But I always appreciate your dissertations on these things Bob, because I have neither time nor patience to study all the numbers myself.
Well I’m not motivated to dabble in hocus pocus either, and that’s what I think these folks are doing.

It doesn't add up...
Reply to  Bob Tisdale
August 11, 2015 7:33 am

I’m sure we have some handle on the frequency of major eruptions by examining history. Given the significant change they make to the TOA imbalance, excluding them simply makes models run hot. The mean over the past 2000 years has been around 1.5 major events per century.

August 11, 2015 3:27 am

““…needs to be assessed” indicates they hadn’t bothered to do it by then…and likely still haven’t.”
No, it doesn’t indicate that. It indicates that in the next para, he’s going to assess it. And he does:
“Most scientific discussions implicitly assume that these differences aren’t important i.e. the changes in temperature are robust to errors in the base GMT value, which is true, and perhaps more importantly, are focussed on the change of temperature anyway, since that is what impacts will be tied to. To be clear, no particular absolute global temperature provides a risk to society, it is the change in temperature compared to what we’ve been used to that matters.
To get an idea of why this is,…”[and he goes on with diagrams etc]

Reply to  Nick Stokes
August 11, 2015 5:10 pm

Wow Nick. Just wow!… I live in an area where the year-to-year, month-to-month temperature has ALWAYS varied a lot. Actually the “climate” has varied a lot: rain, temperature, snow, frost, wind, storm events, etc. Some years we have grasshoppers, some years we have droughts, some years we have floods, some years we get great crops, some years we get hailed out, frosted out, too wet to harvest (ever cut hay in the snow in November? I have), some years we have great summers, some years we don’t, some years we get weeks of 40 below C, some years we don’t. After over a hundred years on the western prairie, we have come to understand that climate always changes, and we adapt. We really ought to get used to it.
Read the journals of the first European explorers in this land. An eye opener. Look at dried up lakes on the prairie where tree stumps have been exposed. Think climate hasn’t changed before? Think the change in temperature has been a problem? Shoot. We change what we wear, we change what we grow, we change how we ranch.
Gavin is clueless.
The biggest problem we have today is all the people on the CAGW band wagon. Maybe Obama could work that into a speech. LOL NBL

August 11, 2015 3:41 am

I haven’t the ability or the time to understand this, but what if the cooling clouds increase by 2% or 3% over the next 100 years? Zip to do with CO2 levels, but the effect would surely stop a lot of the temperature increase?
And don’t forget that IPCC lead authors like Solomon and Trenberth TRULY believe that their CAGW cannot be mitigated at all for thousands of years. That’s even if humans stopped all of their CO2 emissions today. Here’s their point 20.
And they’re joined in this RS and NAS report by another 5 authors from the IPCC as well. Here’s the list of names.

Walt D.
August 11, 2015 3:42 am

Bob: great article. I have a question.
Everything we read says that the energy output from the sun is constant. However what reaches the Earth is not – the Earth’s orbit is not circular nor is the Earth’s axis vertical.
The usual hand waving is that things average out over a complete orbit. However, the change in energy flux due to the elliptical orbit is a few per cent. The southern hemisphere is closer to the sun in the summer.
Also most of the land mass is in the northern hemisphere. One would expect for there to be a difference in temperature and temperature changes between the northern and southern hemisphere. Again, one would not expect these differences to be trivial.
Can you refer me to a good article that has studied this?
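Walt’s “few per cent” figure is easy to check with inverse-square scaling. A minimal Python sketch (my numbers, not Walt’s; it assumes the modern orbital eccentricity of 0.0167 and a mean TSI of 1361 W/m^2):

```python
# Sketch: size of the TSI swing due to Earth's orbital eccentricity,
# using simple inverse-square scaling of flux with Sun-Earth distance.
ECCENTRICITY = 0.0167          # Earth's current orbital eccentricity
TSI_MEAN = 1361.0              # W/m^2 at 1 AU (modern satellite-era value)

# Sun-Earth distance in AU at perihelion and aphelion
r_perihelion = 1.0 - ECCENTRICITY
r_aphelion = 1.0 + ECCENTRICITY

# Flux scales as 1/r^2
tsi_perihelion = TSI_MEAN / r_perihelion**2
tsi_aphelion = TSI_MEAN / r_aphelion**2

swing = tsi_perihelion - tsi_aphelion
print(f"Perihelion TSI: {tsi_perihelion:.1f} W/m^2")
print(f"Aphelion TSI:   {tsi_aphelion:.1f} W/m^2")
print(f"Peak-to-peak swing: {swing:.1f} W/m^2 "
      f"({100 * swing / TSI_MEAN:.1f}% of the mean)")
```

The swing works out to roughly 91 W/m^2 top-of-atmosphere, about 6.7% peak to peak, which is indeed “a few per cent” either side of the mean.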

Gary Pearse
Reply to  Walt D.
August 11, 2015 6:57 am

Willis has investigated the effect of the elliptical orbit (someone please link this?) and to all our surprise it didn’t show any change! This to me (and others) is a demonstration of the remarkable control (negative feedbacks ruling the show) of the earth adjusting to maintain temperature. Presumably when the orbital distance is closer to the sun, greater ocean evaporation, convective updraft moving warm air up to where much of the heat is emitted to space and more clouds to increase albedo, thunderstorms, etc. counteract the heating and vice versa when the earth moves farther away from the sun. Willis’s point is that this is the mechanism that controls heating in the ITCZ (equatorial band). This thermostatic control is completely missed by warming proponents.

August 11, 2015 3:49 am

Thanks Bob. I will spend a lot of time on this one. Really interesting!

August 11, 2015 4:04 am

Thx Bob,
for your continuous efforts within Modern Climate Enlightenment, revealing a shocking lack of scientific agreement. Hang in there like a hair in a biscuit!

August 11, 2015 4:04 am

Bob –
many thanks for the huge effort to assemble this information in this way. Very instructive indeed. I’m broadly a sceptic still trying to be open to convincing arguments from the establishment side. But I’m not finding any real arguments now being put up by the warmist camp when challenged to debate. Put in the way that you have put it here, looks like their silence is probably wise! Come on, you well-paid scientists who take CAGW as ‘sorted’…. muck in and take on these arguments, please! Earn your corn. Defend your models.. they are looking pretty pathetic to me right now…

August 11, 2015 4:14 am

Thanks Bob.
I still fail to see why you insist on using the 1/4 solar TOA value when you admit that only 1/2 the globe is covered at any one time. So the reality figure must be nearer to 960 W/m^2, taking into account albedo and dispersal losses, not your figure. The average is 480 W/m^2, which gives an average temperature close to 30°C, not the derisory -18°C of the IPCC model.
Also the 10% circle of land below the sun, the subsun point, gets all the energy, giving a temperature of 120°C for the tropics. These average temperatures do not take into consideration heat loss by the latent heat required for the evaporation of water, or by convection, both large heat loss processes. The rotation of the planet takes care of heating the dark side through heat inertia. It also shows that the GHE is not needed for the missing heat, because it is not missing.
It also explains why deserts, very dry, are hotter than the very humid rainforests of the same latitude. Temperatures at the surface, the real solid surface, not the IPCC surface 1.5 m above, can reach 80°C in deserts but only 40°C in rainforests.
You also fell into the same trap as the IPCC by adding radiation fluxes together. Adding two fluxes of 300 W/m^2 does not make 600 W/m^2; the flux is still 300 W/m^2. You seem to have ignored the 2nd law of thermodynamics and Planck’s radiation laws.
Joseph Postma has a good explanation in one of his excellent papers plus an excellent radiation model that even gives the differential equations for heat dispersal over the earth’s surface.

Michael J. Dunn
Reply to  johnmarshall
August 11, 2015 1:44 pm

Intercepted area of insolation is pi r squared. Total area of earth is 4 pi r squared. Effective insolation of the total earth is factored by the former divided by the latter (i.e., 1/4). It is assumed, for the sake of modeling homogeneity, that there is no day-night differentiation and the insolation is uniform in space and time. (Don’t jump on this unless you can show that the assumption makes an egregious difference. It shouldn’t.)

Reply to  Michael J. Dunn
August 12, 2015 3:51 am

To take the 1/4 area figure you are assuming a flat earth, not reality at all. That method might show equal intake over the whole earth, but reality insists on a day/night cycle, which is what happens. The surface is not heated equally, so why assume it is? It makes the arithmetic easy, but easy is not the same as true.
The flat earth treatment gives too low a temperature, forcing extra heat into the GHE, which cannot happen in reality.

Gary Pearse
Reply to  Michael J. Dunn
August 13, 2015 7:03 pm

John Marshall. The effective presentation of a sphere to the sun is as a disc. A square metre along the equatorial band gets the full insolation, the rays hitting at 90 degrees. Let’s take the earth’s position at the spring or fall equinox. At 45 degrees N and S, because of the angle, the sun’s parallel rays are hitting at 45 degrees, so the insolation is less on that square metre (1*cos45 = 0.707). The square metres at 45 lats gets only 70% of the insolation of that at the equator. At 60N and S, 1*cos60 = 0.5. Those square metres get only half the insolation strength of the equator. At +and – 90 latitude (the poles) the insolation on a square metre is zero. If you sliced the earth in half with the cross-section facing the sun, the amount of light striking this disc would be exactly the same as that of the half sphere itself.
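Gary’s last sentence, that the half sphere intercepts exactly what a disc of the same radius would, can be verified numerically. A Python sketch (mine, on a unit sphere with the thread’s 1366 W/m^2 solar constant):

```python
import math

# Check that the sunlit hemisphere intercepts the same power as a flat
# disc of the same radius facing the sun, then recover the S/4 average.
S = 1366.0          # W/m^2, solar constant used in the thread
N = 100_000         # integration steps

# Integrate S*cos(theta) over the sunlit hemisphere, where theta is the
# angle from the subsolar point; dA = sin(theta) dtheta dphi (unit sphere).
total = 0.0
dtheta = (math.pi / 2) / N
for i in range(N):
    theta = (i + 0.5) * dtheta                  # midpoint rule
    total += S * math.cos(theta) * math.sin(theta) * dtheta
hemisphere_power = 2 * math.pi * total          # phi runs 0..2*pi

disc_power = S * math.pi * 1.0**2               # flat disc, radius 1

print(hemisphere_power, disc_power)             # nearly identical
# Spread over the full sphere (area 4*pi), this is S/4:
print(hemisphere_power / (4 * math.pi))         # ~341.5 W/m^2
```

The cosine-weighted hemisphere integral and the disc come out equal, and dividing by the full sphere’s area gives the familiar 341.5 W/m^2, i.e. S/4.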

August 11, 2015 4:37 am

What efforts have been made by GCM modellers to draw attention to these differences given the political impact of such notions as ‘the science is settled’?
Have they been unaware of the many harms (starvation, fuel poverty, environmental damage, and resources wasted on renewables, for example, along with the less readily estimated psychological harm to children and other vulnerable groups these past few decades) that have followed from the political successes of those campaigning around scares of climate catastrophe?
Thank you for another enlightening article, Bob Tisdale.

Joe Bastardi
August 11, 2015 4:39 am

Great article!

August 11, 2015 4:56 am

I’ve said it before
So I’ll say it again,
Trying to model chaos
Borders on the insane;
Garbage in garbage out
Has never been more true,
Perhaps there’s an agenda
They want to pursue?

David A
August 11, 2015 5:13 am

That’s a lot of models, with lots of variance. In effect they are a W.A.G. What is worse, they do not have any better numbers for the SW selective surface we call the oceans. (And some thought they only modeled clouds badly.) Without a detailed understanding of surface insolation, AND the disparate residence times of surface W/L flux into the oceans, they know next to nothing as far as projections with any meaning go. (None of this exists in the energy budget shown.)
It looks to me like precision and accuracy are both lacking.

August 11, 2015 5:21 am

I always look forward to Bob Tisdale’s analyses. They are clear and informative. I hope he knows that his efforts are appreciated.
“The precision achieved by the most advanced generation of radiation budget satellites is indicated by the planetary energy imbalance measured by the ongoing CERES (Clouds and the Earth’s Radiant Energy System) instrument (Loeb et al., 2009), which finds a measured 5-yr-mean imbalance of 6.5Wm−2 (Loeb et al., 2009). Because this result is implausible, instrumentation calibration factors were introduced to reduce the imbalance to the imbalance suggested by climate models, 0.85Wm−2”
This quote is some sort of special. The satellites are high precision instruments which means differences can be measured very precisely, they just aren’t accurate which is why they have to be intercalibrated.
Reducing the imbalance by 7.6 times – almost an order of magnitude – just to match the models is an interesting adjustment. I suppose that endothermic reactions and photosynthetic reactions could be responsible for some of the imbalance. But given that the oceans are heating at 0.2-0.3 W/m2 in response to a supposed 1.64 W/m2 of post-1955 CO2 forcing, the 6.5 W imbalance seems a little high and difficult to explain. Either the conceptual understanding of what the satellite is measuring is flawed, the accuracy of the instrument is really that bad, or something on earth absorbs a lot of energy.
It should be noted that we really don’t care what the imbalance is – CO2 should be increasing it by decreasing outgoing long wave and that should be getting measured precisely.

Reply to  PA
August 11, 2015 3:42 pm

Yes, photosynthesis: a yet-to-be-considered, although possibly insignificant, contributor to any imbalance. Can it be evaluated?

Reply to  TedM
August 12, 2015 3:55 pm
Well, plant growth increased 11% from 1985 to 2011.
Let’s assume 15% increased plant growth since 1985. Photosynthesis absorbs 112 GT of carbon and produces 155 GT of biomass.
The 112 GT of carbon is 404 GT of CO2; the most recent IPCC estimate is 123 GT. Looking for estimates of joules per gram of carbon or CO2:
39 kJ per gram of carbon is a low-end estimate, but it is what we have.
123 GT * 0.15 (the fraction due to post-1985 CO2) * 1E15 g/GT * 39E3 J/g = 7.1955E+20 joules annually.
This number is derived from carbon converted, so it could be low – 10.0E+20 wouldn’t surprise me.
Earth’s absorbed flux is 163 W/m^2. Land area is roughly 149 million km^2, which absorbs about 7.664E+23 J/yr.
So plants are converting 0.153 W/m^2 more to sugar since 1985. Since 1985, CO2 forcing has increased 3.46 * ln(400/346) = 0.502 W/m^2, so plant growth is countering about 1/3 of the forcing increase.
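PA’s back-of-envelope arithmetic can be rechecked in a few lines. A Python sketch using the comment’s own inputs (123 GT of carbon fixed per year, 15% attributed to post-1985 growth, 39 kJ/g are the commenter’s assumptions, not established values):

```python
# Recompute the extra energy stored by increased plant growth, per the
# commenter's assumed inputs, and express it as a flux over land.
SECONDS_PER_YEAR = 3.156e7
LAND_AREA_M2 = 1.49e14            # ~149 million km^2

carbon_fixed_g = 123e15           # 123 GT of carbon per year, in grams
growth_fraction = 0.15            # share attributed to post-1985 CO2
energy_per_gram = 39e3            # J per gram of carbon (low-end estimate)

extra_joules = carbon_fixed_g * growth_fraction * energy_per_gram
print(f"{extra_joules:.3e} J/yr")             # ~7.2e+20 J/yr

extra_flux = extra_joules / SECONDS_PER_YEAR / LAND_AREA_M2
print(f"{extra_flux:.3f} W/m^2 over land")    # ~0.153 W/m^2
```

The 7.2E+20 J/yr and 0.153 W/m^2 figures in the comment do follow from those inputs; note the 0.153 W/m^2 is averaged over land area only, not the whole globe.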

August 11, 2015 5:28 am

A question I’ve had for a while:
if increasing GHGs cause more energy to be redirected back to the planet’s surface, wouldn’t they also increase the amount of incoming energy redirected out into space? Surely the molecules aren’t directional…

Reply to  Marcos
August 11, 2015 5:47 am

Yeah, but the GHGs absorb wavelengths in the “earth” energy band, not the “sun” energy band, so the effect is small.
GHGs absorb infrared light (low energy/long wavelength/low frequency). Most of the sun’s output is visible or ultraviolet (higher energy/shorter wavelength/higher frequency) light.

Reply to  PA
August 11, 2015 6:09 am

wouldnt they also increase the amount of incoming energy redirected out into space
They would have to, which would cool the atmosphere, which would increase convection and conduction from the surface, which would cool the surface.
The question is: how much does the increase in convection and conduction due to the cooling of the atmosphere offset the surface heating effect of back radiation? And why is conduction not shown in the energy budget?

Reply to  PA
August 11, 2015 8:40 am
The GHGs absorb narrow bands of radiation, mostly on the low-energy (right) end of the electromagnetic spectrum (far infrared). Further right are microwaves and radio waves.
Incoming radiation is visible, ultraviolet, or near infrared. GHGs are transparent to visible, ultraviolet and near infrared. Oxygen and ozone (O3) are the reason you don’t get seared like a steak by UV when you go outside.
So, no, GHGs have essentially zero (0.0) impact on incoming radiation. Water vapor (which is really a GHG) and its thicker version, clouds, do absorb or reflect some incoming radiation, and some kinds of dust/aerosols absorb or reflect incoming radiation.
GHGs absorb outgoing radiation radiated up from the cool earth in long wavelength low frequency far infrared radiation. It is like the one way mirrors you see on police shows.
Air is an insulator. For all practical purposes air doesn’t conduct anything. Air heat transfer modes are convection and radiation. About 1/2 of surface heat loss is latent (evaporation). Surface heat loss is in general 1/6 convection, 1/2 latent, 1/3 radiation. The fact that there are thermal updrafts indicates that there is only a trivial amount of atmospheric conduction.
“the question is. how much does the increase in convection and conduction due to the cooling of the atmosphere offset the surface heating effect of back radiation? Why is conduction not shown in the energy budget.”
I’ll leave this for someone else.
As to conductivity, the thermal conductivity of air is 0.026 W/mK. The 255 K point is in the 5-6 km range. The amount of energy conducted from the surface to 6 km – if we assume the temperature at that height is absolute zero (0 K) – for a 1×1 meter column of atmosphere is:
E = 0.026 W/mK × (288 K − 0 K) × 1 m × 1 m / 6000 m = 0.001248 W
And that is making the absurd assumption that 6 km is at absolute zero. The atmosphere does not conduct heat, it is just that simple.
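PA’s conduction estimate is just Fourier’s law for a slab; a short Python sketch of the same calculation (using the comment’s deliberately extreme 0 K top boundary):

```python
# Steady one-dimensional conduction through a 1 m x 1 m column of air
# from a 288 K surface to an (absurdly cold) 0 K layer at 6 km.
k_air = 0.026        # W/(m*K), thermal conductivity of air
T_surface = 288.0    # K
T_top = 0.0          # K (deliberately extreme, as in the comment)
thickness = 6000.0   # m
area = 1.0           # m^2

# Fourier's law: Q = k * A * dT / L
Q = k_air * area * (T_surface - T_top) / thickness
print(f"{Q * 1000:.3f} mW")   # ~1.248 mW: conduction is negligible
```

Even with the whole 288 K temperature difference, the conducted power is about a milliwatt per square meter, which is why conduction is omitted from the budget diagrams.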

Ian Macdonald
Reply to  PA
August 11, 2015 8:55 am

Beware, this graph is somewhat misleading, because the two ranges are not to scale. The sun’s far infrared output is much stronger than the graph would seem to indicate.

Reply to  PA
August 11, 2015 9:58 am

Beware, this graph is somewhat misleading
Kind of. The sun’s energy at Earth’s orbit is around 1365 W/m2 and the average earth incoming is 340 W/m2, so the two curves are pretty close to the actual energy ratio.
Energy at point blank range is about 185 W/m2 (according to this rendition), with 23 W/m2 more or less reflected and the rest absorbed.
The solar radius is about 0.005 AU, or about 1/215 the distance to the earth. The intensity at the surface of the sun is around 63 million W/m2. The note that the solar curve (which appears to be about 4 times larger) is scaled by 10^-6 is flatly wrong. It is scaled down by about 2.2×10^-5, about 22 times less than claimed.
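The scaling claim follows from inverse-square geometry alone; a Python sketch (my arithmetic, using the 1/215 solar-radius-to-AU ratio quoted above):

```python
# Ratio of TSI at Earth orbit to the flux at the Sun's photosphere,
# via inverse-square scaling with the solar radius at ~1/215 AU.
TSI_EARTH = 1365.0                 # W/m^2 at 1 AU
solar_radius_au = 1.0 / 215.0      # Sun's radius in AU

# Flux at the photosphere is TSI * (1 AU / R_sun)^2
flux_photosphere = TSI_EARTH * (1.0 / solar_radius_au) ** 2
print(f"{flux_photosphere:.2e} W/m^2")          # ~6.3e7 W/m^2

ratio = TSI_EARTH / flux_photosphere            # = (1/215)^2
print(f"{ratio:.2e}")                           # ~2.2e-05, not 1e-06
print(f"{ratio / 1e-6:.0f}x the claimed 1e-6 scaling")
```

This reproduces both of PA’s numbers: ~63 million W/m^2 at the photosphere, and a true scale factor of ~2.2×10^-5, roughly 22 times the 10^-6 printed on the graph.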

Reply to  PA
August 11, 2015 10:39 am

Actually, if you look at the graphs above and below, there are several water absorption bands in the primary incoming wavelengths. Also, to the left is the absorption from O2 at the shorter wavelengths. Of course this is the classical answer to the child’s question of why the sky is blue.
One of the things that I think my solar physicist friends miss is the amount of energy absorbed in these absorption spectra. A photon at 0.2 microns is far more energetic than one at 2, 4, or 20 microns. I have not dug up the absorption-to-emission time scale, but that photon imparts a lot of thermal energy to an oxygen or ozone molecule.

August 11, 2015 5:53 am

And yet the IPCC still wants to average all these models together to form the “ensemble mean”, even though it is clear from your animations, Bob, that they vary greatly in the way they model different aspects of the climate system. Or have I got this badly wrong? How can the ensemble-mean concept be justified?

Reply to  ExArdingJas
August 11, 2015 10:54 am

You make a point RGBatDuke is emphatic about. It does not make statistical sense.

Reply to  ristvan
August 11, 2015 11:35 am

The best analogy is hurricane prediction. They have a couple of dozen models. They pick a point where 1/2 of the models are going to the left and 1/2 are going to the right and say that is the predicted hurricane path.
The problem is that all the climate models go to the left. The solution is to defund the 10% of models that show the most warming each year and disqualify the team for future grants (debar them) for 5-10 years.
In a few years 1/2 the models would show too much warming and 1/2 would show too little and if you averaged them you will get a reasonable estimate of future climate. If you fire enough scientists for doing their job wrong, the remainder will figure out how to do their job correctly.
If the hurricane models performed as badly as the climate models, the director would be fired and the responsibility for prediction assigned to another agency.

August 11, 2015 5:59 am

Where is conduction in the energy budget? I see “thermals” but this is convection. I also see evapo-transpiration but again this is not conduction.
Why is there no accounting for energy moved via conduction? This could be quite significant, because conduction is continually trying to eliminate the lapse rate.
Similarly, where is the accounting for the lapse rate and the gravitational constant that underlies the dry air lapse rate?

August 11, 2015 5:59 am

Very good article Bob, either you are getting better at writing or my comprehension level is increasing.
About the Trenberth cartoon, and solar insolation, while it is true that 1/4 of the solar insolation is an accurate average over the surface, there is no place or latitude that actually receives the average solar insolation (per square meter), because of the curvature of the sphere. Most of the surface receives more than the average solar radiation, which makes the Trenberth simplification wrong.

Reply to  jinghis
August 11, 2015 6:33 am

while it is true that 1/4 of the solar insolation is an accurate average over the surface
the 1/4 simplification is itself more than enough to explain the imbalance, because radiation varies as the 4th power of temperature. Averaging reduces the calculated radiation to less than actual.
for example: divide the earth into equal area tropical and polar regions.
Average temp overall = 15°C + 273 = 288 K
Average temp tropical = 30°C + 273 = 303 K
Average temp polar = 0°C + 273 = 273 K
Average radiation = 288^4 = 6.88 x 10^9
Tropical radiation = 303^4 = 8.43 x 10^9
Polar radiation = 273^4 = 5.55 x 10^9
However, (8.43 + 5.55) / 2 = 6.99, which is not equal to 6.88! We have a radiation imbalance simply due to the effects of averaging.
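ferdberple’s two-region example is a clean illustration of Hölder’s inequality for T^4; a Python sketch with the Stefan-Boltzmann constant attached so the imbalance comes out in W/m^2:

```python
# Radiating at the mean temperature emits less than the mean of the
# radiated fluxes, because emission goes as T^4 (Stefan-Boltzmann).
SIGMA = 5.67e-8   # W/(m^2*K^4), Stefan-Boltzmann constant

T_mean, T_tropics, T_polar = 288.0, 303.0, 273.0

flux_at_mean_T = SIGMA * T_mean**4
mean_of_fluxes = (SIGMA * T_tropics**4 + SIGMA * T_polar**4) / 2

print(f"{flux_at_mean_T:.1f} W/m^2")   # ~390 W/m^2
print(f"{mean_of_fluxes:.1f} W/m^2")   # ~396 W/m^2
print(f"apparent imbalance from averaging: "
      f"{mean_of_fluxes - flux_at_mean_T:.1f} W/m^2")
```

With these equal-area regions the averaging artifact alone is several W/m^2, i.e. larger than the claimed ~0.85 W/m^2 planetary imbalance.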

Reply to  ferdberple
August 11, 2015 9:58 am

You gave equal surface area to the two regions in your calculation, which isn’t the case. If the earth is divided at 45˚, 70% of the earth’s surface and 84% of the radiation lie between the north and south 45˚ latitudes.
Weighting your calcs we get 0.84×8.43 + 0.16×5.55 = 7.97, which is not equal to 6.88 either; if anything the discrepancy is worse.

Reply to  ferdberple
August 11, 2015 1:10 pm

Thanks – I’ve been emphasizing this nonlinear radiation matter too. In GCMs, this can be cast as an absence of subgrid radiation physics models. Atmospheric flow models involve subgrid Reynolds-Averaged Navier-Stokes, which has its own problems; but I’ve not seen good discussions of subgrid radiation physics. (Except some on clouds…) Mountainous regions should have greater corrections in those grids because of the greater variation between north- and south-facing slopes than on grid elements on the plains. Architects, the Anasazi, and ski area managers understand these local temperature variations.

Reply to  ferdberple
August 11, 2015 3:49 pm

“while it is true that 1/4 of the solar insolation is an accurate average over the surface”
Unfortunately, that’s also incorrect, because it falsely assumes Earth’s axial tilt is zero, instead of the actual 23.44 degrees. Sorokhtin et al calculated that the divisor should actually be ~3.5, because each pole receives zero insolation for six months of the year due to the 23.44-degree tilt, so solar energy is not distributed over an entire sphere for each 24 hour period.

george e. smith
Reply to  jinghis
August 12, 2015 12:05 pm

Well, that is an amazing statement.
There presumably is a place that receives the maximum solar insolation, and there also is a place that records the minimum solar insolation.
According to an argument by Galileo Galilei, there must be a place that has any value lying between those extremes; there has to be a continuum of values from max to min.
Ergo, at least one such place must be identical to the average.

August 11, 2015 6:08 am

If the earth’s climate is a complex ( non-linear) system of feedback loops, how can there be any energy imbalance at all?
Would not such an ” imbalance” manifest itself in some sort of run away train in one or many of the variables that affect climate, in particular global or regional temperatures?
Maybe someone can explain this to me.

Reply to  JohnTyler
August 11, 2015 10:13 am

John, You have stated the CAGW position very well.
The proof that it isn’t correct is that it has never happened in the past.
The CAGW response to that is that this time it is different.

Sun Spot
Reply to  jinghis
August 11, 2015 11:44 am

and the “that this time it is different” has no scientific veracity.

Gary Pearse
Reply to  JohnTyler
August 13, 2015 7:32 pm

Well, if you have a factor that is a positive feedback that is increasing for a while (whatever it is), then the earth heats up. The imbalance is temporary. With the earth at a warmer level, the radiation to space increases to a new equilibrium where the imbalance is finally nullified. Imagine, at the end of a glacial period, the earth beginning to heat up again. Here there is more radiation coming in than going back out, because it is being captured in the melting of the ice. Every wiggle in a temperature chart represents plus and minus energy balances.

August 11, 2015 6:08 am

“The model mean shows an average energy imbalance of 0.79 watts/m^2 for the base period of 1996-2015, and Earth’s energy imbalance (based on the model mean) was about 0.10 for the first 20 years of 1861-1880 (the pre-industrial values). So, based in the model mean, Earth’s energy imbalance has increased roughly 0.69 watts/m^2 since pre-industrial times.”
According to the models’ mean energy imbalance, as seen in the two graphs (Figures 13 and 14), the global warming in question (the ~0.8°C WARMING OF THE LAST 150 YEARS) is not all due to CO2 emissions (either natural or anthropogenic).
All the warming up to 1960 falls within a period of quite stable energy imbalance, and it is as large as, or perhaps even larger than, the warming that followed after roughly 1960.
It is strange that this comes as a kind of confirmation from the GCM simulations themselves, with no relation or regard to our understanding and interpretation of the climate data – an understanding and interpretation which is actually forced to conclude the same.

August 11, 2015 6:21 am

“Because this result is implausible, instrumentation calibration factors were introduced to reduce the imbalance to the imbalance suggested by climate models, 0.85 Wm−2”
This is an amazing quote. The scientists don’t believe the measurement because it contradicts theory, so they change the measurement to match the theory.
Time after time the history of science teaches us that this is one of the most common methods by which science goes astray. Often for hundreds of years.
For nearly two thousand years it was incorrectly believed that heavier objects fall faster than lighter objects. Yet the simplest of experiments show this to be false. Still the false belief persisted, because it was “logical” that heavy objects fell faster.
Now we have satellite measurements being altered to match theory. Very similar to the ARGO float data being cherry-picked to selectively eliminate some buoys, because everyone knows that the oceans must be warming.
Changing the data because the result is not “logical” is nonsense. It does not belong in science. The measurement is the measurement. As soon as you alter the value you have no idea where the truth lies.

Reply to  ferdberple
August 11, 2015 7:10 am

It’s worse than you think, Ferd. If you follow the link to the Wong et al poster, given by Bob Tisdale above, you will find that NASA actually says:

According to the best available data from the CERES satellite instrument, along with information from other data sources, the radiation budget at the top-of-atmosphere was not balanced during the five years from 2000-2005. Approximately 0.85 Watts of energy were added to the Earth system, on average, for each square meter of the Earth’s surface.

Implying that the CERES satellite is measuring this 0.85 watts, when in fact the instrument has been re-calibrated to match the models.

Mike M. (period)
Reply to  MikeB
August 11, 2015 9:08 am

It is actually the “other data sources”, not CERES or the models, that are the source of the radiation imbalance estimate. The other sources are ocean heat content measurements. The satellite data are not nearly precise enough to give a useful measurement of the imbalance.

Reply to  MikeB
August 11, 2015 11:05 am

Agreed Mike M, but this is why you have to watch the pea under the thimble (as Steve McIntyre says).
The strong implication when reading that sentence is that “the best available data from the CERES satellite” says it is 0.85 watts (otherwise what is the point of mentioning CERES?)
So the claim borrows credibility as if the value had been measured experimentally – when in fact it has not. The measured value was 6.5 watts per square metre.
This is called sophistry isn’t it?

David A
Reply to  MikeB
August 11, 2015 1:14 pm

Yet ocean heat content has been adjusted as well. According to the Hansen quote the “other data sources” are the climate models.

Reply to  ferdberple
August 11, 2015 7:21 am

So true, Ferd. The practice of customizing facts to match and support the agenda has been the mainstay of politics and religion since ancient times. Our modern societal apathy toward science has made this a convenient practice for those pursuing global centralization of “governing bodies” based on exaggerated or false scientific claims.

August 11, 2015 6:29 am

I know of no polite way to say this, Bob Tisdale, but the fact that you give credence to the Energy Budget and argue over its finer details, rather than just outright reject it, shows that you too are incompetent and not deserving of being employed as a bookkeeper, much less a scientist. All of the figures on the right hand bottom half of the energy budget diagram bear no relationship at all to the rest of the diagram. 341 W/m^2 comes in, 341 W/m^2 goes out. The rest is just accounting fraud. If you replicated this diagram as money going into and out of a bank account instead of energy going into and out of a planet, the first audit of your accounts would see you arrested. Tell me, by what magic does the power of the sun (shown as only having a strength of 161) hitting the earth get doubled when it leaves the Earth’s surface (for the first time!)? Oh yes, the magic of the Greenhouse Effect! What a joke! You sir, and all like you, alarmists and lukewarmers alike, are a disgrace to science!

Reply to  wickedwenchfan
August 11, 2015 10:22 am

wickedwench, energy in does not have to match energy out in the ‘short’ term. In fact the earth’s energy budget is never in equilibrium, and can never be in equilibrium, as long as the earth is spinning and the sun is shining.
About the doubling effect, just use the ‘net’ and you will be fine.

August 11, 2015 6:34 am

Three Legged Stool of CAGW: 1) Anthropogenic 2) Radiative Forcing 3) GCMs
Leg the 2nd
Radiative forcing of CO2 warming the atmosphere, oceans, etc.
If the solar constant is 1,366 +/- 0.5 W/m^2, why is ToA 340 (+10.7/−11.2)[1] W/m^2 as shown on the plethora of popular heat balances/budgets? Collect an assortment of these global energy budget/balance graphics. The variations among some of these are unsettling. Some use W/m^2, some use calories/m^2, some show simple percentages, some a combination. So much for consensus. What they all seem to have in common is some kind of perpetual-motion heat loop, with back radiation ranging from 333 to 340.3 W/m^2 without a defined source. BTW, the additional RF due to CO2 from 1750-2011 is about 2 W/m^2 spherical, 0.6%.
Consider the earth/atmosphere as a disc.
Radius of earth is 6,371 km, effective height of atmosphere 15.8 km, total radius 6,387 km.
Area of 6,387 km disc: PI()*r^2 = 1.28E14 m^2
Solar Constant……………1,366 W/m^2
Total power delivered: 1,366 W/m^2 * 1.28E14 m^2 = 1.74E17 W
Consider the earth/atmosphere as a sphere.
Surface area of 6,387 km sphere: 4*PI()*r^2 = 5.13E14 m^2
Total power above spread over spherical surface: 1.74E17/5.13E14 = 339.8 W/m^2
One fourth. How about that! What a coincidence! However, the total power remains the same.
1,366 * 1.28E14 = 339.8 * 5.13E14 = 1.74E17 W
Big power flow times small area = lesser power flow over bigger area. Same same.
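The disc-vs-sphere bookkeeping above can be reproduced in a few lines; a Python sketch using the commenter’s radius (6,371 km earth plus ~16 km effective atmosphere):

```python
import math

# Same total intercepted power, expressed first over the disc that faces
# the sun and then spread over the whole sphere.
S = 1366.0                    # W/m^2, solar constant
r = 6.387e6                   # m (6371 km earth + 15.8 km atmosphere)

disc_area = math.pi * r**2
sphere_area = 4 * math.pi * r**2

total_power = S * disc_area                   # intercepted power, W
mean_flux = total_power / sphere_area         # spread over the sphere

print(f"disc area:   {disc_area:.3e} m^2")    # ~1.28e14
print(f"sphere area: {sphere_area:.3e} m^2")  # ~5.13e14
print(f"total power: {total_power:.3e} W")    # ~1.75e17
print(f"mean flux:   {mean_flux:.1f} W/m^2")  # exactly S/4 = 341.5
```

Because sphere_area is exactly four times disc_area, the mean flux is exactly S/4 = 341.5 W/m^2 regardless of the radius chosen, and the total power is unchanged either way.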
(Watt is a power unit, i.e. energy over time. I’m going to English units now.)
In 24 hours the entire globe rotates through the ToA W/m^2 flux. Disc, sphere, same total result. Total power flow over 24 hours at 3.41 Btu/h per W delivers heat load of:
1.74E17 W * 3.41 Btu/h /W * 24 h = 1.43E19 Btu/day
Suppose this heat load were absorbed entirely by the air.
Mass of atmosphere: 1.13E+19 lb
Sensible heat capacity of air: 0.24 Btu/lb-°F
Daily temperature rise: 1.43E19 Btu/day/ (0.24*1.13E19) = 5.25 °F / day
Additional temperature due to RF of CO2: 0.03 °F, 0.6%.
Obviously the atmospheric temperature is not increasing 5.25 °F per day (1,916 °F per year). There are absorptions, reflections, upwellers, downwellers, LWIR, SWIR, losses during the night, clouds, clear, yadda, yadda.
Suppose this heat load were absorbed entirely by the oceans.
Mass of ocean: 3.09E21 lb
Sensible heat capacity: 1.0 Btu/lb °F
Daily temperature rise: 1.43E19 Btu/day / (1.0 * 3.09E21 lb) = 0.00462 °F / day (1.69 °F per year)
How would anybody notice?
Suppose this heat load were absorbed entirely by evaporation from the surface of the ocean w/ no temperature change. How much of the ocean’s water would have to evaporate?
Latent heat capacity: 970 Btu/lb
Amount of water required: 1.43E19 Btu/day / 970 Btu/lb = 1.47E+16 lb/day
Portion of ocean evaporated: 1.47E16 lb/day / 3.09E21 lb = 4.76 ppm/day (1,737 ppm, 0.174%, per year)
More clouds, rain, snow, etc.
The point of this exercise is to illustrate and compare the enormous difference in heat handling capabilities between the atmosphere and the water vapor cycle. Oceans, clouds and water vapor soak up heat several orders of magnitude greater than GHGs put it out. CO2’s RF of 2 W/m^2 is inconsequential in comparison, completely lost in the natural ebb and flow of atmospheric heat. More clouds, rain, snow, no temperature rise.
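The three “suppose” scenarios above can be rechecked in a few lines; a Python sketch using the comment’s own masses, heat capacities and English units (any rounding differences from the quoted figures are mine):

```python
# Daily ToA heat load absorbed entirely by (1) air, (2) ocean water,
# or (3) ocean evaporation, per the comment's inputs.
BTU_PER_WH = 3.41                 # Btu per watt-hour

total_power_w = 1.74e17           # intercepted solar power, W
heat_per_day_btu = total_power_w * BTU_PER_WH * 24   # ~1.42e19 Btu/day

# 1) All into the air
air_mass_lb = 1.13e19
cp_air = 0.24                     # Btu/(lb*F)
dT_air = heat_per_day_btu / (cp_air * air_mass_lb)
print(f"air:   {dT_air:.2f} F/day")                  # ~5.25 F/day

# 2) All into the oceans
ocean_mass_lb = 3.09e21
cp_water = 1.0                    # Btu/(lb*F)
dT_ocean = heat_per_day_btu / (cp_water * ocean_mass_lb)
print(f"ocean: {dT_ocean:.5f} F/day")                # ~0.0046 F/day

# 3) All into evaporation
latent = 970.0                    # Btu/lb
evaporated_lb = heat_per_day_btu / latent
ppm_per_day = evaporated_lb / ocean_mass_lb * 1e6
print(f"evap:  {ppm_per_day:.2f} ppm of ocean/day")  # ~4.75 ppm/day
```

The three orders-of-magnitude spread between the air, ocean and evaporation responses is the point of the exercise.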
Second leg disrupted.
Footnote 1: Journal of Geophysical Research, Vol 83, No C4, 4/20/78, Ellis, Harr, Levitus, Oort
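For anyone who wants to replay the arithmetic, here is a sketch in Python using only the values quoted above (the areas, masses, heat capacities and conversion factor are the commenter’s figures, not independent data):

```python
# Sketch reproducing the back-of-envelope heat-load arithmetic above.
TSI = 1366.0              # W/m^2, total solar irradiance at ToA
DISC_AREA = 1.28e14       # m^2, Earth's cross-sectional (disc) area
SPHERE_AREA = 5.13e14     # m^2, Earth's surface (sphere) area
BTU_PER_WH = 3.41         # Btu per watt-hour, as used in the comment

total_power = TSI * DISC_AREA                 # ~1.74e17 W, disc or sphere
flux_sphere = total_power / SPHERE_AREA       # ~341 W/m^2, i.e. ~TSI/4

heat_load = total_power * BTU_PER_WH * 24     # Btu/day, ~1.43e19

# If the air alone absorbed it (mass 1.13e19 lb, cp 0.24 Btu/lb-F):
dT_air = heat_load / (0.24 * 1.13e19)         # ~5.3 F/day

# If the ocean alone absorbed it (mass 3.09e21 lb, cp 1.0 Btu/lb-F):
dT_ocean = heat_load / (1.0 * 3.09e21)        # ~0.0046 F/day

# If evaporation alone carried it away (latent heat 970 Btu/lb):
evaporated = heat_load / 970.0                # lb/day
fraction = evaporated / 3.09e21               # ~4.8e-6 of the ocean per day
```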

Reply to  Nicholas Schroeder
August 11, 2015 10:04 am

One issue I have with adding the depth of atmosphere to the radius of the Earth is that the atmosphere, being primarily transparent to energy, shouldn’t be part of a S-B “black body” calculation. To use TOA as the surface of the sphere, a complex S-B gray-body calculation should be made, taking the various emissivity factors of the atmosphere and actual surface into consideration; but the temperature calculation/energy budget claims of the alarmists are all based on straight S-B “black body”, with the claim that emissivity is so close to 1 as to be irrelevant. The flawed usage of S-B based on TOA values for the “energy budget” is why, in my opinion, the “sensitivity” claims are so far-fetched.

August 11, 2015 6:47 am

The precision achieved by the most advanced generation of radiation budget satellites is indicated by the planetary energy imbalance measured by the ongoing CERES (Clouds and the Earth’s Radiant Energy System) instrument (Loeb et al., 2009), which finds a measured 5-yr-mean imbalance of 6.5 W/m^2 (Loeb et al., 2009). Because this result is implausible, instrumentation calibration factors were introduced to reduce the imbalance to the imbalance suggested by climate models, 0.85 W/m^2 (Loeb et al., 2009).

richard verney
August 11, 2015 7:01 am

Something to ponder on.
One cannot expect the energy budget cartoon to balance since it does not describe the real world.
There are so many problems with it, particularly the average approach, which if it were so, there would be all but no weather on planet Earth. It is only because conditions are not some homogenous average that one gets weather fronts. Further the Earth is 3D not 2D and where energy is absorbed is also material.
One ‘error’ not often mentioned is the claim that 161 watts/m^2 of incoming solar is absorbed by the surface. This cannot be correct since approximately 70% of the surface of the Earth is ocean, and water is largely transparent to solar. Incoming solar is not absorbed at the surface of the ocean but instead typically at a depth of some 70 cm to 15 m, and some at more than 100 m. Of the solar that is absorbed at such depths, some stays at depth and some is taken deeper by ocean overturning. It is what is maintaining the average ocean temperature of 3 to 4 °C.
Of course most if not all of the incoming solar eventually finds its way to the surface, or if driven polewards, it may be used in powering the melting of ice. But the issue here is the time scale at which it returns to the surface.
One cannot assume that all of this is constant, nor should one (to simplify matters) assume that solar absorbed at depth instantaneously makes its way to the surface. Given that the thermohaline circulation is measured in approximately a thousand years, it may be that solar absorbed today takes on average, say, 50 years or 100 years to reach the surface and to form part of the outgoing radiation.
If one is considering today’s energy budget, this raises the question of how much solar irradiance reached the oceans, say, 50 years ago or 100 years ago, or perhaps during the Little Ice Age. Perhaps aerosols were different, or patterns of cloudiness were different, such that in the past there was more or less solar reaching the oceans. If that was the case then one would immediately obtain an imbalance, especially when one is looking at relatively small imbalances.

Gary Pearse
August 11, 2015 7:20 am

Bob, this should shake up the W proponents muchly. These would appear to be graphs for ‘Policy Makers’, not scientists. What is the probability that there would be such a strong trend-changing inflection point right at the present? No real scientist or engineer would let this go without considerable thought and analysis. I note Steve Mosher, who thinks only he and two other people on the planet are educated and smart enough to have a discussion on aspects of philosophical puffery as relates to logic, hasn’t turned up yet to defend the sanctity of these atrocious models.
Steve, is the following statement perhaps supported by the philosophical logic of you three guys?:
“Because this result is implausible, instrumentation calibration factors were introduced to reduce the imbalance to the imbalance suggested by climate models, 0.85 W/m^2”
The hubris is so huge in this composting field that there is indeed a large imbalance, but it is one more psychological than scientific or philosophical. Increasingly, the faithful supporters who have some scruples have been stricken with clinical depression over the pause and the suffering is from classical psychological D’Nile. Those without scruples are beyond reach. They recognized the problem and their solution was to adjust away the pause. Aren’t there laws being broken somewhere here? Engineers can be disciplined and even barred from practice for such antics. It is time to put scientists into a strait jacket of enforceable ethical rules.

Reply to  Gary Pearse
August 11, 2015 2:47 pm


August 11, 2015 7:29 am

Excellent article. Yesterday, there was a nice article in the Chronicle of Higher Education critical of “Consensus Statements” put out by scientific organizations. The author focussed on past psychological consensus statements that have been found to be wrong over time. He goes on to say that physics associations feel no need to put out a “Gravity Statement”, but then makes a brief waffling comment about Climate Consensus Statements. How did we get to a point where scientists vote on claims!?
My hope is that scientific organizations would refrain from consensus statements and leave the Journals to publish science that readers can assess for themselves. Identification of consensus should itself be scientific such that Meta-Analyses would be Bayesian-based: The more definitive a claim (prior probability), the more definitive the falsification when contrary data is presented. Posterior probabilities of Consensus Climate Statements should be plummeting…

It doesn't add up...
August 11, 2015 7:43 am

More revisions to history:

When Dr Clette and his colleague recalibrated the two lists after developing a method of choosing a main sunspot observer for a given interval of time while ensuring that observers from adjacent periods overlapped to give smooth transitions, the grand maximum of the latter half of the 20th Century disappeared, the journal says.

August 11, 2015 7:54 am

The warmist story and reams of grant-supported science papers are based almost exclusively on the models, and this post shows better than almost anything I’ve seen what a tissue of speculation they are.
I’m bookmarking this and will try to re-read it periodically.
Again, outstanding.

August 11, 2015 8:12 am

Anthony Watts….;)
Solar activity is NOT linked to global warming: Ancient error in the way sunspots are counted disproves climate change theory
The correction, called the Sunspot Number Version 2.0 was led by Frédéric Clette, Director of the World Data Centre (WDC) SILSO, Ed Cliver of the National Solar Observatory and Leif Svalgaard of Stanford University.

Reply to  Latitude
August 11, 2015 8:13 am

that should have gone to moderation….??

Reply to  Latitude
August 11, 2015 5:02 pm

well….I guess the obvious eluded someone!
It’s just an article in a newspaper that mentioned Leif…………

Reply to  Latitude
August 11, 2015 9:42 am

The obvious eludes you and the authors of the revision. Did you ever look at the solar data?
The statement made by the authors regarding a lesser trend in the v2 SSNs since the Maunder Minimum as not being responsible for climate change/global warming is a strawman argument.
Using the v2 yearly SSN data as downloaded on July 1 from (if they haven’t been revised again since then):
The modern maximum in solar activity occurred from 1935.5 to 2004.5, a 70-year period, when v2 yearly SSNs averaged 108.5, as compared to a 65.8-per-year average for the 70 years between 1865.5 and 1934.5: a 65% increase in sunspot activity sustained over 70 years.
This takes us back before the 1880 start of the time range typically used in climate work.
Before someone accuses me of ‘cherry-picking’ – the modern era when compared to earlier times still had higher solar activity:
Using 9 whole solar cycles for each period: the 1914-2009 period that includes the modern maximum averaged 95.1 SSN per year, and was 22.4% higher than the previous 9 cycles from 1810-1913, which averaged 71.7 annually, and 18.4% higher than the 1712-1809 period, which averaged 78.7 annually.
The Sun caused global warming during the modern maximum, when sunspot activity was 65% higher for 70 years than the previous 70 years.
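A quick sanity check of the headline figure above (a sketch using only the two 70-year averages quoted in the comment):

```python
# Check the quoted 65% increase from the two 70-year average v2 SSNs.
modern_max = 108.5   # average yearly SSN, 1935.5-2004.5 (quoted above)
prior_70yr = 65.8    # average yearly SSN, 1865.5-1934.5 (quoted above)

increase_pct = (modern_max / prior_70yr - 1) * 100
print(round(increase_pct))   # prints 65
```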

Reply to  Bob Weber
August 11, 2015 4:13 pm

Bob they do not seem to get it, even in the face of the black and white data you have presented.

Reply to  Bob Weber
August 11, 2015 10:27 pm

I agree. I have been perplexed by that as well. The modern maximum may not be anything out of line from the last 10,000 years, but in the last 200 yrs, the most recent (until cycle 24) stands out as significantly higher than the period going back to before the Maunder Minimum.

Reply to  Bob Weber
August 12, 2015 11:44 am

Instead of cherry picking, let us simply show the complete record:
The blue curve is the 11 yr average number of sunspot groups since 1600. The red curve is the Carbon 14 cosmic ray based cosmic ray modulation.

Reply to  Bob Weber
August 12, 2015 11:54 am

Translating the sunspot number into TSI, the record looks like this:

Reply to  Bob Weber
August 12, 2015 12:03 pm

Or the Open Magnetic Flux [solar wind], the cosmic ray modulation, and the sunspot groups since 1600:

Reply to  Bob Weber
August 12, 2015 12:21 pm

The Group Number for the first half of the data since 1700 was 4.4+/-0.5, and for the last half also 4.4+/-0.2. Statistically indistinguishable.

August 11, 2015 8:16 am

Bob, here is an interesting comment in the 1959 paper by Kaplan that refutes earlier papers by Plass. Plass is the touchstone that Gavin Schmidt lives by and is used as the input to basically all climate models…

Reply to  denniswingo
August 11, 2015 8:22 am

I apologize that this is so large.
Bob, there is an incredibly interesting statement by Kaplan above. “Although Plass realized that the existence of clouds would DECREASE the effect of CHANGING CO2 on temperature…..”
I have not seen that discussed in the manner that Kaplan does.

Reply to  denniswingo
August 11, 2015 10:25 am

Wow, and we thought all the innovation was happening today. Kaplan published a work on Remote Satellite Sounding atmospheric analysis in 1959, only two years after Sputnik was launched, and a year before Tiros 1, which only took “photos” of clouds.
Makes you wonder why it took another 20 years to start a satellite record…
Oh wait – that’s right – we HAD a satellite record that started before 1979 – it’s just that the pro-AGW “deciders” chose to ignore pre-1979 satellite data because it was “different.” More likely it’s because it didn’t start at a low point in a temperature cycle. Same reason they ignore pre-1979 satellite-based sea ice extent data.
They can homogenize and adjust and tweak and in-fill land-based data using old instruments claiming it calibrates them to the new high-tech instrumentation, but they can’t figure out how to reconcile between two satellite datasets using different instrumentation?

Reply to  jstalewski
August 11, 2015 11:35 am

Nimbus 1 flew in 1964 with the High Resolution Infrared Radiometer (HRIR) (8 km resolution) which was sensitive in the band of 3.5-4.1 microns). There was also the Medium Resolution Infrared Radiometer (32 km resolution). Nimbus I only lasted 3 weeks in orbit. Nimbus II flew in 1966 and lasted for 9 months. Nimbus III flew in 1969 and lasted over a year. Lots of good data there and we helped to translate it to modern NetCDF-4 format.

Gary Pearse
Reply to  denniswingo
August 11, 2015 11:01 am

Great find Dennis, so Gavin et al cherry-picked the falsified sensitivity of Plass when Kaplan’s paper was already in the literature for over 30 years.

Reply to  Gary Pearse
August 11, 2015 11:22 am


Reply to  Gary Pearse
August 11, 2015 11:30 am

To give a less flip reply…..
Kaplan has some incredibly good insights as it was in this period that instrumentation finally became “selective” (able to discriminate individual absorption wavenumbers) for the different absorption/emission modes (stretching, rocking) of the CO2 molecule. This was done by the USAF principally because they were designing the seekers for the Sidewinder and other missiles and wanted to use bands that were not IR absorbers. This is why our missiles were so much better than the Russian missiles in Vietnam. Thus a lot of this information was highly classified at the time and it only leaked out through science papers by people like Plass, Kaplan and others who were able to get the equipment that had been paid for by USAF R&D dollars.
For those old enough to remember, this is what the “upper atmospheric research” flights by the USAF were about during that time period. There is a great quote above from Kaplan about Plass “his method is too subjective to be reproducible”. Also intriguing about Kaplan’s presentation is the influence of clouds on the ability of CO2 to absorb and emit IR. This makes a lot of sense as H2O overlaps CO2 in most of its wavebands and thus if you have an absorber that is orders of magnitude more prevalent in the same space (clouds and CO2), the energy is going to be preferentially absorbed by H2O over CO2.
I have never seen that expounded upon anywhere in the modern literature (more than happy to be corrected about this).

August 11, 2015 8:17 am

Thanks, Bob. Your analysis of the modeled Earth is very good. I think it shows (again) that the IPCC models are a total waste.

August 11, 2015 8:48 am

This RCP6.0 ensemble is bad. Now imagine what the Business as Usual outputs look like in their Watts/m^2 spread. And it’s the RCP 8.5 ensemble that the policy makers are using to justify CO2 cuts and expensive renewables.
Energy economies are being “fundamentally transformed” with the justification based on garbage. We know what the real agenda of the UNFCCC and NWO socialists is, and it has nothing to do with controlling anthropogenic CO2 global warming. With Bob’s “most excellent” exposé here, the model pseudoscience behind which this political agenda hides should be obvious to the non-climate science and engineering communities (or anyone else with a brain). Now it is time for those communities to open their eyes and quit pretending this adulteration of science is just “a bunch of d3#iers”, and to speak out loudly against the CC agenda before any further damage to science is done.

Matt Skaggs
August 11, 2015 8:56 am

“Phrased differently, because the satellites were inaccurate, climate scientists had to rely on the outputs of climate models and assume they were correct.”
Thanks Bob, this work cuts right to the heart of AGW theory. From the perspective I offered in my post on attribution at Climate Etc., we can see that mainstream climate scientists still have no clue on how to weigh the value of different types of evidence. Throw out the direct measurements in favor of model results, and then declare yourself 95% confident in your answer!

Mike M. (period)
August 11, 2015 8:59 am

Some very important stuff here, mixed in with a lot of less important stuff.
The variance between model forecasts is old news; that is part and parcel of the variation in climate sensitivity. The variance between models in TOA incoming solar is no big deal, since it is less than 1% and within experimental uncertainty. And the variance between models in outgoing longwave radiation is not of independent concern, since it is a direct consequence of the variation in albedo (total outgoing being very nearly fixed).
But the 10% range in SW reflected is shocking (but not surprising) since that means that many of the models get a global albedo outside the measurement uncertainty. I knew that there were wide variations in zonally averaged cloud radiative effects (AR5 Fig. 9.5) but not that they were getting global albedo so wrong. The same applies for the range in TOA imbalance. There are two causes of that. One is differences in the change in ocean heat content (the physically real part of the imbalance), see AR5 Fig. 9.17. The other is that some of the models fail to conserve energy, see: Forster, P. M., T. Andrews, P. Good, J. M. Gregory, L. S. Jackson, and M. Zelinka (2013), Evaluating adjusted forcing and model spread for historical and future scenarios in the CMIP5 generation of climate models, J. Geophys. Res. Atmos., 118, 1139–1150, doi:10.1002/jgrd.50174.
Models that are significantly in variance with reality on key quantities should be deemed unacceptable. You wrote: “If so, why are those models used by the IPCC? Didn’t they bother to check whether the models presented plausible simulations of Earth’s energy imbalance?” Yes, they checked, but they don’t seem to care. I think that is the key point here, and is the central reason why IPCC and their fellow travelers should be deemed unreliable.
A second key point is that the models can not get clouds right, hence the albedo errors. High climate sensitivities (above 2.0 K) are the result of predicted cloud feedbacks. If they can’t get them right now, how can they get them right in future? Yet IPCC has “high confidence” in the cloud feedbacks. Oh, the stupidity.
A lesser issue: Please be careful about your use of the word “assumed”. For example, you wrote “… among climate models about what caused the Earth’s assumed energy imbalance”. The imbalance is not assumed, it is calculated. At best, your wording is confusing; at worst, it gives ammunition to those who look for any little reason to “prove” their critics wrong.

Reply to  Mike M. (period)
August 11, 2015 10:31 pm

Thanks for the perspective. Somewhere in their assumptions they make a major mistake, simply because the missing heat was never there in the first place. The SW reflection is higher than they model.

Lance Wallace
August 11, 2015 9:18 am

Trenberth says most of the excess energy goes into the ocean. If so, do we have much idea what the lag time would be before there is an increase in atmospheric temperature? Suppose the lag time is 100 years (allowing for very slow mixing of the ocean). Then we could maintain this energy imbalance for a century before seeing an effect.

Reply to  Lance Wallace
August 11, 2015 11:03 am

Lance, his paper said it went into the deep ocean, because ARGO did not find it down to 2000 meters. The abyssal temperature is between 0 and 3 °C. So if Trenberth were right, it would never come out. Thermodynamics. But he is wrong. Essay Missing Heat.

richard verney
Reply to  Lance Wallace
August 11, 2015 12:27 pm

See my comment above [richard verney (August 11, 2015 at 7:01 am)] that comments upon a similar theme, and why it would be extremely unlikely to get a balance.
The point is that radiation out is from the surface, whereas not all radiation that reaches the ground is absorbed (or reflected) by the surface since some of the incoming solar irradiance is absorbed in the oceans at depth and it takes time for that energy to get back to the surface to be radiated from the surface.
Therefore by looking at radiation in to the surface and comparing it with radiation out from the surface one is engaged in comparing apples with pears.

August 11, 2015 9:41 am

Thank you Bob. A very readable post and is a complete clubbing to the notion of “consensus”.

Claude Harvey
August 11, 2015 10:10 am

I hope everyone appreciates the staggering amount of work that went into assembling this piece. It exposes the preposterous fact that the climate community still does not have a handle on actual measurement of the upper atmosphere’s energy imbalance. In a sane world, said measurement would be the STARTING POINT for calculating future global temperatures. Instead, we have a fleet of models that have “inferred” that imbalance in order to hind-cast actual historic global temperatures consistent with a monstrous array of other model inputs. Since that array of “other inputs” has varied from model to model, it should come as no surprise that the resulting inferred upper atmospheric energy imbalance varies over an absurd range from model to model. That fact alone should inform even the most casual observer that the fleet of AGW-based climate models is a classic collection of “garbage-in-garbage-out” and that averaging their outputs will get you “average garbage”.

August 11, 2015 10:18 am

“Fig. 13 presents the simulated energy balance in absolute form. There is a 5 W/m^2 span between models for the base period energy imbalances. Four of the models’ energy imbalances for the base period are negative.”
Thanks Bob, very interesting. Some months ago I programmed a simple EBM model. My control knob was that the annual energy imbalance at TOA should be zero because there were no long-term heat sinks. Then I searched for the bugs in the program. I had to work hard but finally I succeeded.

Ted G
August 11, 2015 10:20 am

Over the years I have seen a lot of great posts, from some of the smartest people interested in the climate and what drives it.
Bob you knocked this one over the wall for a home run. That’s why I love WUWT so much. Real science on steroids.
And always smart thoughtful comments. What more can any one with an enquiring mind want?

Gary Pearse
August 11, 2015 11:04 am

Man oh man, Bob, even Moshe stayed away and much of his posting has been to defend models.

Reply to  Gary Pearse
August 11, 2015 4:58 pm

Gary Pearse
August 11, 2015 at 11:04 am
Hello Gary.
One way to explain why even Moshe is staying away from this particular issue of the models in question is that defending the models here would completely confuse the AGW “rationale”.

August 11, 2015 11:05 am

Thanks Bob. Stupendous.
August 11, 2015 1:14 pm

I keep having a problem with Trenberth’s schematic and would like some help. He has essentially set up two control volumes, one being the earth, the other the atmosphere, and is measuring energy fluxes through boundaries (earth to atmosphere, atmosphere to ‘space’). While he balances the fluxes, I recall being taught that a uniform body radiates uniformly (not referring to black body vs grey body here). One of the assumptions of CO2 theory is that the atmosphere is ‘well mixed’ or uniform. Given that assumption, and that he is already averaging the energy across the entire surface, shouldn’t the radiative flux of the atmosphere be equal in both radial directions (assuming the thin-wall approximation that dr << r, so dA ≈ 0)? Why can clear-atmosphere radiation towards the earth be almost twice the clear-atmosphere radiation towards space? It reminds me of the Grinch’s statement to Cindy Lou Who about a light bulb that only lights on one side.

Reply to
August 11, 2015 2:10 pm

A wildly oversimplified model may be helpful.
Imagine the atmosphere as consisting of, say, four stationary (!) molecules on a vertical line above the surface and that each molecule can radiate only along that vertical line–but is just as likely to radiate up as down. Now further assume that a (low-energy) photon emitted upward by the lowest molecule has only, say, a 75 % chance of exciting the next molecule up; if it doesn’t excite that one, then it has a 75% chance of exciting the molecule after that, etc. An upward photon from the highest molecule has a 100% chance of being lost to space. Of course, a downward photon from the lowest molecule has a 100% chance of exciting the surface.
The surface is concurrently being excited by high-energy photons that don’t interact with the atmosphere and is therefore the ultimate source of all the low-energy photons that excite the molecules. If you follow the probabilities, I think that you find that the rate of photon exchange between the surface and the molecules exceeds the rate of photon loss to space.
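A toy Monte Carlo along the lines of this description (my sketch, not the commenter’s code; the four molecules, the 75% absorption chance, and the 50/50 up/down re-emission are the numbers from the comment) shows the surface does emit more than escapes to space:

```python
import random

def simulate(n_photons=50_000, n_layers=4, p_absorb=0.75, seed=1):
    """1-D toy model: the surface emits photons upward; each of n_layers
    molecules absorbs a passing photon with probability p_absorb and
    re-emits it up or down with equal chance; the surface re-emits
    (upward) every photon that comes back down. Count total surface
    emissions versus photons finally lost to space."""
    rng = random.Random(seed)
    surface_emissions = lost_to_space = 0
    for _ in range(n_photons):
        surface_emissions += 1          # initial upward emission
        pos, direction = 0, +1
        while True:
            pos += direction
            if pos > n_layers:          # past the top molecule: gone
                lost_to_space += 1
                break
            if pos == 0:                # back at the surface: re-emitted up
                surface_emissions += 1
                direction = +1
            elif rng.random() < p_absorb:
                direction = rng.choice((+1, -1))  # absorbed, re-emitted

    return surface_emissions, lost_to_space

up, out = simulate()
print(up / out)   # expect a ratio noticeably above 1
```

Every photon eventually escapes (space is the only sink), so the escapes equal the photons supplied; the extra surface emissions come from photons returned by the molecules.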

Reply to  Joe Born
August 11, 2015 2:58 pm

A downward photon has a 75%chance of impinging molecule and then a 50/50 chance of being directed back towards space where it has a 75%chance of escaping again, and so on down the chain.

Reply to  Joe Born
August 11, 2015 3:11 pm

In other words the probabilities cancel out.

Reply to  Joe Born
August 12, 2015 2:32 am

I’m not 100% sure I agree with your analysis.
Look at it this way. Suppose the energy of each high-energy photon equals twice that of each low-energy photon. And, remember, low-energy photons can come into being only in response to the surface’s receiving a high-energy photon from space: the number of low-energy photons created equals twice the number of high-energy photons received from space. That means that at equilibrium the number of (in accordance with our simple model, necessarily low-energy) photons lost to space equals twice the number of high-energy photons the surface receives from space: on average, power out equals power in.
But the surface emits photons in response not only to the photons it receives from space but also (one for one) to those it receives from the molecules. So the power it emits exceeds the power lost to space.
Colloquially, this is known as the “greenhouse effect.”

Gary Pearse
Reply to  Joe Born
August 12, 2015 2:51 pm

When you read a book or watch a movie, you don’t get blurring by scattering of photons, letters don’t change places, and you can see exquisite detail in Saturn’s rings or moon craters, so going out of the atmosphere is pretty clear.

Reply to  Joe Born
August 12, 2015 7:04 pm

Joe, I understand what you are saying, but this reads more like conservation of photons rather than conservation of energy. At each photon absorption, those released will be at a lower energy state. What it seems like you are saying is that, due to the probability gradient from the surface up, there is more energy transmitted at the bottom. However, if you take your simple model and create a 4-interaction chain where 25% is lost at each interaction, followed by a total loss at the last interaction, then for a 100 W input the surface would radiate 100 (so far so good for the GH argument); but when you add up the 25% losses from each level, plus the final energy loss, we get 142 out for 100 in (each atmosphere molecule radiates equally in each direction, with a 25% loss at each stage). Somehow this doesn’t make me confident. I’m still missing something.

Reply to  Joe Born
August 13, 2015 6:05 pm

Sorry I missed your question.
Yes, it does sound like I’m counting photons instead of energy, but it amounts to the same thing if you’re going by the simplified model in which (contrary to reality, of course, but I think acceptable for our purposes) all photons emitted by the molecules have the same energy. (If we want, we can say that each insolation photon has, say, twice the energy of each re-radiated photon–i.e., we get a stock split–but it doesn’t really matter; you can forget photons and count ergs instead.)
But you’ve just made me realize that, although the model is simple, totaling up the emissions gets tedious. Unless (as often happens) the programming does, too, I’ll try to do a quick R script that totals it up so you can see it if you have an R interpreter.
But, believe me, it works out.

Reply to  Joe Born
August 13, 2015 7:23 pm

Here’s quick and dirty code to illustrate that one-dimensional greenhouse-effect model:

# A perennial difficulty with the greenhouse-effect concept is that it seems
# wrong that the surface could be radiating more than the earth does to space.
# Sure, atmospheric molecules absorb and re-radiate, but that radiation is
# isotropic, so it seems it all ought to cancel.
# The following routine simulates the situation one-dimensionally; if a molecule
# emits a photon, it emits it either up or down.  We just follow the photons,
# add them up, and see that more are radiated by the surface than are lost to
# space.  One can follow the code to see what it's doing and then run the code
# to let it keep count. The result is that, as advertised, the surface emits
# more than the earth emits to space.
up = 1;
down = -1;
surface = 1;
topOfAtmosphere = 4;
tMax = 200; # Number of simulation time steps.
# Give the optical density a sudden increase, as though the carbon-dioxide concentration suddenly increased.
opticalDensity = c(rep(0.6, tMax / 2), rep(0.9, tMax / 2)); # Per layer
k = 1e-6 * c(0.1, rep(1, topOfAtmosphere - 1));  # An emission-rate coefficient
photonsPerInterval = function(q, k) q - 1 / (q ^ (-3) + k) ^ (1 / 3); # An arbitrarily chosen function for photon emission
insolationMinusReflection = rep(300, tMax);
# insolationMinusReflection[(tMax / 2):tMax] = 0; # Energy the surface absorbs from space at time t
lost = numeric(tMax); # Energy lost to space at time t
q = emitted = matrix(0, nrow = topOfAtmosphere, ncol = tMax); # Energy stored in and emitted from each layer at time t
for(t in 2:(tMax - 1)){
  # Shine on surface
  q[surface, t] = q[surface, t] + insolationMinusReflection[t - 1];
  #  Have every level (surface or molecule) emit
  #   some (or all) of its energy as radiation.
  for(emitter in surface:topOfAtmosphere){
    photonsToEmit = emitted[emitter, t] =
      round(photonsPerInterval(q[emitter, t], k[emitter]), 0);
    # if(emitter == surface) surfaceEmissions[t] = photonsToEmit;
    remainingEnergy = q[emitter, t] - photonsToEmit;
    q[emitter, t + 1] = q[emitter, t + 1] + remainingEnergy;
    while(photonsToEmit > 0){
      # Pick a direction in which to emit the photon
      direction = ifelse(runif(1) > 0.5 | emitter == surface, up, down);
      # Have photon travel through other molecules (layers) until absorbed or lost to space
      absorber = emitter + direction;
      while(absorber >= surface & absorber <= topOfAtmosphere){
        if(absorber == surface | runif(1) < opticalDensity[t]){
          q[absorber, t + 1] = q[absorber, t + 1] + 1; # Photon absorbed here
          break;
        }
        absorber = absorber + direction; # Not absorbed; keep going
      }
      if(absorber > topOfAtmosphere) lost[t] = lost[t] + 1;
      photonsToEmit = photonsToEmit - 1;
    }
  }
}
q = q[, -tMax];
lost = lost[-tMax];
emitted = emitted[, -tMax]
insolationMinusReflection = insolationMinusReflection[-tMax];
plot(NA, xlim = c(0, tMax), ylim = range(0, emitted, lost),
     xlab = "time", ylab = "Radiation");
for(i in surface:topOfAtmosphere) lines(emitted[i, ], col = i);
lines(emitted[1,], lwd = 3);
lines(lost, col = topOfAtmosphere + 1, lwd = 3);
lines(insolationMinusReflection, lty = 2, lwd = 3);
abline(v = tMax / 2 + 1, lty = 3); # Denotes time of sudden opacity increase.
legend("bottomright",
       legend = c("surface emissions", "emissions to space", "surface insolation"),
       lty = c(1, 1, 2), lwd = 3, col = c(1, topOfAtmosphere + 1, 1), bty = "n");
legend("topleft", legend = paste("atm. layer", 1:(topOfAtmosphere -1)),
       lty = 1, col = 2:topOfAtmosphere, bty = "n");
title("Radiation from the Earth's Surface\nCompared with Radiation to Space")
Reply to
August 12, 2015 12:00 am

“Why can clear atmosphere radiation towards the earth be almost twice the clear atmosphere radiation towards space? It reminds me of the Grinch’s statement to Cindy Lou Who about a light bulb that only lights on one side.”
The cause is multiple scattering of IR radiation in the atmosphere.

Dr. Daniel Sweger
August 11, 2015 1:49 pm

It is about time that we stop talking/arguing about models on a scientific website. The entire issue revolves around “What is science?” As a scientist for many years and one who has done some modeling, models are NOT science. Let me repeat, models are NOT science. They can be useful, but only insofar as they reflect reality, and reality is data. These models are based only on speculation, and as such they should not be treated as having anything to do with science. One example of rampant speculation, particularly in the light of the Maunder Minimum, is assuming all the future solar cycles are going to be the same as cycle 23.
While I appreciate the work you have done and what it demonstrates about the uselessness of climate models, this entire discussion about energy balance at TOA is unscientific at its core. The fact is, however, that it is not really science.
The language of science is data–hard, cold data. The argument that we as scientific climate skeptics need to make is the unscientific nature of all climate models. We need to take the argument away from the alarmist’s domain of quibbling about details to its most fundamental level–the very definition of science.
Even the temperature records, as they currently exist, are not science. How temperatures that were recorded 10 days ago can be arbitrarily altered and still be called “scientific data” is beyond me. But that is what GISS and the rest, including BEST, have done and are doing. Of course, the “temperature record” shows warming–that is how they have manipulated the actual data. The only ones that can possibly be considered data are the satellite records, but they only go back less than 40 years.

Reply to  Dr. Daniel Sweger
August 11, 2015 5:09 pm

Very true. Selecting the appropriate battlefield, scientific basics, is key.
Otherwise we get lost and play by their fake rules.

August 11, 2015 3:44 pm

I see a change in the 1990s, when all microwave transmitters would have been focused in on the Gulf. Was the stormy desert whipped up by weapons of mass destruction?
Microwaves in a CO2 rich environment

Reply to  jmorpuss
August 12, 2015 7:33 am

CO2 does not absorb in the microwave spectrum.

August 11, 2015 3:57 pm

My post above shows this quote below to be dead wrong.
With increasing greenhouse gases in the atmosphere, there is an imbalance in energy flows in and out of the earth system at the top of the atmosphere (TOA): the greenhouse gases increasingly trap more radiation and hence create warming (Solomon et al. 2007; Trenberth et al. 2009). ‘‘Warming’’ really means heating and extra energy, and hence it can be manifested in many ways. Rising surface temperatures are just one manifestation.

August 11, 2015 6:24 pm

“the greenhouse gases increasingly trap more radiation and hence create warming (Solomon et al. 2007; Trenberth et al. 2009)”
Ah yes, the old “trapping of radiation” argument. Well at least they have moved on from “trapping heat”, which is well known to be impossible.
But "trapping radiation" (discussing only electromagnetic radiation (EMR), i.e. light at visible or IR wavelengths) is indeed quite possible. It is done every day by the microwave oven most of us have in our kitchens, offices, etc. All that is necessary is a Faraday cage: a continuous conductive surface that surrounds a source of EMR. But, you say, the "window" in my microwave oven is transparent; I can see through it. Well, sort of: the window in your oven is actually a conductive metal plate with holes through it. The holes are smaller than the wavelength of the EMR, so the EMR cannot escape and is indeed "trapped". There is some EMR leakage out of a microwave oven because the conductive seals around the edge of the door warp/shrink with time.
The IR active gases in the Atmosphere of the Earth (aka the “Greenhouse Gasses”) are not conductive and they are not continuous. They DO NOT “trap radiation”. They simply “redirect” the path of some radiation which causes it to pass through the atmosphere another time (or several) at the speed of light. This merely delays the flow of energy through the Sun/Earth/Atmosphere/Universe system.
Mr. Tisdale has presented quite a collection of data showing how the climate science community has attempted to establish the alleged "radiative imbalance", which is in fact just a conjecture. It has never been observed in real data. Once proper error bars are assigned to any of the climate science "energy budgets", the "imbalance" is anywhere within plus or minus about 15 W/m^2. It should be noted that the number ZERO is within that range of plus/minus 15 and is thus an acceptable answer that does indeed match the observed data.
And yes, all of Maxwell’s equations and Faraday’s work applies to all wavelengths/frequencies/wave numbers of EMR radiation from kiloHertz to Visible and Ultraviolet light (IR light falls within those extremes).
There are some nomenclature differences between electrical engineers who speak of dielectric constants and optical engineers who speak of refractive indices. These terms are in fact interchangeable with some small manipulations of Maxwell’s equations. I have graduate degrees in both fields and decades of professional experience applying these equations.
Nice work, Mr. Tisdale; thanks for compiling all of it for us. However, the "radiative imbalance" is just a conjecture that is still unproven. There are certainly transient radiative imbalances in the system as heat flows through different materials at different velocities, but a constant, always-present radiative imbalance that has anything to do with determining the average temperature at the surface of the Earth is just a conjecture, nothing more.
Cheers, KevinK.

August 11, 2015 7:35 pm

One fact is clear from the very beginning: because of measurement accuracies, it is impossible to measure imbalances smaller than 6.5 W/m2 at the TOA. I may be wrong, but I get the impression from some comments that an increasing GH phenomenon due to increased GH gas concentrations would, in the first place, cause increased outgoing LW radiation at the TOA. Let us look at the big numbers. The Earth's surface radiates upward something like 396 W/m2, and the atmosphere radiates upward about 238 W/m2. The difference between these two numbers is the GH effect: because GH gases absorb LW radiation, the LW flux is reduced on its way to space.
So what is the immediate effect if the GH gases absorb more radiation? The outgoing LW radiation is reduced, not increased. The second step is caused by the first law of thermodynamics. The Earth has to reach its radiation balance, and for a step change it will do so in about a year. This means that the Earth's surface temperature will increase until it radiates so much more LW energy upward that the outgoing LW radiation at the TOA equals the incoming SW radiation (atmosphere 71 W/m2 + surface 167 W/m2 = 238 W/m2). The IPCC's claim that there is an imbalance of about 0.9 W/m2 is just an estimate without any scientific basis.
Dr. Daniel Sweger commented that science should be based on real data and not on models. At some point in these discussions, a comment always appears asking how (on Earth) the atmosphere could possibly radiate about 345 W/m2 of LW radiation downward when the Earth's surface receives only 167 W/m2. The first simple fact is that these fluxes are real, because they are measured. The second fact is that they obey the laws of radiation. This is also the major effect of the GH phenomenon; thanks to it, we have a habitable planet.
The IPCC's models overestimate the warming impact of CO2 by about 200% because they use positive water feedback twice: first in calculating the strength of CO2 in a constant-relative-humidity atmosphere, and then again by doubling all the anthropogenic effects because of constant RH. Finally, it may be that there is a negative water feedback (humidity, clouds, etc.) which can compensate for the small warming effects of the GH gases.
I apologize for my language errors; I am not a native English speaker.
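[Editor's note: the budget arithmetic discussed above can be checked in a few lines. Here is a minimal Python sketch using the published Trenberth et al. (2009) figures; note that the comment above quotes a slightly different shortwave split (71 + 167 W/m2) from its own analysis.]

```python
# TOA budget figures from Trenberth et al. (2009), all in W/m^2.
incoming_solar = 341.3      # incident shortwave at TOA (about TSI / 4)
reflected_solar = 101.9     # outgoing shortwave (clouds + surface albedo)
outgoing_longwave = 238.5   # outgoing longwave at TOA

# Shortwave actually absorbed by the climate system:
absorbed_shortwave = incoming_solar - reflected_solar
print(round(absorbed_shortwave, 1))   # 239.4

# A balanced budget would have outgoing longwave equal to absorbed
# shortwave; the difference is the claimed imbalance.
imbalance = absorbed_shortwave - outgoing_longwave
print(round(imbalance, 1))            # 0.9
```

Whether that 0.9 W/m2 is measurable is exactly the point in dispute, since the absolute accuracy of the TOA measurements is several W/m2.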

Reply to  aveollila
August 11, 2015 10:44 pm

I understood all of what you wrote clearly. And many here concur (if I recall correctly) that water vapor and clouds are more likely to be net neutral or negative feedbacks rather than positive. The GCM AGW believers must surely be worried at this point that their game is almost up.

Bill Illis
August 12, 2015 7:31 am

The energy balance figures should only be shown as absolute values, not as anomalies from a baseline.
This is an "in minus out" measure, which has either a positive or a negative balance.

August 12, 2015 9:04 am

This is explained very nicely, which is what I like about Bob Tisdale's articles: he is clear and explains things.
Not all of us know everything (myself included), and he assumes as much, which is why these articles are so good.

Svend Ferdinandsen
August 12, 2015 11:25 am

I wonder how they treat the part of incoming solar radiation that is in fact LW.
First of all, visible radiation is only 37% of the incoming energy; the band from 1 to 2 µm carries 21%, and 2 to 20 µm carries 6%.
The LW solar radiation between 2 and 20 µm, at 6%, equals about 20 W/m2 on average.
Your reference to the modeled absolute temperatures tells all.

george e. smith
Reply to  Svend Ferdinandsen
August 12, 2015 11:52 am

For a blackbody radiator, 25% of the power is at wavelengths shorter than the peak wavelength (plotted on a wavelength scale).
98% lies between half the peak wavelength and eight times the peak wavelength, and only 1% lies beyond each of those points.
For the external solar spectrum the peak is at 0.5 micron, so 1% is at less than 250 nm and 1% is at longer than 4.0 microns.
So saying that 6% is between 2 and 20 microns is misleading. There is very little solar energy beyond 5 microns, so I wouldn't say the sun is a source of LWIR.
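[Editor's note: these percentages can be checked numerically by integrating the Planck function. The sketch below, in Python, assumes a blackbody at an effective solar temperature of 5778 K, which the real solar spectrum only approximates; it reproduces both the "~1% beyond 4 microns" figure here and the "~6% between 2 and 20 µm" figure quoted above.]

```python
import numpy as np

# Physical constants (SI units).
h, c, k = 6.626e-34, 2.998e8, 1.381e-23

def planck(lam, T):
    """Blackbody spectral radiance at wavelength lam (metres)."""
    return (2 * h * c**2 / lam**5) / np.expm1(h * c / (lam * k * T))

def trapezoid(y, x):
    """Trapezoidal integration (written out to avoid NumPy version differences)."""
    return float(np.sum((y[1:] + y[:-1]) / 2 * np.diff(x)))

T_sun = 5778.0                       # assumed effective solar temperature, K
lam = np.logspace(-7.5, -3, 20000)   # wavelength grid, ~0.03 um to 1000 um
B = planck(lam, T_sun)
total = trapezoid(B, lam)            # total emitted power (per unit solid angle)

def fraction(lo_um, hi_um):
    """Fraction of total blackbody power between two wavelengths (microns)."""
    m = (lam >= lo_um * 1e-6) & (lam <= hi_um * 1e-6)
    return trapezoid(B[m], lam[m]) / total

print(fraction(2, 20))     # about 0.06: the "6% between 2 and 20 um" figure
print(fraction(4, 1000))   # about 0.01: "1% is at longer than 4.0 microns"
```

Both commenters' numbers are therefore roughly consistent with a blackbody sun; the disagreement is over whether a 6% tail qualifies sunlight as a meaningful LW source.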

August 13, 2015 10:40 am

Note the increase in TOA Incident Shortwave Radiation from 1900 to the early 1950s (Cell b). That indicates the modelers are still trying to explain part of the warming in the first half of the 20th Century with a notable increase in the output of the sun, which may or may not have happened.
As we now know, this increase did not happen.
TSI now is comparable to what it was 100 years ago.

August 22, 2015 7:06 am

Reblogged this on Climate Collections and commented:
Mighty meaty post by Bob Tisdale. Grab a cup (or pot) of coffee and enjoy.
