Losing One's Balance

Guest Post by Willis Eschenbach

According to NASA, we have the following exciting news about a new study.

Direct Observations Confirm that Humans are Throwing Earth’s Energy Budget off Balance

Earth is on a budget – an energy budget. Our planet is constantly trying to balance the flow of energy in and out of Earth’s system. But human activities are throwing that off balance, causing our planet to warm in response.

“Off Balance” … sounds scary, huh? Plus, according to NASA, this isn’t some computer model output, it’s “direct observations”.

The paper, sadly paywalled, is entitled “Observational evidence of increasing global radiative forcing” by Kramer et al., hereinafter Kramer2021. It claims that from 2003 through 2018, human actions increased the downwelling longwave infrared radiation from the atmosphere by 0.53 ± 0.11 watts per square meter (W/m2).

So let me see if I can explain the manifold problems with this hot new Kramer2021 study. Let me start by explaining the size of the system we’re talking about, the huge planet-wide heat engine that we call the “climate”. Here is an overview of what happens to the sunlight that warms the planet. The Kramer2021 study has used CERES satellite data, and I am using the same data.

Figure 1. Solar energy on its path from the top of the atmosphere (TOA) to the surface.

Note that we are talking about hundreds of watts per square metre of the surface of the earth.

Next, to the same scale, here’s a look at the energy absorbed by the atmosphere that is returned to the surface via downwelling longwave thermal radiation.

Figure 2. Sources of energy that power the downwelling longwave radiation that is absorbed by the surface. Read it from the bottom up. This is to the same scale as Figure 1.

So … how much of this downwelling longwave does the new study claim is of human origin during the period 2003 to 2018? See that skinny line to the right of the “300” on the vertical axis? That’s how much the energy is “off balance” …

That’s their claim.

Too big a scale to see how much the study is actually claiming? OK, here’s a detail of Figure 2:

Figure 3. Detail of Figure 2, to show the size of the amount that we’re claimed to be “off balance”.

The “whiskers” to the right of the “355” on the vertical axis show the size by which they are claiming that humans have made the downwelling longwave radiation from the atmosphere “off balance” …

So that’s the first problem with their analysis. They are claiming to diagnose an almost invisible change in downwelling longwave, in a very chaotic, noisy, and imperfectly measured system.

The next problem is with the claim that they are using “direct observations” to get their results. Sounds like they’re avoiding the myriad problems with using the global computer models (GCMs) to get results. From the NASA press release linked at the top of the post, we have (emphasis mine):

Climate modelling predicts that human activities are causing the release of greenhouse gases and aerosols that are affecting Earth’s energy budget. Now, a NASA study has confirmed these predictions with direct observations for the first time: radiative forcings are increasing due to human actions, affecting the planet’s energy balance and ultimately causing climate change.

However, what they really mean by “direct observations” is that they are using direct observations as inputs to “radiative kernels”. Here’s the abstract to their study, emphasis mine:

ABSTRACT

Changes in atmospheric composition, such as increasing greenhouse gases, cause an initial radiative imbalance to the climate system, quantified as the instantaneous radiative forcing. This fundamental metric has not been directly observed globally and previous estimates have come from models. In part, this is because current space-based instruments cannot distinguish the instantaneous radiative forcing from the climate’s radiative response. We apply radiative kernels to satellite observations to disentangle these components and find all-sky instantaneous radiative forcing has increased 0.53±0.11 W/m2 from 2003 through 2018, accounting for positive trends in the total planetary radiative imbalance. This increase has been due to a combination of rising concentrations of well-mixed greenhouse gases and recent reductions in aerosol emissions. These results highlight distinct fingerprints of anthropogenic activity in Earth’s changing energy  budget, which we find observations can detect within 4 years.

And what are “radiative kernels” when they’re at home? They’re model-derived sensitivities that describe how the radiative flux changes in response to changes in things like temperature, water vapor, surface albedo, and clouds. The kernels themselves come out of radiative transfer models, and the instantaneous radiative forcing is whatever residual is left once those kernel-estimated responses are subtracted from the observed flux changes.
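To make that concrete, here’s a bare-bones sketch in Python of the kind of kernel bookkeeping the abstract describes. The kernel values and anomalies in it are made-up placeholder numbers, not the actual Kramer2021 kernels; the only point is to show that the “observed” forcing is a residual left over after model-derived sensitivities are multiplied by observed changes and subtracted from the observed flux change.

```python
# Hypothetical numbers only -- NOT the actual Kramer2021 kernels or anomalies.
# A radiative kernel is a model-derived sensitivity dR/dx (W/m2 per unit change
# in variable x) for each of the "climate response" variables.
kernels = {
    "temperature":    -3.2,   # W/m2 per K
    "water_vapor":     1.6,   # W/m2 per unit of normalized moisture anomaly
    "surface_albedo":  0.3,   # W/m2 per percent albedo change
    "clouds":          0.5,   # W/m2 per unit cloud-property anomaly
}

observed_anomalies = {        # observed changes over the period (illustrative)
    "temperature":     0.30,
    "water_vapor":     0.25,
    "surface_albedo":  0.10,
    "clouds":          0.05,
}

delta_R_observed = 0.8        # observed change in net TOA flux, W/m2 (illustrative)

# The kernel-estimated "radiative response" is the sum of kernel * anomaly ...
radiative_response = sum(kernels[v] * observed_anomalies[v] for v in kernels)

# ... and the "instantaneous radiative forcing" is whatever is left over.
irf_residual = delta_R_observed - radiative_response

print(f"kernel-estimated response: {radiative_response:+.2f} W/m2")
print(f"residual 'forcing':        {irf_residual:+.2f} W/m2")
```

Every error in the kernels or in the observed anomalies lands directly in that residual.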

And as a result, they can never be any more accurate than the underlying temperature, water vapor, surface albedo, and cloud etc. datasets …

Not only that, but to give an accurate result regarding human influence, the “radiation kernels” have to include all of the factors that go into the radiation balance. From Figure 2 above, we can see that these include the amount of solar radiation absorbed by the atmosphere (including the clouds), the sensible heat lost by the surface, the latent heat lost by the surface, and the longwave radiation emitted by the surface.

However, I find no indication that they have included all of the relevant variables.

And in any case, how accurately do we know those values? Not very well. Let me return to that question after we discuss the next problem.

The next problem with their study is that they seem totally unaware of the issue of long-term persistence (LTP). “Long-term persistence” in climate terms means that today’s climate variables (temperature, rainfall, pressure, etc.) are not totally different from yesterday’s, this year is somewhat similar to last year, and this decade is not unrelated to the previous decade. Long-term persistence is unmentioned in their study. It is characterized by something called the “Hurst Exponent”, whose value ranges from 0.0 to 1.0. Purely random numbers have a Hurst Exponent of 0.5, and the higher the exponent rises above 0.5, the stronger the long-term persistence.
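For the curious, here’s a bare-bones sketch of one common way to estimate the Hurst Exponent, the rescaled-range (R/S) method. It’s illustrative only; there are several estimation methods, and this is not necessarily the one behind the analysis shown in Figure 4 below.

```python
import numpy as np

def hurst_rs(x, min_chunk=8):
    """Rough rescaled-range (R/S) estimate of the Hurst Exponent.

    Illustrative only: R/S is just one of several estimators, and on
    short, noisy series they all carry substantial uncertainty.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    sizes, rs_vals = [], []
    size = min_chunk
    while size <= n // 2:
        chunk_rs = []
        for start in range(0, n - size + 1, size):
            chunk = x[start:start + size]
            dev = np.cumsum(chunk - chunk.mean())   # cumulative deviations from the chunk mean
            spread = dev.max() - dev.min()          # range of those deviations
            sd = chunk.std(ddof=1)                  # chunk standard deviation
            if sd > 0:
                chunk_rs.append(spread / sd)
        sizes.append(size)
        rs_vals.append(np.mean(chunk_rs))
        size *= 2
    # The Hurst Exponent estimate is the slope of log(R/S) versus log(chunk size)
    slope, _ = np.polyfit(np.log(sizes), np.log(rs_vals), 1)
    return slope

# Pure white noise should come out in the neighborhood of H = 0.5
# (R/S has a known upward bias on finite samples, so expect a bit above 0.5).
print(hurst_rs(np.random.default_rng(1).normal(size=2048)))
```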

And natural climate variables often show high long-term persistence.

What’s the problem with this? Well, the uncertainty in any statistical analysis goes down as the number of observations increases. The number of observations is usually denoted by capital N. If I throw a die (one of a pair of dice) four times (N=4) and I average the answer, I might get a mean (average) value of 4.2, or of 1.6. But if I throw the die ten thousand times (N=10,000), I’ll get something very near to the true average of 3.5. I just tried it on my computer, and with N=10,000, I got 3.4927.
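If you’d like to try that yourself, here’s one way to do it (a quick sketch in Python; the exact numbers will vary a little from run to run):

```python
import numpy as np

rng = np.random.default_rng()
print(rng.integers(1, 7, size=4).mean())       # N = 4: can land anywhere from 1 to 6
print(rng.integers(1, 7, size=10_000).mean())  # N = 10,000: very close to the true mean of 3.5
```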

The problem is that if a dataset has high long-term persistence, it acts like it has fewer observations than it actually has.

To deal with this, we can calculate an “Effective N” for a dataset. This is the number of observations that the dataset acts as though it has.
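The details of how I calculate it are in the post linked in the Technical Note at the end of this piece, but one standard form of the relationship, due to Koutsoyiannis, is simple enough to sketch here:

```python
def effective_n(n, hurst):
    """Equivalent sample size for a series with long-term persistence.

    Uses the Koutsoyiannis relationship n_eff = n**(2 * (1 - H)).
    H = 0.5 (no persistence) gives back n; as H approaches 1,
    n_eff collapses toward a single independent data point.
    """
    return n ** (2.0 * (1.0 - hurst))

print(effective_n(192, 0.50))   # no persistence: all 192 monthly points count
print(effective_n(192, 0.88))   # strong persistence: only about 3.5 effective points
```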

The general effect of long-term persistence is that it greatly increases the uncertainty of our results. For example, finding longer-term trends in a random normal dataset is unusual. But because of long-term persistence, as the saying goes, “Nature’s style is naturally trendy.” Longer-term trends in natural datasets are the rule, not the exception. As that linked article in Nature magazine says, “trend tests which fail to consider long-term persistence greatly overstate the statistical significance of observed trends when long-term persistence is present.”

So let’s take for example the CERES downwelling longwave dataset, the one that they say humans are affecting. It is indeed trending upwards. Looking at the period they studied, it increased by 1.1 W/m2, and they claim about half of that (0.53 W/m2) is from human actions.

And if we ignore long-term persistence, the “p-value” of that trend is 0.0003, which is very small. It means that if the data were nothing but independent random noise, a trend that large would almost never show up by chance. In other words, ignoring long-term persistence, the trend in that data is highly statistically significant.

But that’s calculated with the actual number of datapoints, N = 192. However, when we estimate the long-term persistence, we find that this particular dataset has a Hurst Exponent of 0.88, which is very high.

Figure 4. Hurst Exponent analysis of the 16-year CERES dataset used in the Kramer2021 study. The diagonal line is what we’d see if there were no long-term persistence.

This means that there is so much long-term persistence that the Effective N is only 3 data points … which in turn means that the apparent trend is not statistically significant at all. It may be nothing more than another of nature’s many natural trends.
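To see how much difference that makes, here’s a rough sketch of the kind of adjustment involved: inflate the standard error of the trend by the square root of N divided by Effective N, and re-test with the reduced degrees of freedom. The data below are made-up stand-ins, not the actual CERES series, and the adjustment is a simplification of the full procedure, but it shows why a p-value of 0.0003 at N = 192 can evaporate when the Effective N is only about 3.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Stand-in data: 192 "months" with a small trend plus noise.
# (Illustrative only, not the actual CERES downwelling LW series.)
t = np.arange(192)
y = 0.006 * t + rng.normal(scale=1.0, size=192)

slope, intercept, r, p_naive, se = stats.linregress(t, y)

# Crude persistence adjustment: inflate the slope's standard error by
# sqrt(N / N_eff) and re-test with N_eff - 2 degrees of freedom.
n, n_eff = 192, 3.5
se_adj = se * np.sqrt(n / n_eff)
t_stat = slope / se_adj
p_adj = 2 * stats.t.sf(abs(t_stat), df=max(n_eff - 2, 1))

print(f"naive p-value:                {p_naive:.4g}")
print(f"persistence-adjusted p-value: {p_adj:.4g}")
```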

To summarize the problems with the Kramer study:

• The way that they are isolating the human contribution is to measure every single other variable that affects the downwelling longwave radiation and subtract its effect from the total downwelling longwave radiation. The residual, presumably, is the human contribution. To do that, we’d need to measure every single variable that either adds energy to or removes energy from the atmosphere.

  • These include:
    • CO2
    • all other non-condensing greenhouse gases
    • water vapor
    • aerosols such as sulfur dioxide and black carbon
    • surface temperature
    • surface albedo
    • solar absorption/reflection by clouds
    • solar absorption/reflection by the atmosphere
    • solar absorption/reflection by aerosols
    • sensible heat loss from the surface
    • latent heat loss from the surface by evaporation and sublimation
    • sensible heat gain by the surface from the atmosphere
    • latent heat gain by the surface from dew
    • solar wind
    • long-term melting of glacial and sea ice
    • long-term changes in oceanic heat content
    • transfer of cold water from the atmosphere to the surface via snow, rain, and other forms of precipitation

I do not see evidence that all of these have been accounted for.

• The uncertainty in any and all of these measurements presumably adds “in quadrature”, meaning as the square root of the sum of their squares (see the sketch after this list). Their claim is that the total uncertainty of their result is about a tenth of a watt per square metre (±0.11 W/m2) … I’m sorry, but that is simply not credible. For example, even without accounting for long-term persistence, the uncertainty in the mean of the CERES 2003 – 2018 downwelling LW radiation data is ±0.08 W/m2, already more than half of that. And including long-term persistence, the uncertainty of the mean goes up to ±0.24 W/m2, more than twice their claimed uncertainty.

• And it’s not just that longwave radiation dataset, that’s only one of the many uncertainties involved. Uncertainties are increased in all of the datasets by the existence of long-term persistence. For example, using standard statistics, the uncertainty in the mean of the atmospheric absorption of solar energy is ±0.02 W/m2. But when we adjust for long-term persistence, the uncertainty of the mean of the absorption is twice that, ±0.04 W/m2, which alone is a third of the claimed uncertainty of their “human contribution”, which is said to be 0.53 ± 0.11 W/m2.

• They are claiming that they can measure the human contribution to the nearest hundredth of a W/m2, which is far beyond either the accuracy of the instruments or the uncertainty of the measurements involved. And they claim that they can measure human influence as being about 0.15% of the total downwelling longwave … which means that all of their underlying calculations must be even more accurate than that.
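To put rough numbers on the quadrature point made above, here’s a toy calculation. Only the first two component uncertainties are figures quoted earlier in this post; the rest are placeholders I’ve invented purely for illustration. Even so, a handful of realistic components combined in quadrature dwarfs the claimed ±0.11 W/m2.

```python
import math

# Component uncertainties in W/m2. The first two are the values discussed
# above; the others are invented placeholders standing in for the many
# other datasets a kernel-based residual depends on.
component_uncertainty = {
    "downwelling LW mean, with LTP":          0.24,
    "atmospheric solar absorption, with LTP": 0.04,
    "surface albedo (placeholder)":           0.05,
    "cloud properties (placeholder)":         0.10,
    "water vapor (placeholder)":              0.08,
}

# Independent uncertainties add in quadrature: the square root of the sum of squares.
total = math.sqrt(sum(u ** 2 for u in component_uncertainty.values()))
print(f"combined uncertainty: +/- {total:.2f} W/m2   (claimed: +/- 0.11 W/m2)")
```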

Let me close by saying that I DO think that human-generated increases in CO2 alter the energy balance. That much seems reasonable based on known physics.

However, I don’t think changes in CO2 alter the temperature, because the changes are very small and more importantly, they are counteracted by a host of emergent climate phenomena which act to keep the temperature within narrow bounds. In other words, I think that the authors of Kramer2021 are correct in principle (humans are increasing the downwelling LW radiation by a small amount), but I think that they are very far from substantiating that claim by their chosen method.

Not only that, but the change in downwelling LW radiation from increasing CO2 is trivially small, even over the long term.

Figure 5. Using the IPCC figures of an increase of 3.5 W/m2 for each doubling of CO2, the yellow/black line shows the increase in total downwelling radiation (longwave + shortwave) since the year 1700 due to increasing CO2. See here for details on the data used.

As you can see, over the last three full centuries the theoretical increase in downwelling radiation from CO2 is not even four-tenths of one percent of the total.
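For anyone who wants to check the arithmetic behind Figure 5, here’s a quick sketch using the standard logarithmic forcing relationship. The 3.5 W/m2 per doubling is the figure from the caption; the CO2 concentrations and the roughly 500 W/m2 total downwelling radiation are round numbers I’ve assumed for illustration, so treat the output as ballpark only.

```python
import math

# Assumed round numbers for illustration (not taken from the paper):
co2_1700 = 280.0               # ppm, approximate concentration circa 1700
co2_now = 410.0                # ppm, approximate recent concentration
forcing_per_doubling = 3.5     # W/m2 per doubling of CO2 (the figure used in Figure 5)
total_downwelling = 510.0      # W/m2, rough longwave + shortwave total at the surface

delta_f = forcing_per_doubling * math.log2(co2_now / co2_1700)
print(f"theoretical CO2 forcing since ~1700: {delta_f:.2f} W/m2")
print(f"as a fraction of total downwelling:  {delta_f / total_downwelling:.2%}")
```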

Now, when I analyze a system, my method is to divide the significant variables into three groups (there’s a tiny code sketch of the thresholds after the list below).

  • Categories of Variables
    • First order variables: these cause variations in the measurement of interest which are greater than 10%. If the measurement of interest is instantaneous downwelling radiation (LW + SW), this would include say day/night solar variation, or the formation of tropical cumulus fields.
    • Second order variables: these cause variations in the measurement of interest which are between 1% and 10%. If the measurement of interest is instantaneous downwelling radiation (LW + SW), this would include say nighttime clouds.
    • Third order variables: these cause variations in the measurement of interest which are less than 1%. If the measurement of interest is long-term changes in downwelling radiation (LW + SW), this would include say incremental changes in CO2.
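As a minimal illustration of those thresholds, here is the classification expressed as a tiny function; the percentage cutoffs are simply the ones given in the list above.

```python
def variable_order(effect_percent):
    """Classify a variable by the size of the variation it causes
    in the measurement of interest, expressed as a percentage."""
    if effect_percent > 10:
        return "first order"
    if effect_percent >= 1:
        return "second order"
    return "third order"

print(variable_order(50))    # e.g. day/night solar variation -> first order
print(variable_order(5))     # e.g. nighttime clouds          -> second order
print(variable_order(0.4))   # e.g. incremental CO2 changes   -> third order
```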

In general, I’ve found that third-order variables can be ignored in all but the most detailed of analyses …

TL;DR Version? They claim far greater accuracy and far smaller uncertainty than they can demonstrate.


Here on the hill, I spent most of my day cleaning up and fixing up my old Peavey Classic amplifier, using windex, a dish scrubby sponge, a wire wheel on my grinder to clean the rust off the corner protectors, and Rustoleum Wipe-New to restore the black finish … and then using the amp to do further damage to my eardrums and the general peace of the house.

I’d been wondering why it was hissing so badly, and then duh, I found out that somewhere along the line the ground prong on the plug had broken off. So I cut off and replaced the plug, and it’s good as new.

Keep the music flowing, dear friends. …

w.

Technical Note: I describe the method I use to determine “Effective N” in a post called “A Way To Calculate Effective N”. It turns out that I had independently discovered a method previously found by the brilliant Greek hydrologist Demetris Koutsoyiannis, whose work is always worth reading.

My Usual Note: To avoid the misunderstandings that bedevil the intarwebs, when you comment please quote the exact words you are discussing. This allows us all to understand just who and what you are referring to.



141 Comments
Gary Ashe
March 27, 2021 4:38 pm

Averaging an intensive quantity like solar radiation from a 12-hour period over a 24-hour period is as worthless and stupid as averaging testicles and tits over a whole population, where everybody has one tit and one bollock.

It’s the very basic foundation of the radiative greenhouse effect con: a cold sun.

Drake
Reply to  Gary Ashe
March 27, 2021 7:54 pm

Everyone, on average, has a little less than one tit (mammary gland) and a little more than one bollock, since there are slightly more men than women. The ratio is apparently 63 men to 62 women. Really not even a rounding error, but rounding errors are the data on which the AGW folly is built. WUWT is a scientific site. We must maintain the integrity of the science.

Reply to  Gary Ashe
March 29, 2021 8:46 am

Magnificent – I’ll remember that analogy!

March 27, 2021 5:15 pm

Nicely done, Willis. Thank you. I had been hoping to see something from you on this. I think I will express NASA’s concluding imbalance as 530 +/- 110 milliwatts per square meter, to draw attention to how tiny this quantity is in the global scheme of things, and how little sense it makes to state it so precisely.

March 27, 2021 7:05 pm

[[The paper, sadly paywalled, is entitled “Observational evidence of increasing global radiative forcing” by Kramer et al., hereinafter Kramer2021. It claims that from 2003 through 2018, human actions increased the downwelling longwave infrared radiation from the atmosphere by 0.53 ± 0.11 watts per square meter (W/m2).]]

When is this sick U.N. IPCC hoax going to die?

First, only the Sun heats the Earth’s surface. The atmosphere only cools it. It also acts as a blanket, slowing cooling, but it can never raise the surface temperature higher than the Sun did, any more than a real blanket can raise your body temperature higher than your metabolism did. This even applies to an Obama blanket with an IR-reflective surface.

Second, the IPCC picked the wrong getaway driver for their planned trillion-dollar heist. Atmospheric CO2’s only radiation absorption/emission wavelength of 15 microns has a Planck radiation temperature of -80C, colder than dry ice, which can’t melt an ice cube. CO2 allows all of Earth’s real surface radiation in the range of -50C to +50C to pass by untouched. Even if it returned 100% of the -80C radiation to the surface, it would have no effect because it couldn’t raise the temperature higher than -80C.

Take my free Climate Science 101 course and learn to laugh off all of the beehive of IPCC lies:

http://www.historyscoper.com/climatescience101.html

Drake
March 27, 2021 7:37 pm

Willis,

Are the “whiskers” in Figure 3 in proportion to the partial graph shown? The left axis does not start at 0.

March 27, 2021 7:38 pm

They are backing out their fractions of a few watts for use as “direct observations”…
How quaint.

It was such a success when Trenberth and others did it with Earth’s energy budget.
Claim to be omniscient regarding knowing everything involved and their exact quantities to “back out”.
Then a little sleight of hand, sleight of mind to enable their legerdemain sleight of numbers…

Voila! A new number they can claim that man causes global warming.

Willis effectively dissects Kramer’s follies in detail, and with them, NASA’s fraud.
Thank you, Willis!

Jeff Alberts
March 27, 2021 8:55 pm

So, the REALLY BIG question here is, why is a NASA study paywalled? Didn’t US taxpayers already pay for it?

Herbert
March 27, 2021 9:07 pm

Willis,
Thanks for another excellent presentation.
On your Figure 2, Dr. Roy Spencer explains the position succinctly on the Earth’s energy budget in one of his recent books, “Global Warming Skepticism for Busy People”:
“Many scientists claim the diagnosis of the cause of global warming is obvious and can be found in basic physical principles. If basic physical principles can explain all of the global-average warming, as the climate consensus claims, then how do we account for the following?
All of the accumulated warming of the climate system since the 1950s, including the deep oceans, was caused by a global energy imbalance of 1 part in 600; yet modern science does not know, with a precision approaching 1 part in 100, ANY of the natural inflows in and out of the climate system. It is simply assumed that the tiny energy imbalance – and thus warming – was caused by humans.”

March 27, 2021 9:35 pm

Really excellent, Willis.

It is scientific insanity to suppose they know all those other variables to an accuracy that allows extraction of a 0.53 W/m^2 perturbation.

The 0.53 W/m^2 is also very convenient, because it’s exactly what one calculates from the average annual increase in forcing from CO2, over 2003-2018 = 15 years.

The annual average increase in forcing calculated from CO2 emissions is 0.035 W/m^2/yr. Times 15 = 0.53 W/m^2. Dead right on what the consensusistas would want to see. Imagine that.

Also problematic is that the TOA radiative balance isn’t known to better than ±4 W/m^2. And yet, somehow, they can detect a shift in radiative balance 7.6 times smaller than the uncertainty.

It must be that ol’ consensus modeler magic of taking the anomaly and having all the error just subtract away.

Reply to  Pat Frank
March 28, 2021 7:19 am

What’s the old saying, “Precise but not accurate”?

Coach Springer
March 28, 2021 5:47 am

Perhaps we’ll know when we’ve reached the “correct” balance point when everyone stops whining. /s

Mark Pawelek
March 28, 2021 6:01 am

Someone’s paywall does not work. I found it using the usual method. Google search for PDF + DOI number. The 2nd link.

Roger Taguchi
March 28, 2021 6:04 am

You “calculated” the Hurst exponent. Which method did you use? R/S, DFA, periodogram, aggregated variances, local Whittle, or wavelet analysis?

Are you aware all of these are estimators, and not hard calculations?

Roger Taguchi
Reply to  Willis Eschenbach
March 28, 2021 11:48 am

“We have heuristic methods to estimate it, but they are just estimations based on experience, without theoretical underpinnings.”

The exponent is an “estimator.” You forgot to include the error bounds of said estimator.

Using heuristics in a statistical argument is like using tree rings.

Weekly_rise
March 28, 2021 6:48 am

This means that there is so much long-term persistence that the Effective N is only 3 data points … which in turn means that the apparent trend is not statistically significant at all. It may be nothing more than another of nature’s many natural trends.

This passage makes it sound like your application of the Hurst exponent is simply not useful for evaluating trends in the TOA IRF, since you would need to come at the problem with an understanding of the underlying physics, which we already have.

Mark Pawelek
March 28, 2021 7:02 am

Their justification for using models (radiative kernels) will be that everything’s already a model anyway! For example, when we read a thermometer, our optic system is already a kind of model. Make it an electronic thermometer and we have 3+ things in the way: the sensor, the electronics, and our optic system. Make it a satellite, and there are at least 4 models in the way of direct experience. In fact, there’s no direct experience! They think they’re clever when they use this kind of argument, ignoring that it goes all the way back, at least, to Plato’s shadows reflected on the cave wall. It’s another way to argue that everyone has their ‘own truth’. It elides the fact that some models are validated and others are not. It confuses their own activists, and it confuses the person who originates the idea. They are forever eliding and avoiding validation and falsification attempts. Where’s the validation of radiative kernels, oh, and the falsification criteria?

Roger Taguchi
March 28, 2021 10:16 am

What is interesting about the results of this study is how well they agree with other observational studies of CO2-caused downwelling radiation: http://asl.umbc.edu/pub/chepplew/journals/nature14240_v519_Feldman_CO2.pdf

Roger Taguchi
Reply to  Willis Eschenbach
March 28, 2021 11:51 am

Autocorrelation is not at issue with spectral emissions of CO2. The agreement is really good considering that “all sky” has the nasty effect of H2O as a greenhouse gas, so one would expect a slightly higher number.

And in fact they are both measuring downwelling IR.

Roger Taguchi
Reply to  Willis Eschenbach
March 28, 2021 12:54 pm

Both measurements agree well; in fact, they overlap at 0.42 W/m2 per 16 years inside their error bounds.

Next, there is no physical way downwelling IR in 2010 affects it in 2011. If you disagree, please tell me where in quantum physics a photon emitted in 2010 can impact a photon emitted in 2011. Using your heuristic estimator, the Hurst exponent would be exactly equal to 0.5.

You say: “Next, autocorrelation definitely affects all time series statistics.” This is false in the case of H = 0.5.

Roger Taguchi
Reply to  Roger Taguchi
March 28, 2021 1:03 pm

Lastly, your statement: “It also affects the ‘SEM’, the standard error of the mean of any sequential series of measurements.”

This is also blatantly false. If I have a set of 1000 rulers, and use each one to measure the width of my left front tire, the SEM is unaffected by autocorrelation because H = 0.5.

Roger Taguchi
Reply to  Willis Eschenbach
March 28, 2021 4:56 pm

“that kind of bullshit nitpicking”

And that, Mr. Eschenbach, is why you are incapable of submitting an acceptable paper to reputable journals. You are not interested in the nitpicking DETAILS that make science what it is. More proof that your “amateur” status can’t be overcome.

Roger Taguchi
Reply to  Roger Taguchi
March 28, 2021 5:00 pm

Pity your inflated ego gets in the way when your errors in logic and methodology get pointed out. Playing fast and loose with the words “any” and “all” in logic is a serious problem you have. Not to mention that in addition to you being an amateur in science, you are a neophyte with respect to statistics.

Roger Taguchi
Reply to  Roger Taguchi
March 28, 2021 5:12 pm

If CP/M is a computer language, then so is Windows. Have you written any “Windows” programs?

Richard G.
March 28, 2021 4:42 pm

Another humdinger Willis.
“Not only that, but to give an accurate result regarding human influence, the “radiation kernels” have to include all of the factors that go into the radiation balance. From Figure 2 above, we can see that these include the amount of solar radiation absorbed by the atmosphere (including the clouds), the sensible heat lost by the surface, the latent heat lost by the surface, and the longwave radiation emitted by the surface.”
You include a nice list of variables, to which I suggest adding a kernel to address the absorption and conversion of solar radiation into high-energy chemical bonds by plant photosynthesis of polysaccharides and lignins, which have no sensible heat value. I have never seen this addressed. Dependent variables would include human land-use changes, irrigation, precipitation distribution changes, and the CO2 fertilizer effect.
Rock on.

robertok06
March 30, 2021 1:06 am

Hello:

At one point you conclude that…

They are claiming that they can measure the human contribution to the nearest hundredth of a W/m2, which is far beyond either the accuracy of the instruments or the uncertainty of the measurements involved. 

… but they do not claim to have that accuracy; there’s a sentence in the “supporting information” document that says:

“While CERES has well documented uncertainty in the magnitude of the TOA radiative flux measurements, our work to estimate the IRF is conducted in anomaly space, where uncertainty in absolute fluxes is irrelevant.”

So, again, they use “anomalies” and not real absolute values, so that they can deceive the readers.
Preposterous conclusions at best.

robertok06
March 30, 2021 1:11 am

Hello again, Dr. Eschenbach:

I forgot one link, possibly interesting, same authors:

https://ceres.larc.nasa.gov/documents/STM/2020-09/14_CERESstm2020_Kramer.pdf

Editor
March 30, 2021 4:03 am

Great post Willis and a good lesson in autocorrelation.

March 30, 2021 4:50 am

“Certainty Laundering”

Take very uncertain data and use it to make certain-sounding claims by intentionally misleading.

A must for this kind of “science” is observational data that is not very accurate, since that presents an opportunity to produce papers like the one cited by W.
