Guest Post by Willis Eschenbach
I have put forth the idea for some time now that one of the main climate thermoregulatory mechanisms is a temperature-controlled sharp increase in albedo in the tropical regions. I have explained that this occurs in a stepwise fashion when cumulus clouds first emerge, and that the albedo is further increased when some of the cumulus clouds evolve into thunderstorms.
I’ve demonstrated this with actual observations in a couple of ways. I first showed it by means of average photographs of the “view from the sun” here. I’ve also shown it occurring on a daily basis in the TAO data. So I thought I should look in the CERES data for evidence of this putative phenomenon, whereby the albedo actively controls the thermal input to the climate system.
Mostly, this thermoregulation appears to be happening over the ocean. And I generally dislike averages, so I avoid them when I can. So … I had the idea of making a scatterplot of the total amount of reflected solar energy versus the sea surface temperature, on a gridcell-by-gridcell basis. No averaging required. I thought, well, if I’m correct, I should see in the scatterplots the increased reflection of solar energy required by my hypothesis. Figure 1 shows those results for four individual months in one meteorological year. (The year-to-year variations are surprisingly small, so these months are quite representative.)
Figure 1. Scatterplots showing the relationship between sea surface temperature (horizontal axis, in °C) and total energy reflected by each gridcell (in terawatts). I have used this measurement in preference to watts/square metre because each point on the scatterplot represents a different area. This approach effectively area-averages the data. Colors indicate latitude of the gridcell. Light gray is south pole, shading to black at the equator. Blue is north pole, shading to red at the equator.
So … what are we looking at here, and what does it mean?
This analysis uses a one-degree by one-degree gridcell size, so each month of data contains 180 rows (latitude) by 360 columns (longitude). Each point in each graph above is one gridcell. That’s 64,800 data points in each of the graphs. Each point is located on the horizontal axis by its temperature, and on the vertical axis by the total energy reflected from that gridcell.
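If you want to reproduce that vertical axis, the conversion from the reflected shortwave flux (in watts per square metre) to terawatts per gridcell is just a multiplication by each cell’s area. Here is a minimal sketch; the array names and the random placeholder fields are purely illustrative, not my processing code:

```python
import numpy as np

R_EARTH = 6.371e6  # mean Earth radius in metres

def gridcell_area_m2(lat_centers_deg, dlat=1.0, dlon=1.0):
    """Area of a dlat x dlon degree cell centred at each latitude (spherical Earth)."""
    lat_lo = np.radians(lat_centers_deg - dlat / 2.0)
    lat_hi = np.radians(lat_centers_deg + dlat / 2.0)
    return R_EARTH**2 * np.radians(dlon) * (np.sin(lat_hi) - np.sin(lat_lo))

# Placeholder 1x1 degree monthly fields (illustrative only):
#   refl_sw : reflected shortwave flux in W/m^2, shape (180, 360)
#   sst     : sea surface temperature in deg C, same shape
rng = np.random.default_rng(0)
refl_sw = rng.uniform(20.0, 200.0, (180, 360))
sst = rng.uniform(-2.0, 31.0, (180, 360))

lat = np.arange(-89.5, 90.0, 1.0)              # cell-centre latitudes, 180 values
area = gridcell_area_m2(lat)[:, None]          # shape (180, 1), broadcasts over longitude

refl_tw = refl_sw * area / 1e12                # total reflected energy per cell, terawatts

# One point per gridcell, as in Figure 1:
# import matplotlib.pyplot as plt
# plt.scatter(sst.ravel(), refl_tw.ravel(), s=1)
```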
The main feature I want to highlight is what happens when the ocean gets warm. From about 20°C to maybe 26°C, the amount of solar energy reflected by the system is generally dropping. You can see it most clearly in Figure 1’s March and September panels. But from about 26°C up to the general oceanic maximum of just above 30°C, the amount of solar energy that is reflected goes through the roof. Reflected energy more than doubles in that short interval.
Note that as the ocean warms, the total energy being reflected first drops, and then reverses direction and increases. This will tend to keep ocean temperatures constant—decreasing reflections allow more energy in. But only up to a certain temperature. Above that temperature, the system rapidly increases the amount reflected to cut down any further warming.
Overall, I’d say that this is some of the strongest evidence that my proposed thermoregulatory system exists. Not only does it exist, but it appears to be a main mechanism governing the total amount of energy that enters the climate system.
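To see why a reflection response shaped like the one in Figure 1 would act as a governor, here is a toy relaxation sketch. Every number in it, including the linear cooling term, is invented purely for illustration; it is a cartoon of the mechanism, not the CERES analysis. The equilibrium temperature barely moves even when the toy forcing is varied by nearly 30%.

```python
import numpy as np

def reflected_fraction(sst):
    """Toy albedo response: a gentle decline up to ~26 C, then a sharp rise.
    The shape is only loosely patterned on Figure 1; every number is made up."""
    base = 0.30 - 0.004 * (sst - 20.0)
    surge = 0.35 / (1.0 + np.exp(-(sst - 28.0) / 0.7))
    return base + surge

def equilibrium_sst(solar, sst=25.0, k=0.02, steps=5000):
    """Relax a toy energy balance: absorbed solar versus a linear cooling term."""
    for _ in range(steps):
        absorbed = solar * (1.0 - reflected_fraction(sst))
        cooling = 50.0 + 5.0 * sst            # illustrative outgoing term
        sst += k * (absorbed - cooling)
    return sst

# A ~27% swing in the toy forcing moves the equilibrium by only about 1 C,
# because the reflection surge clamps the temperature near the threshold.
for solar in (330, 360, 390, 420):
    print(f"solar {solar} W/m^2 -> equilibrium SST ~ {equilibrium_sst(solar):.1f} C")
```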
It’s very late … my best regards to everyone, hasta luego …
w.
[UPDATE] A commenter asked that I show the northern and southern hemispheres separately. Here is the Southern Hemisphere.
And the Northern. The vertical lines are at 30.75°C; there’s nothing magical about that number, I just wanted a fixed reference to see the temperature shift over the year, and that worked.
@RC Saumarez at 3:44 pm
Now simulate the process. Then decimate the data so we have “observations” at monthly intervals.
No.
Simulate the sampling process so that only one observation is made each day at the same time each day. 30 days are averaged into a month.
The Nyquist damage is done by the one sample per day at a constant time.
We’ve strobed the wagon wheel into what appears to be a dead stop.
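Here’s a minimal sketch of that strobing in Python: a made-up afternoon cloud burst, sampled once a day at a fixed local time, with 30 such samples averaged into a “month”. Nothing in it is the real CERES sampling geometry; it just shows how a fixed-time sample freezes the diurnal cycle.

```python
import numpy as np

rng = np.random.default_rng(0)

# One year of a toy reflection signal at 15-minute resolution: a baseline plus an
# afternoon convective burst whose strength varies from day to day (all invented).
steps_per_day = 96
frac = np.tile(np.arange(steps_per_day) / steps_per_day, 365)
burst = np.exp(-((frac - 0.58) ** 2) / 0.005)
truth = 100.0 + 80.0 * burst * (1.0 + 0.3 * rng.standard_normal(frac.size))

# The quantity we would like: the mean over the full diurnal cycle for 30 days.
full_month_mean = truth[: 30 * steps_per_day].mean()

# One sample per day at a fixed local time, 30 samples averaged into a "month".
for local_hour in (6.0, 10.5, 13.5, 18.0):
    idx = np.arange(30) * steps_per_day + int(local_hour * steps_per_day / 24)
    print(f"sampled daily at {local_hour:4.1f} h -> 'monthly mean' = {truth[idx].mean():6.1f}"
          f"   (full-diurnal mean = {full_month_mean:6.1f})")
```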
Stephen, there are several problems.
SST is a separate dataset (I presume the same one Willis used in previous articles, though it is not explicitly stated here). SST itself is a massive integrator, so sampling is less of an issue, but it suffers the same aliasing issues as just about everything in climate science.
RC Saumarez did a very good article demonstrating how aliasing can cause false long-term trends in SST, and showed that HadSST displays signs of aliasing in the frequency domain.
Variations in cloud, if they are not sampled at the right time of day, get missed totally. The clouds have been, had their radiative effect, and gone.
Willis has already detailed in earlier articles that they are not randomly spread through the day; they arrive in late morning. Further, he is suggesting a mechanism where the time they appear is the key climate feedback.
Now a sun-sync orbit will always sample a given cell at the same local time. That means one or two cells may get sampled at about the right time, but still only once; the rest miss the event. There is still some information, since the cells retain the storm activity (not individual storms) into late afternoon. Some cells get sampled before it all happens, some in late afternoon, in a sliding pattern.
It is obvious there is plenty of scope for creating false signal.
Willis calls out “show me what it would look like in my plot”. The trouble is, unless you know what the correctly sampled data looks like, you cannot predict what aliasing effects will be produced. This is why the problem is so important: it irretrievably corrupts the data.
However, let me try to help. In the last plot in Willis’ article we see the June and Sept NH plots. I had already noticed some odd repetitive banding, looking rather like an animal’s rib cage, in the middle of these plots.
I think it very unlikely that this is a physical phenomenon of the feedback relationship; it is almost certainly a sampling issue related to the time of day.
SH March shows a similar oddity but in a different temperature range, nearer the high end. This looks like a clear ‘wagon-wheel’ and is due to repetitive patterns in the data being incorrectly sampled. Irregular patterns will suffer the same sampling issues but will not produce nice identifiable patterns that we can apply external logic to.
I do not think the strong uptick is likely to be a sampling artefact, but in the presence of such problems with the data we cannot be sure it is not. That possibility needs to be examined.
It does, however, put into doubt some of the other interesting tendencies I had noted in comments above.
The whole idea of monthly averages is anathema to good data processing yet is ubiquitous in climatology. It is equivalent to passing a 30 day running mean filter (which itself introduces huge artefacts) then sub-sampling at every 30th point. WRONG. You need to use a clean filter at 60 days before re-sampling at 30.
It may be instructive to look into the rib-cage pattern, for example in the NH Sept plot.
It seems to be at temperate latitudes. Is it one ocean basin? Does each band represent a day or a week, or is it a time of day?
Since there are at least five clearly definable bands, we should be able to walk this back by breaking the data down into subsets, spatial and temporal, to find out what is causing it. We may then find that each line collapses to a small cluster and is a pure artefact of the sun-sync orbital cycle, or that it reflects an underlying phenomenon that has been badly sampled.
In either case it would be a good example of one of the oddities that aliasing can produce.
This is very analogous to the wagon-wheel turning backwards.
Greg
It is equivalent to passing a 30 day running mean filter
I don’t think this is true. I could be wrong. But as far as I know they use discrete windows (window length = calendar month/fixed length). The method involved in producing monthly averages is akin to upscaling rather than, as you suggest, running a low-pass filter. Where this type of thing is required one generally uses a Butterworth filter, which by design removes higher-frequency components rather than augmenting the entire signal (yes, I do accept there will be a slight phase-shift with such an approach, and yes, there are strategies to deal with this).
We seem to be getting a lot of comments from people outside the Earth sciences here. There are very challenging experimental issues to deal with and there have been many, many methodologies developed to deal with them. A large part of geophysics and geology is dedicated just to these issues.
Look for upscaling in the Earth science literature.
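For what it’s worth, here is a minimal sketch of the filter-then-decimate approach being discussed, using scipy’s Butterworth design. The cutoff, order, zero-phase filtering, and toy data are all illustrative choices, not a prescription.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(1)

# Ten years of toy daily data: a slow trend, a 45-day oscillation, and noise.
t = np.arange(3650)
x = 0.001 * t + np.sin(2 * np.pi * t / 45.0) + 0.5 * rng.standard_normal(t.size)

decim = 30  # decimate to roughly monthly spacing

# (a) the usual block "monthly" mean
block = x[: t.size // decim * decim].reshape(-1, decim).mean(axis=1)

# (b) Butterworth low-pass with the cutoff at a 60-day period (the new Nyquist
#     after decimating to 30-day spacing), run forwards and backwards to avoid
#     phase shift, then simple decimation.
b, a = signal.butter(4, (1.0 / 60.0) / 0.5)   # Wn is relative to the old Nyquist (0.5/day)
smooth = signal.filtfilt(b, a, x)
clean = smooth[::decim]

print("block means :", np.round(block[:5], 2))
print("filtered    :", np.round(clean[:5], 2))
```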
@Stephen Fisher Rasey
Wrong.
2 stages:
1) simulate the process to achieve unaliased model data.
2) Use this data to simulate the actual sampling process.
“…there have been many, many methodologies developed to deal with them.”
Oh, there is a huge wealth of techniques used in a range of science and engineering fields. They just seem to get ignored. Climatology data processing often seems completely naive, apart from some stats ideas imported from econometrics. Another field of study that seems to have little success with prediction 😉
Monthly average: It is equivalent to passing a 30 day running mean filter
I’m not saying they do this as two separate steps; they have not realised they need to filter before re-sampling.
My point is that taking a simple monthly average to decimate the data is _mathematically_ identical to doing a running mean, then picking every 30th value, i.e. it is effectively identical to using a filter with a poor frequency response and the wrong period.
The basic error here is that averaging is an effective means to reduce truly random, Gaussian-distributed noise. It is not a valid method in the presence of periodic or structured variation.
This may reflect a misconceived assumption that climate is AGW + noise!
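A quick numerical check of that equivalence, under the simplifying assumption of fixed 30-day months and toy daily data:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(3600)       # 120 artificial 30-day "months" of daily data

# The usual "monthly" mean (fixed 30-day months for simplicity)
block = x.reshape(-1, 30).mean(axis=1)

# A 30-point running mean, keeping every 30th output aligned with the blocks
running = np.convolve(x, np.ones(30) / 30.0, mode="valid")   # running[i] = mean(x[i:i+30])
decimated = running[::30]

print(np.allclose(block, decimated))   # True: block averaging == running mean + decimation
```

The argument is then about the boxcar’s sinc-shaped frequency response: its first null sits at a 30-day period, while the new Nyquist after keeping every 30th value is at 60 days, so periods between 30 and 60 days are only partially attenuated and those are exactly the ones that alias.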
RC Saumarez
Seems like a sound idea. But what happens if you can’t practically achieve step 2? Should you abandon such a study altogether?
Greg
It is not a valid method in the presence of periodic or structured variation.
Is this true? If your signal is locally stationary then it is quite reasonable to take an average (e.g. average daily temperature) in order to compare periods of time by any type of measurement. This is a type of upscaling; fractal-type models are used just about everywhere today in image re-sampling. They work on the assumption that a continuous phenomenon can be discretised (which involves upscaling rules – averaging could be one such rule) into ever coarser grids, all the while preserving the character of the signal at that scale => semi-continuity. You’re probably quite happy to accept it in the fields of astronomy say but not here; it happens everywhere, and yes it probably does produce artifacts, but as long as they are acknowledged then so what?
BTW Greg, I do accept that if your discrete window is out of phase with the signal then you will lose relative information – but again, you’ve got to start somewhere.
” You’re probably quite happy to accept it in the fields of astronomy say but not here”
I’d love to have the chance to see. So far all I see is “anomalies” instead of filtering, averages instead of proper re-sampling, running-mean distortion, and bloody straight-line regression on everything that is neither straight nor linear.
Maybe some of the complex techniques you cite could be applied, but they won’t help if the data’s already been screwed by improper processing. Someone should probably try what you suggest, but not before insisting that any and all processing is done correctly.
Roy Spencer’s commented on this article:
http://www.drroyspencer.com/2013/10/citizen-scientist-willis-and-the-cloud-radiative-effect/
@CD.
I think the key is to ANALYSE what is happening. We all have to accept that our measurements have limitations. The question is what effect do those limitations have on the conclusions we are trying to draw.
For example, consider aliasing. If we have no a-priori knowledge of the system, an aliased signal is an aliased signal. If, on the other hand, we KNOW that the highest frequency in the system is only a small amount above the Nyquist, we know which components of the signal are likely to be degraded, because there is a small degree of spectral overlap near the Nyquist, and we can filter these out to get reliable low-frequency information. This approach presupposes a model of the system.
I would say that we have a model of the system we are investigating and we want to make some deductions about that model using measurements. We can simulate the model and then simulate the effects of the data acquisition chain on the measurements that we can make practically. Given these simulated measurements, we can then determine the likely errors in determining whatever parameter of the model we are interested in. If we can tie down the statistics of the process, we can use Monte-Carlo methods to get the error distribution. Generally this shows the limits of SCALE that we can hope to measure successfully.
I have found this approach very useful in the past; it is really just glorified error analysis. In my discipline, biomedical engineering, I’ve often found that what I hoped I could measure with some precision has large errors, and this affects the experimental approach.
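As a bare-bones illustration of the procedure, here is a sketch with a made-up diurnal model, a single timed sample per day as the acquisition chain, and additive instrument noise. The bias and spread it prints are purely properties of the toy setup.

```python
import numpy as np

rng = np.random.default_rng(3)

def model_day(n=96):
    """Toy 'true' diurnal reflection curve at 15-minute resolution (invented)."""
    frac = np.arange(n) / n
    return 100.0 + 80.0 * np.exp(-((frac - 0.58) ** 2) / 0.005)

def acquire(day, sample_hour, noise_sd):
    """Simulate the acquisition chain: one timed sample per day plus instrument noise."""
    idx = int(sample_hour * day.size / 24.0) % day.size
    return day[idx] + rng.normal(0.0, noise_sd)

target = model_day().mean()            # the quantity we are actually after

errors = []
for _ in range(2000):                  # Monte-Carlo trials, one simulated "month" each
    samples = []
    for _d in range(30):
        day = model_day() * (1.0 + 0.2 * rng.standard_normal())   # day-to-day variability
        samples.append(acquire(day, sample_hour=13.5, noise_sd=5.0))
    errors.append(np.mean(samples) - target)

errors = np.asarray(errors)
print(f"systematic bias = {errors.mean():6.1f}, spread (sd) = {errors.std():5.1f}")
```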
Greg
You’re probably right on all counts. But I think one must also acknowledge that there is sometimes, in the realms of experimentation, a disconnect between best practice and what is practically possible – surely pragmatism should prevail. As for bad stats and inappropriate post-processing, it’s pretty much endemic.
Thanks for the link.
Yeah, sometimes you have to make do but you don’t have to make worse. 😉
RC, you’re way ahead of most of us on this, and I’m not challenging you on any of the points you make. And, although it mightn’t seem like it, I appreciate all of them.
My own experience of error, and by extension uncertainty, is that most people get completely obsessed with this and very often in the wrong parts of the work-flow; in fact a significant amount of my work deals with modelling uncertainty. I’d say, in this instance, error associated with the apparatus is likely to be the most significant issue here and yet no one has mentioned it.
If I’ve demangled the address OK, this is the Ramanathan et al. 1989 paper that Spencer refers to:
http://88.167.97.19/albums/files/TMTisFree/Documents/Climate/The_radiative_forcing_due_to_clouds_and_water_vapor_FCMTheRadiativeForcingDuetoCloudsandWaterVapor.pdf
CD.
If your job is modelling uncertainty, you probably know a lot more about this than I do. I agree that instrument error is an issue that needs to be taken into account.
RC, the problems all started from your claim on this thread:
RC Saumarez says:
October 6, 2013 at 4:32 am
When I asked you to “put up or shut up” regarding that claim, you’ve replied as follows …
RC Saumarez says:
October 7, 2013 at 1:11 pm
RC Saumarez says:
October 7, 2013 at 3:44 pm
So … “several times” you’ve demonstrated that aliasing is a problem in this or another thread of mine? As always, you’re long on claims but short on citations. Where and when have you actually shown (not claimed but shown) that my analyses have suffered from aliasing?
In any case, so far you’ve told us nothing except that you are not referring to this post. You are referring to earlier posts … except that you’ve made the same claims regarding my work here
So, sorry for my lack of clarity. What I meant was, it’s time to put up or shut up about this post. I am using the CERES and Reynolds information directly. You know, directly, as in, take it from the sources and show a scatterplot.
Now, you keep claiming that somehow aliasing comes into play in that process, viz:
and
So here’s your big chance, RC, to show that you are right. What you need to do is to show, not claim but show, that:
a) the CERES data is aliased, and that
b) this somehow makes my scatterplot wrong.
Please do not bother posting anything containing the words “suggests” or “implies”, or that you are “concerned” or “worried”, or about “potential problems”, or that you “suspect” this or that. I have been informed by you, time after time, that you are concerned about aliasing, and that you suspect it is doing bad things to my analysis … sorry, that doesn’t impress me in the slightest.
Because as far as I know, all we have are your fatuous claims—not once have you actually shown that aliasing was an issue in my work. And in particular, in this thread you have given us nothing but your inchoate fears.
So how about it, RC? Look at the head post, and point out to us where the aliasing is actually a problem, and tell us how you know it’s a problem there and not elsewhere. You know, with links and data and all that sciencey stuff you love to skip over in favor of informing us, endlessly, that you suspect that there might possibly be something that could raise a concern about a potential problem in some unspecified part of my work. Somewhere.
w.
RE: Stephen Rasey 10/7 9:22 pm
Simulate the sampling process so that only one observation is made each day at the same time each day. 30 days are averaged into a month.
…The Nyquist damage is done by the one sample per day at a constant time.
@RC Saumarez 10/8 4:02 am
… @Stephen Fisher Rasey
Wrong. … 2 stages:
1) simulate the process to achieve unaliased model data.
2) Use this data to simulate the actual sampling process.
We are saying the same thing, aren’t we?
@Willis Eschenbach
You have produced 3 science posts in the last 3 days relating to feedback and its components.
You keep telling me to put up or shut up, which I am unable to do because the medium allows you to advance an argument using graphs and pasted-in maths, but does not allow me to do the same.
I note that you have also told another commenter on this blog who does Earth science signal processing for a living that he doesn’t know what he is talking about.
I suggest that you do the little exercise that I set you. It is a variant of an exercise that I used when I taught DSP to undergraduate and MSc students. They thought it was informative, and it should satisfy your intellectual curiosity.
Then the answer will be staring you straight in the eyes and you will have learnt something.
@Willis Eschenbach 9:44 am
Willis, the aliasing is not in your use of the CERES data, it is in the CERES data itself.
How many times per day does CERES measure an equatorial grid cell?
I think it is once. And that is generous. What was vertical on one pass is on the horizon on the next pass.
Sun-synchronous orbit parameters (daylight side; the same goes for night passes):
http://en.wikipedia.org/wiki/Terra_(satellite)
Perigee (h) = 705. km
Apogee = 725. km
Inclination = 98.1991 °
Period = t = 98.8 min
Earth Radius = r = 6350. km
Distance to Horizon = r/(r+h)*Sqrt(2rh + h^2)
Distance to Horizon at Perigee = 2767. km
Distance around Equator = c = 39898. km
speed of rotation of Earth at equator = Ve = 27.7 km/min
Distance between passes = Ve*t = 2737 km
Width of 1×1 grid cell at equator = 111 km
Grid Cells between passes = 24.7 cells
Grid Cells between 45 deg oblique on each pass = 12.7
So every day, only half of the grid cells near the equator are within the 45-degree oblique view of the Terra (CERES) satellite. These fortunate cells are sampled only once that day. You have to be up near 60N and 60S before you get full coverage of the cells within 45 deg of vertical.
I still do not know the time of the day of the pass.
Would we have any idea of the Global Average Temperature if we measured half the thermometers only at 11:58 am and 11:58 pm each day? It would be an interesting dataset. It might even make some sense. But it isn’t the min and max. It isn’t the hourly measured average. It is something else.
I think I know where Trenberth’s Missing Heat is…. It is lost in the Nyquist aliasing of the CERES measurement system. He should be thankful that only 0.7 W/m^2 is unaccounted for. The rest is lost between the frames of the “stopped” wagon wheel.
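The arithmetic is easy to reproduce; here’s a quick sketch using the same input values (the 12.7-cell figure for the 45-degree oblique swath is taken as given rather than re-derived):

```python
import math

h = 705.0     # perigee altitude, km
T = 98.8      # orbital period, minutes
r = 6350.0    # Earth radius as used above, km

c = 2.0 * math.pi * r                                     # distance around the equator, km
ve = c / (24.0 * 60.0)                                    # equatorial rotation speed, km/min
horizon = r / (r + h) * math.sqrt(2.0 * r * h + h * h)    # ground distance to horizon, km
pass_spacing = ve * T                                     # equator shift between passes, km
cell_width = c / 360.0                                    # 1-degree cell width at equator, km

print(f"equator circumference     = {c:8.0f} km")
print(f"rotation speed            = {ve:8.1f} km/min")
print(f"distance to horizon       = {horizon:8.0f} km")
print(f"spacing between passes    = {pass_spacing:8.0f} km")
print(f"grid cells between passes = {pass_spacing / cell_width:6.1f}")
print(f"orbits per day            = {24.0 * 60.0 / T:6.1f}")
```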
Stephen says: “Would we have any idea of the Global Average Temperature if we measured half the thermometers only at 11:58 am and 11:58 pm each day?”
Then you measure 10N at 10:30, 20N at 10 am … the poles at 6 am and 6 pm … and see whether anything odd happens to GMST.
That kind of orbit is adequate for things that change slowly from day to day, but starts to be a problem when you have fast-moving changes. Exactly the sort of fast-moving changes Willis is suggesting controls tropical climate.
I pointed out what looks like an obvious wagon-wheel aliasing defect in the data.
Willis apparently finds it more interesting to tell RC to shut up than to address the issue he wants him to ‘put up’ on, when it is someone other than RC who points out what he wants to be pointed out.
Grid Cells between passes = 24.7 cells
Grid Cells between 45 deg oblique on each pass = 12.7
I think this flight path has 360/24.7=14.6 orbits per day sampling down one side and up the other. It seems designed to get full coverage (at least one reading per day) at the equator.
This is quite common for polar sun-sync orbital paths. At polar latitudes it gives 6 or 7 readings per day, but with a 2-degree black spot at the geographic pole.
If this was done for ERBE it appears to imply that they assumed no regular diurnal pattern in cloud and that variations were normally distributed and would all average out.
Where have I seen that before?