Guest essay by Johannes Herbst
There is a much-discussed graph in the blogosphere from ‘Tamino’ (Grant Foster), which aims to prove that there is no delay, pause, or decline in global warming.
He states: Twelve of sixteen were hotter than expected even according to the still-warming prediction, and all sixteen were above the no-warming prediction:
Let’s get a larger picture:
- We see the red HADCRUT4 graph, dipping a bit from 1960 to 1975, rising more steeply from 1975 to about 2000, with a slight drop over roughly the last 10 years.
- We see a blue trend, rising at the alarming rate of 0.4°C within only one decade! This was the time when some scientists started to worry about global warming.
- We see the green trend, used by the blogger Tamino in the first graphic, rising less than 0.1°C per decade.
- Below we see the sunspot numbers, pulsing with a period of about 11 years. Comparing them with the red temperature graph, we see the same 11-year pulsing pattern – clear evidence that temperature is linked to sunspot activity.
Tamino started his trend at high sun activity and ended it at low activity; hence the weak increase over the 18 years.
Which leads us to the question: how long a period do we need to observe climate change? If we look at the sunspot activity and the clear pattern it produces in the temperature graph, the answer is: 11 years, or a multiple of it.
Or, to eliminate the pattern of sunspot numbers, we can measure from any point of:
- high sun activity to one of the following highs,
- low sun activity to one of the following lows,
- rising sun activity to one of the following rises, or
- declining sun activity to one of the following declines.
A quick synthetic check of why this works is sketched below.
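To see why an integer number of cycles matters, here is a minimal synthetic sketch in Python (the 0.015°C/yr trend, the 0.1°C cycle amplitude, and the dates are made up for illustration; nothing is fitted to HADCRUT4). A steady warming plus an 11-year wiggle is fitted once over an 18-year high-to-low window and once over exactly two full cycles:

```python
import numpy as np

t = np.arange(1950, 2014, 1 / 12)                      # monthly time axis (illustrative)
true_trend = 0.015                                     # K/yr, made-up underlying warming
solar = 0.1 * np.sin(2 * np.pi * (t - 1988.25) / 11)   # mock 11-yr imprint, maximum ~1991
temp = true_trend * (t - 1950) + solar

def ols_slope(x, y):
    """Ordinary least-squares slope of y against x, in K/yr."""
    x = x - x.mean()
    return np.sum(x * (y - y.mean())) / np.sum(x * x)

for start, end in [(1991, 2009),   # 18 yr: starts at a cycle high, ends near a low
                   (1991, 2013)]:  # 22 yr: exactly two full cycles, high to high
    m = (t >= start) & (t < end)
    print(f"{start}-{end}: fitted trend = {ols_slope(t[m], temp[m]):.4f} K/yr")
# The high-to-low window understates the true 0.0150 K/yr by roughly a quarter;
# the two-full-cycle window recovers it almost exactly.
```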
Let’s try it out:
The last observation point of the trend lies between 2003 and 2014, at about 2008. But even here we can see that the trend has changed.
We do not know about the future. A downward trend seems possible, but a sharp rise is predicted by some others, which would destroy our musings so far.
Just being curious: How would the graph look with satellite data? Let’s check RSS.
Really interesting. The top of both graphs appears to be at 2003 or 2004. HADCRUT4 shows a decline of about 0.05°C per decade, RSS about 0.1°C per decade.
A simple way to smooth a curve
There is a simpler way to average out patterns (like the influence of sunspots). I added a 132-month (11-year) running average. This means that at every point of the graph, all neighboring data (5.5 years to the left and 5.5 years to the right) are averaged. It also means that the smoothed graph stops 5.5 years from the beginning and the end. And voilà: the curve is the same as with the method from the previous post of measuring at the same slope of a pattern.
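In code, the 132-month centered average might look like this (a minimal sketch of the method; I have not checked exactly how WFT implements its smoother):

```python
import numpy as np

def centered_running_mean(y, window=132):
    """Centered moving average over `window` samples (132 months = 11 years).
    The half-window at each end stays NaN, which is why the smoothed curve
    stops 5.5 years short of the beginning and the end. (With an even window
    the center falls between two months; close enough for a sketch.)"""
    y = np.asarray(y, dtype=float)
    out = np.full(len(y), np.nan)
    half = window // 2
    smoothed = np.convolve(y, np.ones(window) / window, mode="valid")
    out[half:half + len(smoothed)] = smoothed
    return out
```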
As I said before, the top of the curve is at about 2003, and our last observation point of an 11-year pattern is 2008. From 2003 to 2008 is only 5 years. This downtrend, even averaged, is somehow too short for a long-term forecast. But anyway, the sharp acceleration of the 1975–2000 period has stopped and the warming has even halted – for the moment.
Note: I gave the running-average graph (pale lilac) an offset of 0.2°C to get it out of the tangle of trend lines.
Had Tamino smoothed out the 11-year solar influence on the temperature graph before plotting the trend, as done here at WFT, his green trend would have the same incline as the blue 33-year trend:
Even smoother
Having learned how to double- and triple-smooth a curve, I tried it on this graph as well:
We learned from Judith Curry’s blog that at the top of a singly smoothed curve a trough appears. So the dent at 2004 seems to be the center of the 132-month smoothed wave. I double-smoothed the curve and arrived at 2004 as well, now with the dent eliminated.
Note: Each smoothing pass cuts away half of the smoothing span at each end of the graph, so with every smoothing the curve gets shorter. But even the data no longer visible are already included in the visible curve.
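Continuing the sketch above, double smoothing is just a second pass over the surviving points (`temps` is a synthetic stand-in for monthly anomalies, purely illustrative):

```python
# Synthetic stand-in: a slow rise plus an 11-year (132-month) wiggle.
months = np.arange(1968)                       # ~164 years of monthly data
temps = 0.005 * months / 12 + 0.1 * np.sin(2 * np.pi * months / 132)

once = centered_running_mean(temps, 132)       # single smooth: a dent can remain
inner = ~np.isnan(once)                        # drop the NaN ends before pass two
twice = np.full(len(once), np.nan)
twice[inner] = centered_running_mean(once[inner], 132)
# Each pass eats another half-window (5.5 years) at each end of the curve,
# but those months still contribute to the surviving smoothed values.
```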
According to the data, after removing all the “noise” (especially the 11-year sun activity cycle), 2004 was the very top of the 60-year sine wave, and we have now been progressing downwards for 10 years.
If you are not aware of the 60-year cycle: I have simply used HADCRUT4 and smoothed out the 11-year sunspot activity, which influences the temperature in a significant way.
We can clearly see the tops and bottoms of the wave at about 1880, 1910, 1940, 1970, and 2000. If this pattern repeats, then we will have 20 more years going down – more or less steep. About ten years of the 30-year down slope are already gone.
One more pattern
There is also a double bump visible on the downward slopes, of about 10 years up and 10 years down. By looking closer you will see a hint of it even on the upward slope. If we are now at the beginning of the downward slope – which could last 30 years – we could experience these bumps as well.
Going back further
Unfortunately we have no global temperature records before 1850. But we have one from a single station in Germany: the Hohenpeissenberg in Bavaria, influenced neither by ocean winds nor by towns.
http://commons.wikimedia.org/wiki/File:Temperaturreihe_Hoher_Pei%C3%9Fenberg.PNG
Sure, it’s only a single station, but the measurements were continuous, with no pause, and we can get some idea by looking at the whole picture – not in terms of 100% perfection, but just to see the trends. The global climate surely had its influence here as well.
What we see is a short upward trend of about ten years, a downward slope of about 1°C over 100 years, an upward trend for another 100 years, and about 10 years going slightly down. It looks like a roughly 200-year wave. We can’t see far beyond either end of the curve, but if this pattern is repeating, it would mean we are now on the downward slope – possibly for the next hundred years, if nothing else is at work.
Greg Goodman’s article about running-mean smoothers can be read here:
Data corruption by running mean ‘smoothers’
==================================
Johannes Herbst writes at: http://klimawandler.blogspot.de/





Greg says:
February 8, 2014 at 4:57 pm
It is well within the accuracy of the data and the extraction method.
OK, if that floats your boat. More disturbing is the adding rather than multiplying the two variations. Why add?
lsvalgaard says:
February 8, 2014 at 4:10 pm
“The only way to be sure the code works is to expose it to scrutiny by other programmers.”
The only way to be sure code works is to write it in such a way that it CAN be exposed to other programmers!
Done a few code reviews in my time.
Just to remind people how complicated the simple dance between the Moon and Earth is (without considering the Sun).
http://i29.photobucket.com/albums/c274/richardlinsleyhood/GravitationtidalcyclesfromWoodetal_zps27a493b4.gif
W., the Univ. NSW link explains the modulation and the envelope and gives plots to help visualise it. The maths is standard trig identities.
http://www.trans4mind.com/personal_development/mathematics/trigonometry/sumProductCosSin.htm
I explain the difference between modulation, superposition and beats in the text. The splitting of frequencies is basic AM radio stuff too.
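For reference, these are the two textbook cases being argued over (my notation; standard identities, nothing new). Superposition of two nearby frequencies regroups into a carrier multiplied by a beat envelope:

\[
\cos(2\pi f_1 t) + \cos(2\pi f_2 t) = 2\cos\!\left(2\pi\,\tfrac{f_1+f_2}{2}\,t\right)\cos\!\left(2\pi\,\tfrac{f_1-f_2}{2}\,t\right)
\]

while true amplitude modulation of a carrier splits into sidebands at the sum and difference frequencies, the AM radio case:

\[
\big(1 + m\cos(2\pi f_m t)\big)\cos(2\pi f_c t) = \cos(2\pi f_c t) + \tfrac{m}{2}\cos\!\big(2\pi(f_c+f_m)t\big) + \tfrac{m}{2}\cos\!\big(2\pi(f_c-f_m)t\big)
\]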
lsvalgaard seems to agree about the maths; he’s just questioning which case should be applied. I’ll try to get onto that next.
rgbatduke says:
February 8, 2014 at 4:16 pm
“Tides are not gravity per se.”
Picky. Picky. Tides are the effect of the gravitational fields that other bodies exert on the Earth, its water and atmosphere.
There is more to it than just vertical movement as well. There is a horizontal vector (tangential to the surface, if you will) that may well be of importance, coming as it does at between 45 and 60 degrees to the orbital plane.
http://i29.photobucket.com/albums/c274/richardlinsleyhood/Tidalgravityfield_zpsab34f0f0.png
I believe that this lateral component is often overlooked.
And where are the choke points for North-South ocean flows in the Northern Hemisphere? Where do the Polar and Ferrel Cells meet?
If we were talking about the bodies being directly overhead then it would be easy. But they are at quite an angle to the Equator, so all is not that simple.
Gareth Phillips says:
February 7, 2014 at 8:29 am
When has “The Climate” NOT been “Changing”? Never. Did Humans ’cause’ that? If Humans didn’t cause the millennia of ongoing variability, how can they possibly hope to stop it?
Thinking that Human CO2 emissions significantly impact climate is akin to thinking that Human dandruff is the source of the Rivers’ Silt.
Willis: OK, I knew the detection accuracy was not high and there was not a perfect match, so I just did a quick calculation. Let’s use some more accurate figures, combining the two periods as a harmonic mean:
2 / (1/(18.631/2) + 1/8.852591) = 2 / (1/9.3155 + 1/8.852591) => 9.078
So I shall now refer to this as 9.08 years.
Add or modulate ?
The 8.852591 y cycle is the precession, i.e. it’s the orientation, not the amplitude variation. There is a roughly 6-monthly variation in the amplitude of the perigee/apogee difference. That could be taken as modulating the other effect. If there were an amplitude variation of 8.85 y, I would agree about multiplying the two.
The apside cycle is a separate effect from the combination of declination angle and solar tide that leads to 18.626 and hence 9.315 years, which is why I was using superposition.
There does not seem to be enough resolution in the data to separate the two peaks. Here’s a wider scale plot of the same spectrum (NB peak annotations are approximate).
http://climategrog.wordpress.com/?attachment_id=754
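For a rough sense of why the peaks merge (my back-of-envelope figures, assuming about 160 years of usable record): the Fourier resolution in frequency is about 1/T, which at period p corresponds to a width in period of

\[
\Delta p \approx \frac{p^2}{T} \approx \frac{(9\ \mathrm{yr})^2}{160\ \mathrm{yr}} \approx 0.5\ \mathrm{yr},
\]

about twice the 9.315 − 9.078 ≈ 0.24 yr separation of the two candidate periods.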
Boat floating.
You’ve picked on 9.15 which I said was “typical”. That is based on Scafetta, BEST and a variety of plots I’ve done. It was a rough value for a broad discussion answering Richard’s question on the possible origin of 60 years. It was not intended as a precisely reported result.
The peak I found in cross-correlation of N.Atl and N.Pac was 9.05.
The detected peak could be precisely 9.08 to within the expected accuracy. Some small poorly resolved noise peak close by would be enough to shift the centre of the measured peak. Even if the result was kind enough to be closer to 9.08 it would not be any more certain with this same data and method. This is just exploration but that’s near enough to merit a closer look.
Greg says:
February 8, 2014 at 6:40 pm
There is a roughly 6 monthly variation in the amplitude of the perigee/apogee difference. That could be taken as modulating the other effect. If there was an amplitude variation of 8.85 y I would agree about multiplying the two.
Earlier you were harping on the 40% difference in amplitude, so one may be excused for assuming that you meant amplitude.
The apside cycle is a separate effect
But it would seem to be controlling the amplitude so amplitude modulation would seem to be natural. May I suggest that your reason for not adopting that physical view is simply that it does not give you the peak you want.
You’ve picked on 9.15
I simply got your 9.15 from your plot: http://climategrog.wordpress.com/?attachment_id=121
Perhaps I should have said 9.14767, but in keeping with the normal practice of not quoting more decimals than are justified, I rounded to 9.15. Perhaps that number is too rough, and all we can say is that it may be some number in the 8.9 to 9.4 range.
The real problem may simply be that you have not identified any plausible mechanism other than something that ‘could’ be, ‘might be’, etc happening. This is apart from all the other arguments about the response time being much longer than the solar cycle.
Greg says:
February 8, 2014 at 6:40 pm
This is just exploration but that’s near enough to merit a closer look.
is a far cry from the certainty you displayed in:
Greg Goodman says:
February 8, 2014 at 2:04 am
FWIW , here is the spectrum of HadCruft4 that I did some time ago.
http://climategrog.wordpress.com/?attachment_id=121
Main peak is lunar not SSN
lsvalgaard says: “I simply got your 9.15 from your plot: http://climategrog.wordpress.com/?attachment_id=121”
OK, since you did not say what you were referring to, I assumed that you had got that figure from the most recent comments in this discussion. This thread started about HadCruft4, so that’s fair enough, but a bit more clarity would help avoid talking at cross purposes.
I have already posted a link to my article that notes how Hadley processing distorts the circa 9 year peak. In view of the BEST paper it may be the land component that boosts it back up. I also said why I don’t think a hybrid dataset is appropriate for calorimetry. That is why I prefer to use ICOADS SST for spectral investigation.
I did the NH cross-correlation to take an alternative look at the BEST result that did a similar thing with AMO and PDO. Since PDO is a derivative index that in part itself includes Atlantic SST, I think my use of ex-tropical N.Pac SST is probably more reliable and avoids the risk of inducing a false correlation.
I also regard it as more reliable than using over processed data like hadSST and crufTEM4 mixed into a land-sea hybrid that involves different heat capacity and other confounding factors.
It does, however, seem clear that there is a strong peak around 9.1 years ± 0.1, which IIRC was Scafetta’s result for lunar influence derived from JPL ephemeris data. That was an empirical result without a mechanism, but it clearly links this period to lunar influence.
The harmonic mean may well explain the origin:
2 / (1/(18.631/2) + 1/8.852591) = 2 / (1/9.3155 + 1/8.852591) => 9.078
Detecting a notable peak at 9.05 in Atlantic-Pacific NH SST cross-correlation links this period to the surface temperature record. Hence my conclusion that this is a lunar signal not solar.
“This is apart from all the other arguments about the response time being much longer than the solar cycle.”
I did not say it _was_ much longer. I simply noted that your use of power as a “shorthand” for its time integral, energy, was only applicable IF the system equilibrates to changes in radiative forcing in a time much shorter than a solar cycle, and that without establishing that, expecting a direct correlation of surface temperature and SSN is erroneous.
Unless I missed something you seem to have dropped that discussion.
That has little to do with bulk displacement of heat energy by long term tides.
lsvalgaard says: “Earlier you were harping on the 40% difference in amplitude, so one may be excused for assuming that you meant amplitude.
>> The apside cycle is a separate effect
But it would seem to be controlling the amplitude so amplitude modulation would seem to be natural. May I suggest that your reason for not adopting that physical view is simply that it does not give you the peak you want.”
If you can avoid trolling with language like “harping on”, you are indeed ‘excused’. That is why I provided more information about the nature of the anomalistic cycle, so that you would be better informed. Your incorrect statement that this should be modulation was reasonable if you thought the 8.85 y was the 40% variation.
The point about 40% _monthly_ variation due to eccentricity is that it is a large effect and its _orientation_ is thus relevant.
>> “May I suggest that your reason for not adopting that physical view is simply that it does not give you the peak you want.”
I don’t ‘adopt’ it because I do not see an obvious reason why the _orientation_ would modulate the lunar declination–solar alignment. In the absence of any reason to suggest modulation, I have to assume it’s additive. I agreed it would make sense for the circa 6-month variation in amplitude to cause modulation.
I am not “wanting” a peak , I am looking for a physical explanation for an observation.
Observe, analyse , explain.
Clearly the next step is to look for a direct physical mechanism and try to understand the geographical phase relationships. The Atlantic-Pacific cross-correlation was part of that process.
Way up thread Richard made this remark
http://wattsupwiththat.com/2014/02/07/proper-cherry-picking/#comment-1561235
The levels, as Richard observed, were a swampy hideout for Alfred the Great, and many of the village place names have a historical significance pointing to their background, such as ‘Great island’.
The Dutch were largely responsible for draining the land several hundred years ago.
Over the last decade parts of the Somerset levels have been deliberately neglected by a policy of reduced dredging and ditching and curtailed general maintenance and pumping. This was a quite deliberate policy from the head of the EA, responding to environmental pressures from the Government and the EU.
It was known several years ago that this policy would cause problems, and the chickens have come home to roost. I doubt that the levels would have been flooded to anything like the extent we see today if action had been taken, although some parts will always be vulnerable and cannot reasonably always be protected.
tonyb
Greg Goodman says:
February 9, 2014 at 1:59 am
I simply noted that your use of power as a “shorthand” for its time integral, energy
Since energy = power * (area * time), the power is a convenient shorthand for the energy received each second and is what is normally used [e.g. http://lasp.colorado.edu/home/sorce/ ]. I have not seen papers that discuss the climate in terms of the time integral of the power. And this is my point: the original post claims that there are pulses of solar activity matching pulses of temperature. So you are just back to nitpicking.
Greg Goodman says:
February 9, 2014 at 2:35 am
In the absence of any reason to suggest modulation I have to assume it’s additive
In the absence of any reason to suggest it is additive, one has to assume it is multiplicative. You see, you have not connected the dots as to how the two effects might cooperate.
Recognised tables of tidal periods usually run to 18.6 years, I think. The direct input of tidal energy dissipated through frictional losses is small, as you correctly point out. However, if there is inter-annual to decadal-scale bulk displacement of water, this could cause more energy to be captured from Mr Sun:
Sure, it could. Although I’ve heard more plausible arguments suggesting that atmospheric tides can modulate the tropospheric height, hence the DALR, hence the GHE. Either way, MORE plausible is still far from convincing. It’s just like sunspots/solar activity. There are tantalizing hints of correlation, if you pick the interval to look at and ignore the places where a mental linear model doesn’t work. OTOH, if you look at all of the data and try to establish a simple linear one-parameter model, the effect is far from compelling.
None of which rules out nonlinear multivariate or differential effects, but looking for that degree of complexity is difficult without some specific model to motivate it and nobody seems to discuss them much in climate science outside of throwing a boatload of more or less linear stuff into a GCM and letting it stir the pot. Maybe if one puts solar activity on one axis, the ENSO index on another axis, your tidal activity voodoo on yet another axis, and then plots the time derivative of GASTA on still another axis, it loops around in a beautiful coherent Poincare cycle trajectory that slowly varies in a predictable way with CO_2 concentration. Unless and until somebody goes positively crazy with cone-head quantities of data, some truly excellent analytic tools, and nothing but time on their hands, we may never know. Especially when “coherence” might emerge from phenomena that in projection look like ill-correlated noise only when one gets up to ten dimensions. Or twenty. We just don’t know.
IMO a lot of the climate data probably could be understood in terms of local attractors and/or Hurst-Kolmogorov statistics — locally stable climate (that is, macroscopic and decadal in scope) global states where the climate whirls around in some sort of demented set of epicycles — diurnal epicycles, annual epicycles, solar cycle epicycles, hell, why not, tidal epicycles, multidecadal oscillation epicycles that are at once a semistable response to all of this cyclic time evolution with strong feedbacks and themselves epicyclic factors in the dynamics. Because the climate oscillates (not coherently, but as in is bound to around some sort of comparatively stationary set point) we know there are strong negative feedbacks that push it back towards the set point — if it gets “too warm” for the set point it cools, if it gets “too cool” for the set point it warms. The climate certainly is not an undirected random walk or we would cook or freeze in short order.
But the climate also clearly has a kind of “inertia” — once too much heating establishes a cooling mode, the cooling mode persists long enough to overshoot the set point, once it gets too cool and establishes a heating mode it lasts long enough to overshoot the other way. Sometimes — because these modes are themselves multivariate and subject to the whirling epicyclic evolution of their many components. Instead of getting a clean oscillation with a particularly clear fourier signal (outside of the obvious diurnal and seasonal signals for the purely periodic drivers) one gets chaos.
Underneath all of this, the locally stable climate states — the climate “attractors” as it were — are not really stable at all. They are only stable on a scale of decades to centuries, and may well be jinking around on even shorter time scales. If I understand the idea behind CAGW, it is that the steady addition of CO_2 is supposed to systematically tilt the “forces” that act on these attractors to make the set points move ever further in the warmer direction, and that as it does this tilting water vapor will chime in to amplify the tilt instead of doing the exact opposite — push it back — which is what one would expect purely on the basis of the overall semi-stability of the space of attractor set points visible in the climate record at various scales.
But sadly, we do not know anything at all about the factors and epicycles that determine the dynamics of the attractors themselves — we just pretend that we do when we build a simple linear model without even understanding the nature of the underlying macroscopic dynamics.
I’ll end my diatribe du jour with an analogy. My mentor in the general field of complex systems, Dr. Richard Palmer (who was once on the shortest list for the directorship of the Santa Fe institute before a series of strokes tragically affected his career) had an adage which was very useful to understand the motivation for the study of complex systems: More is Different.
This is clearly visible when one attempts to move from the microscale to the macroscale in physics. Microscopically everything is linear, reversible, time symmetric, deterministic (the latter in quantum mechanics only if one considers complete/closed systems, and irrelevant in chaotic classical mechanics where one cannot sufficiently precisely specify an initial state and hence true but irrelevant in both). Consideration of the fundamental interactions leads one to an understanding of how e.g. quarks combine to form nucleons, how nucleons combine to form more or less stable nuclei, how electrons bind to nuclei to form atoms, how atoms — now a set with its own rules for stability and dynamics that are in no way obviously linked to the “bare” laws of the symmetry broken field — combine to form molecules, how molecules — now with their own laws that make up an entire standalone science, “chemistry” — develop enough complexity that chemistry itself fractures into subsets each with its own UNIQUE set of rules plus rules for more general interactions, organic chemistry for molecules with carbon, the chemistry of ceramics, metals, semiconductors, acids, bases, how chemistry (apparently) self-organizes organic chemistry into organic BIO-chemistry where proteins and amino acids transform into self-replicating factories, are shaped by evolution (a whole new selection paradigm that is in no way relatable to e.g. electrodynamic theory) into life. Life has its own rules, and those rules look nothing at all like the rules of physics, chemistry, organic chemistry. In other directions, gravity assembles atoms and molecules into larger bodies, heating them as they fall together, until some of the largest bodies “ignite” with the strong nuclear force of fusion and become enormously complex objects forging the heavier atoms (the picture above was logical, not temporally ordered) and scattering them in supernovae to be re-forged into smaller bodies that follow complex orbits around the larger bodies and develop surface interfaces with sufficient chemistry to permit the growth of complexity with its own rules leading to life. Stars have numerous rules of their own, rules that can in some sense be related to understood physics and chemistry but nevertheless rules for macroscopic structures that are “discrete” — not really thought of in terms of the individual rules and motions of their enormous numbers of constituent parts.
In physics, one generally knows better than to try to understand Shakespeare by considering the unified field applied to the 10 to the thirty-something elementary particles (at least) that make up the brain, not including the massless particles of the field itself that have no meaningful count. Or one should. The physics of the dozens of layers of meta-structure between the unified field and Shakespeare simply cannot be traced quantitatively through the layers except by developing rules for the many intermediate layers by a mix of observation and induction. That is, we do well to understand the physics of each transition from one layer to the next, and perhaps to be able to do at least some computations in both worlds to give ourselves comfort that our hierarchical decomposition is sound. Hence it is lovely to be able to use quantum chemistry to solve for the chemical laws that govern at least simple molecules and to help us understand particular features of the internal structure and dynamics of more complex molecules even though we know that it would be silly to try to solve a Schrodinger equation for the dynamics of a neuron, or for hemoglobin in an electrolyte water-based fluid. It’s nice to be able to derive the ideal gas law and the van der Waals gas law from comparatively simple principles even though real gases often do not fit either one perfectly, especially near phase transitions or when lots of chemical stuff is going on. We end up with a decent bridge of verified consistency from the microscale where things are simple and comparatively easy to understand and compute (well, ok, maybe not THAT easy) to the semantic content of the Gettysburg Address, with only a few “gaps” that are difficult to bridge or at best still rather speculative.
Consider, then, what we are trying to do in climate science. Consider, in fact, GCMs — an effort to compute what the climate will do based on a meso-scale microdynamic model. By that I mean that we take the planet and chop its surface and atmosphere and some depth of the ocean into chunks, establish what we hope are accurate physics-based rules for the time evolution of each chunk based on its own state, its interaction with the surrounding chunks, and with various external forces (or “forcings” in context). The rules of this sort of game are actually known to us from experience in other contexts.
We do not really know the dynamics of the quasiparticles of the climate system. By this I mean that the climate system and its drivers have numerous named structures, both specific large scale structures and generic small scale structures, that have sufficient spatiotemporal coherence that we can observe them and give them names: Thunderstorms, tornadoes, hurricanes, droughts, high pressure centers, low pressure centers, waves, ridges, Hadley cells, the jet stream, troposphere, stratosphere, the Gulf stream, the global thermohaline circulation, ENSO, monsoon, cumulus clouds, Santa Ana winds, the Sun, solar cycles, elliptical orbits. All of these objects mutually conspire in one fundamental process — the transport of nuclear energy released in the heart of the sun 100,000 years ago (or so) to deep space through the tiny solid angle subtended by the planet Earth. That’s it. Energy from the sun arrives on Earth, hangs out for a while, and then continues on its merry way to infinity and beyond, to entropic oblivion. Our entire climate and life itself is nothing but self-organized structure involved in dissipating energy in an open system.
I’m a quasiparticle in the climate system. So are you. The entire climate debate is about quasiparticles in the climate system inventing new high level rules for quasiparticle activity that might or might not impact the transfer efficiency of the entire system (or consideration of the possibility that quasiparticle dynamics to date has already affected it and will continue to affect it).
In simulations of complex systems, if the resolution of the simulation is not sufficient to represent the important quasiparticles of the system, the simulations often fail, even simulations of far simpler systems than the climate. We can easily understand why, of course. You cannot describe a laser in terms of blackbody radiation — the latter is all completely understandable and physically correct in context, but it makes certain assumptions in its averaging that are, in fact, not well represented by all kinds of optical phenomena, especially things like lasers that are monochromatic, coherent, and not even vaguely “thermal”.
Understanding the evolution of a fluid system in closed container with convection from conduction with at most one convective roll, through a state with numerous stable convective rolls (determined non-uniquely by boundary conditions, the degree of forcing, and non-Markovian time evolution from various microscopic nucleations), to a fully turbulent state by solving microscopic Navier-Stokes equations will not work if one restricts the granularity of the spatial decomposition to be commensurate with the sizes of the intermediate convective rolls. It will not deterministically tell you what a real system will do even if it is remarkably fine grained — the specific patterns of convective rolls that emerge as being stable depend in some detail on how the system is nucleated, how the structures emerge from the initial INTERNAL motions of the system, and in numerous critical regions the apparent “stability” of the quasiparticle structure is, well, not. Stable. Right up to where turbulence and true chaos emerge, where the quasiparticles of one domain break down and are replaced by turbulent rolls, by eddies on all length scales jostling and bouncing off of one another, forming and dissipating as they help carry energy from one place to another in the service of Entropy.
GCMs use cells that are not uniform in area — because the writers of the GCMs are (I have to suppose) too lazy to implement a rescalable unbiased e.g. icosahedral tiling of the sphere, they use the “easy” latitude/longitude decomposition with cells determined as integer numbers of degrees in a coordinate system with a horrendous polar-singular Jacobian. This undersamples the equator and oversamples the high latitudes (and sometimes they heuristically correct for this, a bit). It means that cells are nearly square near the equator and wedge shaped near the poles. IIRC, there are few or no GCMs with resolution less than 1 degree, so equatorial cells are order of 70 miles to the side or larger. The motivation for using a lat/long grid is that rectilinear cells are easy to loop over and write nearest neighbor dynamics for. This presumes that one can correct for the cell distortions due to the Jacobian in a PDE solution based on a rectangular grid more easily than one can develop rescalable icosahedral routines for solving partial differential equations on a non-rectangular grid. GCMs by their nature cannot, therefore, resolve quasiparticle structures less than several grid cells in size — to do a centered circulation for example would require at least 9 cells in a 3×3 grid — a central “defect” cell and eight bordering cells such that the integral of the “wind velocity” assigned to the cell around the loop of cells is nonzero. That is a unit order of 200 miles square in the tropics and 200 miles not so square elsewhere.
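To put rough numbers on that distortion (a toy calculation of spherical cell areas, not taken from any GCM code):

```python
import math

R = 6371.0  # mean Earth radius, km

def cell_area_km2(lat_deg, dlat=1.0, dlon=1.0):
    """Area of a dlat x dlon degree cell whose southern edge sits at lat_deg."""
    lat1, lat2 = math.radians(lat_deg), math.radians(lat_deg + dlat)
    return R**2 * math.radians(dlon) * (math.sin(lat2) - math.sin(lat1))

for lat in (0, 45, 80, 89):
    print(f"1x1 degree cell starting at {lat:2d}N: {cell_area_km2(lat):9.0f} km^2")
# Roughly 12,300 km^2 at the equator versus about 100 km^2 at 89N:
# two orders of magnitude, the polar-singular Jacobian in concrete terms.
```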
200 miles, of course, is the order of the size of large scale weather quasiparticles — hurricanes, for example. It is completely blind to most mundane thunderstorms, all tornadoes, and even simple local things like a daily/cyclic updraft over a farmer’s plowed field and an accompanying downdraft over a nearby cold lake, or the effect of hills and valleys. The coastal microclimates of e.g. California are lost to it — smeared out so that land that is basically hot desert may be considered to be nicely damp and temperate in temperature because the west slope of a mountain range through a cell is waterlogged and cool where the east slope is hot and dry. North Carolina has four or five distinct named microclimates (all basically climate “quasiparticles”) between the mountains and the coast. The coast itself is obviously a coastal microclimate, but in NC the coast is penetrated by enormous sounds and broad rivers so that a truly complex modulation of temperatures and weather by water occurs in almost a fractal way as one moves inland. Then there are the sandhills — so named because the soil is very sandy with different water-retention and evaporation properties — which have a unique pattern of thunderstorms and quite distinct weather from the piedmont, where I live (we still sometimes get weather with sandhills-like patterns even though our soil is basically hardpack clay under a thin layer of loam) up to the actual rocky mountains to the north and west, with a gradual ascension in height, reduction of surface water flow, and with more and more rock replacing the clay. Greensboro has a noticeably different climate than Durham, Asheville a very different climate than Durham. All of this complexity has to be reduced to around five cells across, around two or three cells deep, and the models AFAIK do not contain parameters to correct for things like surface area occupied by lakes and waterways, water retention of the soil, rock versus clay versus sand. They at best have some sort of mean height and local albedo.
Is it any surprise, really, that Pielke Sr. reports that GCMs do terribly at representing rainfall patterns? Those things depend on the ignored features of the terrain at a spatial resolution well beneath what the models can handle, whether or not the cells themselves are distorting outcomes along with their shape. And then there is the vertical decomposition of the atmosphere above each cell, and the vertical decomposition (if any) of the ocean below all of the cells that cover some 70% of the Earth. Do these cells matter? ENSO says that they do, they matter very much, they matter so much that models that cannot replicate ENSO haven’t got a prayer of tracking the actual climate and this is just one complex transport quasiparticle/structure in the system where heat flows in one place, is stored at depth in the ocean, is laterally transported as much as thousands of miles, and then re-emerges as hot surface water, is transported rapidly up to the stratosphere, and then spreads out globally to substantially alter the entire quasiparticle pattern of heating and cooling and rainfall and drought everywhere for an immediate effect lasting years. A sufficiently strong ENSO can clearly shift the attractors of the climate system, effectively reset the quasi-stable set point of the system, and influence the climate on a multidecadal time scale even across multiple El Nino and La Nina events.
Until we understand the scale and importance of the many, many quasiparticles in the self-organized system — quasiparticles that are driven by dissipation, that generally grow and become more efficient when fed energy — we are going to have a hard time even knowing how to build a functional GCM. It might involve (for example) using a variable resolution icosahedral grid where grid cells are decorated with another five or six or ten parameters that have (in aggregate, over time) non-negligible impacts on the time evolution of both weather and climate. It might require the insertion of quasiparticle time evolution rules to replace or correct supposedly “physics based” cell microdynamics — we have a hard time predicting the time evolution of hurricanes, for example, for exactly this sort of reason — tiny fluctuations (noise at the grid level) make microscopic/ensemble methods “blur” past a certain point until their predictions are no longer useful, where semi-heuristic rules involving macro-scale structures like ridges and high and low pressure centers and atmospheric shear might do better, and where most hurricane predictions rely on a mix of both with a touch of human judgement based on experience as to what sorts of errors both methods are prone to.
We can do that with hurricanes, but how can we do that with climate? Hurricanes spin a week, two weeks, and then are gone. The effect a hurricane has on the climate cannot be limited. The GCMs themselves produce a staggering range of possible output climates including some that are as cool or cooler than what we are experiencing from even smaller perturbations of initial conditions. It’s that damn butterfly in Brazil again, beating its wings and changing everything by far more than any simple heuristic rule can manage.
rgb
J.Herbst says
if this pattern repeats, then we will have 20 more years going down – more or less steep. About ten years of the 30 year down slope are already gone.
Henry says
looking at energy coming in
first table, maxima, here:
http://blogs.24.com/henryp/2013/02/21/henrys-pool-tables-on-global-warmingcooling/
we will have another 35 years or so before (global) cooling ends.
But where is J. Herbst? I have not seen any comment from him here in the comments.
“Since energy = power * (area*time) the power is a convenient shorthand for the energy received each second and is what is normally used [e.g. http://lasp.colorado.edu/home/sorce/ ]. I have not seen papers that discuss the climate in terms of the time integral of the power. ”
I don’t see what you are pointing me to at SORCE. Incoming radiation is usually measured in W/m^2 or equivalent: a radiative flux, which is instantaneous power, not energy.
A watt is a joule per second, so that is “the energy received each second”. It’s called power.
However, temperature is not power; it is a measure of thermal energy. So to relate incoming radiative power to a change in temperature, you need to look either at dT/dt vs power, or at temperature vs the integral of power, in a trivial radiation-hits-blackbody kind of experiment. In the case of a complex system it will be a complex response, with lags and a response function with feedbacks.
If you want to look for a direct correlation, either to suggest a solar linkage as the OP (incorrectly) did, or to refute it by its absence, you are in effect _assuming_ that equilibrium is reached with a delay much shorter than the time-scale you are considering, i.e. that there is negligible lag and the temperature record represents the integrated response function.
If you shine a powerful lamp on a blackbody, its temperature will not track the instantaneous power output of the lamp that intersects the body. It will follow the time integral of the incoming power, minus some kind of Planck negative feedback as it warms.
If the target is 0.5 g of aluminium foil painted black, illuminated by a 500 W lamp, you may be close to correlating instantaneous lamp power and the temperature of the body.
I don’t see the Earth climate system conforming to the tin-foil model. It seems you do. Maybe you could explain why.
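To make the distinction concrete, here is a minimal lumped-blackbody sketch (all numbers illustrative: the 0.2 W/m^2 “solar cycle”, the 240 W/m^2 mean absorbed flux, and both heat capacities are stand-ins, and this is a bare radiator, not the Earth):

```python
import math

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def peak_to_peak(C, years=120, dt=86400.0):
    """Integrate C*dT/dt = P_in(t) - sigma*T^4 (per unit area) with explicit
    Euler; return the peak-to-peak temperature swing over the final 11 years.
    C is heat capacity per unit area, J m^-2 K^-1."""
    T, temps = 255.0, []
    steps = int(years * 365 * 86400 / dt)
    for i in range(steps):
        t_yr = i * dt / (365 * 86400.0)
        p_in = 240.0 + 0.2 * math.sin(2 * math.pi * t_yr / 11.0)  # toy solar cycle
        T += dt * (p_in - SIGMA * T**4) / C
        temps.append(T)
    last_cycle = temps[-int(11 * 365):]
    return max(last_cycle) - min(last_cycle)

# Foil-like system (small heat capacity) versus ocean-like system (~1e9 J m^-2 K^-1):
for C in (1e7, 1e9):
    print(f"C = {C:.0e}: peak-to-peak response = {peak_to_peak(C):.4f} K")
```

With the small heat capacity the swing is essentially the full equilibrium response, in phase with the forcing; with the ocean-like capacity the same forcing yields a swing several times smaller and lagged by years – which is why a direct SSN–temperature correlation presupposes fast equilibration.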
Greg Goodman says:
February 9, 2014 at 8:16 am
I don’t see Earth climate system conforming to the tin foil model. It seems you do. Maybe you could explain why.
You are missing the whole point. A measure of the incoming energy is how much we get per second [and per square meter]; thus the humble TSI is a good shorthand for the energy received. How that influences the climate is a complicated deal and is not what the issue is, although the author of the post thought there was a direct relationship between sunspots [TSI varies as they do] and temperature.
“…. thus the humble TSI is a good shorthand for the energy received. ”
No, the humble TSI is a proxy of the power, so its _integral_ is a measure of the energy received.
You can’t just slip “per unit time” in brackets as if it didn’t really matter. It’s the difference between a quantity and its integral.
You are trying to say the ordinate is “a measure of” the area under the graph.
NO.
Greg Goodman says:
February 9, 2014 at 8:55 am
No, the humble TSI is a proxy of the power, so its _integral_ is a measure of the energy received.
TSI is a measure [not a proxy] of the energy we receive every second. YES.
lsvalgaard says:
“…. thus the humble TSI is a good shorthand for the energy received. ”
“TSI is a measure …. of the energy we receive every second. ”
So which is it ? Energy or power?
Greg Goodman says:
February 9, 2014 at 9:22 am
“thus the humble TSI is a good shorthand for the energy received. ” every second.
So which is it ? Energy or power?
TSI is a measure of the energy we receive every second, or which is the same ‘the power’.
The relevant physics is the Stefan–Boltzmann law which relates the power to the temperature.
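For reference, the textbook application of that law (standard figures, not from this thread): with S ≈ 1361 W/m^2 spread over the sphere (divide by 4) and an albedo α ≈ 0.3,

\[
T_{\mathrm{eff}} = \left(\frac{(S/4)(1-\alpha)}{\sigma}\right)^{1/4} = \left(\frac{340 \times 0.7}{5.67\times 10^{-8}}\right)^{1/4} \approx 255\ \mathrm{K}.
\]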
Time to remove your foot from your mouth.
there is a variation within TSI
mainly to do with the E-UV
I figured that there must be a small window at the top of the atmosphere (TOA) that gets opened and closed a bit every so often. Chemists know that a lot of incoming radiation is deflected to space by the ozone and the peroxides and nitrous oxides lying at the TOA. These chemicals are manufactured by the UV coming from the sun. Luckily we do have measurements of ozone, from stations in both hemispheres. I looked at these results. Incredibly, I found that ozone started going down around 1951 and started going up again in 1995, in both the NH and the SH. Percentage-wise, the increase in ozone in the SH since 1995 is much more spectacular.
HenryP says:
February 9, 2014 at 9:31 am
there is a variation within TSI mainly to do with the E-UV
No, not mainly. The variation of TSI measured in W/m^2 is much larger than that of EUV in the same unit.
lsvalgaard says:
TSI is a measure of the energy we receive every second, or which is the same ‘the power’.
Indeed, which is what I’ve been saying all along. So when you said :
“…. thus the humble TSI is a good shorthand for the energy received. ”
you were wrong. Hardly a “nitpick”; you were fundamentally wrong.
Time to admit you were wrong.
But of course Stanford professors never do that, do they? About as much chance as getting Happy Days’ Fonzie to say the word.
It’s unfortunate; your critical questions were a useful challenge initially.