Guest Post by Willis Eschenbach
The fundamental and to me incorrect assumption at the core of the modern view of climate is that changes in temperature are a linear function of changes in forcing. Forcing is defined as the net downwelling radiation at the top of the atmosphere (TOA). According to this theory, in order to figure out what the change in global temperature will be between now and the year 2050, you just estimate the change in net forcing between now and then, multiply it by the magic number, et voilà—the change in temperature pops out!
I find this theory very doubtful for a number of reasons. I went over the problems with the mathematics underlying the claim in a post called “The Cold Equations”, for those interested. However, I’m not talking theory today; what I want to look at is some empirical data.
The CERES dataset contains measurements of upwelling radiation at the top of the atmosphere. It also has various subsidiary datasets which are calculated from the CERES data, other satellite data, and ground measurements. These include the upwelling thermal (IR) radiation from the surface. I apply the Stefan-Boltzmann equation to that upwelling IR data in order to calculate the surface temperature. I’ve checked this data against the HadCRUT surface temperature data, and they agree very closely with the exception of certain areas around the poles. I ascribe this to the very poor coverage of ground weather stations around the poles, which has forced the ground datasets to infill these areas based on the nearest stations. Even with that polar difference, however, the standard deviation of the difference between the CERES and the HadCRUT monthly data is only 0.08°C, extremely small. The CERES data is more complete than the HadCRUT data, so I use it for the surface temperature.
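For anyone who wants to reproduce that step, inverting the Stefan-Boltzmann law is a one-liner. Here is a minimal sketch in Python, assuming unit emissivity (the function and variable names are illustrative, not from any CERES toolkit):

```python
import numpy as np

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def surface_temp_from_lw_up(lw_up_wm2, emissivity=1.0):
    """Invert F = eps * sigma * T^4 to recover surface temperature (K)
    from upwelling longwave flux (W/m^2)."""
    return (np.asarray(lw_up_wm2) / (emissivity * SIGMA)) ** 0.25

# A typical global-mean surface upwelling LW flux of ~390 W/m^2
# corresponds to ~288 K, i.e. about 15°C:
print(surface_temp_from_lw_up(390.0))
```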
Now, this lets us compare changes in the net TOA forcing imbalance with the changes in the surface temperature. For this kind of study we need to remove the effects of the seasons. We do this by subtracting the full-dataset average for each month from the data for that month. For each month, this leaves the “anomaly”—how much warmer or colder it is that month compared to the average.
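In code form, that deseasonalizing step looks something like this (a sketch only, assuming a monthly series whose first element is a January):

```python
import numpy as np

def monthly_anomaly(series):
    """Subtract the full-record average for each calendar month,
    leaving the anomaly. Assumes the series starts in January."""
    series = np.asarray(series, dtype=float)
    anomaly = np.empty_like(series)
    for month in range(12):
        anomaly[month::12] = series[month::12] - series[month::12].mean()
    return anomaly
```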
For example, here’s the temperature data, with the top panel showing the raw data, the middle panel showing the annually repeated seasonal variations, and the bottom panel showing the “anomaly”, how much warmer or cooler the globe is compared to average.

Figure 1. Raw data, seasonal changes, and anomaly of the CERES surface temperature dataset. Note the upswing at the end from the latest El Niño. The temperature has dropped since, but the CERES data has not been updated past February 2016.
According to the incorrect paradigm that says that changes in surface temperatures follow the changes in forcing, we should be able to see the relationship between the two in the CERES data—when the TOA forcing takes a big jump, the temperatures should take a big jump as well, and vice-versa. However, it turns out that that is not the case:

Figure 2. Changes in TOA radiation (forcing) ∆F versus changes in surface temperature ∆T. Delta (∆) is the standard abbreviation meaning “change in”. In this case they are the month-to-month changes. The background is a hurricane from space. I added it because I got tired of plain old white.
As you can see, in the CERES dataset there is no statistically significant relationship between the changes in TOA forcing ∆F and the changes in surface temperature ∆T. Go figure.
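For those who want to check this at home, the comparison is straightforward: first-difference both monthly anomaly series and regress one on the other. A sketch (the variable names are mine; you would load the actual CERES series yourself):

```python
import numpy as np
from scipy.stats import linregress

def delta_regression(toa_net, t_surf):
    """Regress month-to-month changes in surface temperature (degC)
    on month-to-month changes in TOA net flux (W/m^2)."""
    dF = np.diff(np.asarray(toa_net, dtype=float))
    dT = np.diff(np.asarray(t_surf, dtype=float))
    fit = linregress(dF, dT)
    return fit.slope, fit.rvalue**2, fit.pvalue
```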
Now, I can already hear some folks thinking something like “But, but, that’s far too short a time period for that small a change to have an effect … I mean, one watt per square metre over a month? The Earth has thermal inertia, it wouldn’t respond to that …”
So let’s take a look at a different scatterplot. This time we’ll look at the change in total surface energy absorption (shortwave plus longwave) versus the change in temperature.

Figure 3. Changes in surface energy absorption versus changes in surface temperature ∆T.
So the objection that the time span is too short is nullified. A change of one watt per square metre over a month is indeed able to change the surface temperature, by about a tenth of a degree.
Finally, is this just an artifact because we’re using CERES data for both surface temperature and total surface energy absorption? We can check that by repeating the analysis, but this time we’ll use the HadCRUT surface temperature data instead of the CERES data …

Figure 4. As in Figure 3, but this time using HadCRUT surface temperature data.
While, as we’d expect, there are differences between the two surface temperature datasets, in both cases a difference of one watt per square metre over a month clearly changes the surface temperature.
So we are left at the end of the day with Figure 2, showing that there is no significant relationship between changes in TOA forcing and surface temperatures.
Note that I am NOT claiming that this method can determine the so-called “climate sensitivity”. I am merely pointing out that the CERES data does not show the expected relationship between changes in net TOA radiation imbalance and changes in surface temperature.
Best to all,
w.
As Usual: When you comment, please QUOTE THE EXACT WORDS YOU ARE DISCUSSING so we can all be clear just what you are referring to.
What happens in graphs #2, #3 and #4 when you include the time-variant lines between the points?
That is what Spencer and Braswell do (e.g. see Fig. 3a in the linked PDF of the paper). The regression trend is one thing but the feedback response is quite different, as they demonstrated.
The Spencer and Braswell paper points out the problem, but they say they can’t see a way to untangle the two. I can. At least a first approximation.
“On Determination of Tropical Feedbacks”
https://climategrog.wordpress.com/2015/01/17/on-determination-of-tropical-feedbacks/
Bruce, just tried that. There’s no pattern at all, it just looks like an exploding star …
w.
Greg December 18, 2017 at 4:20 pm
Mmm … I thought the same so I used a “robust” linear analysis which basically ignores outliers … same outcome.
Not only that, but there is an active response to temperature that varies both place to place and time to time …
w.
Sorry Willis, probably too late in coming back in this comment. The “robust” method does not help, because it is not the outliers which are the cause of the problem; it is the fact that you have an error-laden abscissa, not a controlled variable. READ the article I linked; it explains this in detail and shows the huge errors caused by ignoring that OLS is only valid with minimal x errors, NOT for scatter plots.
If you want ‘robust’, invert the axes and then try to work out which value you want to believe 😉
Let me note in passing that we’d expect a change of 0.18°C from a 1 W/m2 change (or vice versa) from Stefan-Boltzmann at earth surface temperature … the results above show about half of that. That plus the scatter in all three charts shows that there are other important variables.
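For anyone checking that number: it comes from differentiating the Stefan-Boltzmann law, F = σT⁴, at a representative surface temperature of 288 K:

dT/dF = 1/(4σT³) = 1/(4 × 5.67×10⁻⁸ × 288³) ≈ 0.18 K per W/m²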
w.
Remember that TSI only varies by 0.25 watts per meter squared over the surface of the earth during the solar cycle (the earth being a sphere). We should actually expect 1 watt/m2 to produce 0.3°C without feedbacks. A warming of only 0.1°C per 1 watt/m2 would equal an ECS of just 0.6°C…
(Ha! “jinx”, Willis! I’m going off the stated 1.1°C per doubling of CO2; that is, 1.1°C divided by 3.7 watts/meter squared)…
alfonzarelli,
The Stefan-Boltzmann sensitivity is given by 1/(4εσT³), where ε is the emissivity (between 0 and 1), σ is the Stefan-Boltzmann constant, and T is the temperature in kelvin.
The sensitivity of an ideal BB at 288 K is about 0.18 K per W/m^2. The sensitivity of an ideal BB emitting 240 W/m^2 (255 K) is about 0.27 K per W/m^2. The sensitivity of a non-ideal BB (gray body) with an emissivity of about (255/288)^4 = 0.61 and whose temperature is 288 K is 0.30 K per W/m^2.
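Those three numbers are easy to verify. A quick sketch:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def sb_sensitivity(T, emissivity=1.0):
    """Stefan-Boltzmann sensitivity dT/dF = 1/(4*eps*sigma*T^3), K per W/m^2."""
    return 1.0 / (4.0 * emissivity * SIGMA * T**3)

print(sb_sensitivity(288))                           # ~0.18, ideal BB at 288 K
print(sb_sensitivity(255))                           # ~0.27, ideal BB emitting 240 W/m^2
print(sb_sensitivity(288, emissivity=(255/288)**4))  # ~0.30, gray body at 288 K
```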
The gray body sensitivity of 0.3 represents the path from the surface to space and sets the maximum possible sensitivity as it represents the minimum rate of cooling. The black body sensitivity of the surface at 0.18 represents the minimum as it represents the maximum rate of heating. The actual sensitivity is somewhere in between and most likely closer to the lower limit.
The SB sensitivity is equivalent to the ECS. The TCS will be lower since it takes applying the W/m^2 of forcing for 5 time constants to achieve about 99% of the SB effect. Being applied for 1 time constant would result in about 63% of the final value. This is why the value measured by shorter term changes is less than the ECS value.
The reason I use 2.5 degree slices to slice up data for scatter plots is that the difference in insolation between slices has been the same for many, many time constants, thus the relative behavior of adjacent slice averages is representative of the ECS, while absolute differences between months is more indicative of the TCS.
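The 63% and 99% figures above follow from the usual first-order exponential response, 1 − exp(−t/τ). A quick check:

```python
import math

# Fraction of the equilibrium response reached after n time constants:
for n in (1, 2, 3, 5):
    print(n, round(1 - math.exp(-n), 3))  # 1 -> 0.632 ... 5 -> 0.993
```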
clueless alert.
Is the idea that radiation arrives from the Sun and hits the atmosphere, some bounces off and some gets through; and of that which gets through, some bounces off the surface, hits the atmosphere, and bounces back to the surface?
More CO2 means more bouncing back down to the surface, causing greenhouse warming. But does this not mean that, of the radiation that arrives from the Sun, more bounces off into space?
zemlik, CO2 doesn’t absorb radiation of the frequencies that come from the sun. It absorbs the “thermal infrared” that comes from the earth.
And on my planet, the only stupid questions are the ones you don’t ask.
w.
thanks guys
“Does this not mean that of the radiation that arrives from the sun more bounces off into space?”
No. Absorption of radiation is based on the wavelength. In clear sky (no clouds), the atmosphere is nearly transparent to visible light, which is mostly what we receive from the sun. There is a bit of UV that is absorbed by ozone, but in simple terms it almost all gets to the surface.
The wavelengths of radiation that a surface emits are based on its temperature. These wavelengths can be found using a Planck curve calculator such as this:
http://www.spectralcalc.com/blackbody_calculator/blackbody.php
So bodies around the temperatures we see on earth will emit in the infrared range (once a body gets up to about 780 K it will begin to glow because the wavelengths are beginning to get into the visible range). The atmosphere isn’t transparent to infrared the same way it is to visible light. Some wavelengths, like the ones absorbed by CO2, don’t make it more than a few meters from the ground before being absorbed, while others do have a clear shot to space from the ground.
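If you want to see where those numbers come from without the full calculator, Wien’s displacement law gives the peak of the Planck curve. A small illustration:

```python
WIEN_B = 2898.0  # Wien's displacement constant, um*K

def peak_wavelength_um(T_kelvin):
    """Peak wavelength (um) of blackbody emission at temperature T (K)."""
    return WIEN_B / T_kelvin

print(peak_wavelength_um(288))  # ~10 um: Earth's surface emits in the thermal IR
print(peak_wavelength_um(780))  # ~3.7 um: peak still IR, but the short-wavelength
                                # tail now reaches visible red, so the body glows
```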
Radiation that is absorbed by a gas will be emitted by the gas at nearly the identical wavelength that it was absorbed at. Emission can happen in any direction. So if we take a large sample (a thin layer 10 meters off the ground, for example) and could see just its emitted radiation, we would see that approximately half is emitted up and the other half down toward the surface.
More CO2 means more opportunities for radiation to be absorbed and stay in the system rather than leaving. And since the rate of incoming energy more or less remains constant, but the outgoing has been reduced, the surface will warm. A warmer surface emits more radiation and will thus restore balance, but now at an increased temperature.
This is a very simplistic explanation of the greenhouse effect.
Ok Brad, at this site we are all familiar with the greenhouse effect. IMHO too many (on the warm side) infer large climate change from an effect estimated in tenths of a degree. They are most irritating when they insist extrapolation of short-term trends along with comparison of proxy to instrumental data is valid as evidence; it isn’t … they are not being scientific.
The issue is the phenomenon’s magnitude. There is as yet no answer, what say ye?
A reader was asking about the greenhouse effect, so not everyone here is familiar.
What I know from my reading is that a doubling of CO2 amounts to 3.7 Wm-2 of radiation being received at the surface. The implications of that are very complicated, and I’m not going to even attempt to get into that because I don’t know it well enough.
I would agree the forcing and changes seem minute, and in hindsight of where the planet has been before it often can seem irrelevant. We’ve been here before, we’ll be fine.
What I see as concerning is the rate at which we are approaching these changes. In the past CO2 levels may have been higher, but it was a gradual process that would have allowed for adaptive changes within the ecosystems. Changes are happening much more quickly now, and as such these systems won’t adapt as quickly.
I agree there is as yet no answer. How long do you want to wait to find out, though?
Thanks for the reply Brad.
You say: “Changes are happening much quicker and as such these systems won’t adapt as quickly.” So I have to ask: since when?
To compare current rates of change with previous rates of change means you are comparing proxy with instrumental data. And, as we all know, the proxy data lack the temporal resolution required for a valid comparison with the current instrumental record.
When it comes to “man-made global warming” and “alternative energy” my interest is centered on what is and isn’t known. Specifically where the science ends and the supposition begins. It is my observation, when it comes to media, the uncertainties surrounding this issue are not properly addressed.
Regards, M.W.
And Brad, what about natural variability? In millennial (proxy reconstruction) terms we are warming; the recent neoglacial of the current interglacial appears to have ended 150 years ago. This is revealed in the glacial retreats of both hemispheres.
The “Little Ice Age” (on average 2° colder and wetter than today) began over 700 years ago with the end of the multi-centennial “Medieval Warm Period” (average 2° warmer/dryer). Duration and temperature estimates of these two periods vary. As the LIA tightened its grip the Viking settlements of the North Atlantic became inhospitable. Sea levels lowered, ice floes hindered navigation, crops withered, farm animals died and the Norse went home. The favourable climate from several decadal warming trends during the LIA and superior sailing skills may have facilitated the European discovery of and settlement in the Americas. A 70-year period of low sunspot activity, named the “Maunder Minimum”, beginning in 1645 coincides with the LIA’s coldest decades.
Estimates of the lowest LIA sea levels are as much as 2 ft. below today’s. Thermal expansion of the ocean’s water at the MWP’s peak places estimates of sea levels as much as 1 ft. higher than today’s. A portion of the 0.8°C mean temperature increase of the past 150 years is attributed to the recovery (which continues to this day) from the LIA. Sublimation from nightly drier air is reducing the planet’s mountain glaciers to their previous “normal” positions. Glacial retreat moves quickly in comparison to glacial advance. Retreating ice is now revealing remnants of previous climate periods.
End moraines of glacier advances prior to the MWP indicate the glaciers of the LIA are the longest of the Holocene. The MWP/LIA is the last of four cycles of minor glacial advance and retreat in the past 6,000 years, perhaps linked to solar activity (sunspot) cycles. The colder “Dark Ages” and the “Roman Warm Period” were the previous cycle. We may now be at or past the start of the warm half of the next millennial cycle, the “Current Warm Period” which, if like the last, may have legs for another century or two.
I realize our opinions on this topic may differ, I hope I’ve been helpful.
Regards, M.W. Plia.
Yes, so what were the forcings that drove the warming and cooling of the MWP/LIA? Oftentimes people will attribute them to changes in solar activity. Generally speaking, the solar activity for these periods fluctuated by approx 2 Wm-2 from MWP to LIA.
A doubling of CO2 leads to 3.7 Wm-2. This makes the 2 W that took us from MWP to LIA seem trivial. Of course this is all proxy and conjecture. What do you think to be the forcing for the MWP and LIA?
No. Different wavelengths.
You of course did not invent it but you go along with it. My remarks are really meant for whoever invented this stupidity. If it is the IPCC, as you suspect, it says a whole lot about them. For one thing, it is obvious that they have no working scientist who knows what to do with data. Not all data have meaning, and if you know your field you can instantly spot the difference. Before there were computers I had to plot spectrochemical data on a variety of pre-printed graph papers. But graphs like yours were never made, because data of that type was obviously useless trash. The whole lot of graphs you show, pretending to show some aspect of science, is nothing but trash and should never have been shown in a scientific article anywhere. I knew how to dispose of it long ago, but they are now elevating that trash pile, trying to resuscitate its denizens, and pretending they are doing science. Arno Arrak
Arno Arrak December 18, 2017 at 7:14 pm starts out abysmally …
Arno, this is why you really should follow my request and QUOTE THE WORDS YOU ARE DISCUSSING. I have no idea who “you” is. I have no idea what “it” is. And I’m not going to either research or guess to find out. I stopped after your first sentence.
w.
Nope. Didn’t get it, not going to look for it. If you want an answer QUOTE WHAT THE HECK YOU’RE REFERRING TO. You sure you wrote it on this post?
Also, could you dial back on the ad hominem attacks? Where I got my knowledge is immaterial. I’m self-taught in science … so sue me.
w.
(Nathaniel is banned) MOD
Nathaniel December 19, 2017 at 12:05 am
(He is banned) MOD
Despite being a dab hand at MIG, TIG, stick, gas, brazing, and underwater welding, I’ve never taken a basic welders course … so what? How much underwater welding have you done?
w.
Willis
Great post – great comments
Have a great Christmas
Willis ==> It is no surprise that ∆T and ∆F are not linearly related. T (temperature) is a property of matter (air in this case) relating to its energy level that can be measured as relative sensible heat (obviously — just to set the stage). Temperature itself is not a “thing” and is not an inherent property like “mass”. Of the mathematical formulas for heat transfer, we can say this:
The key point is that heat transfer in a fluid (atmosphere) is itself a non-linear process and thus it is highly unlikely that global average 2-meter-above-surface air temperature would be linearly related to TOA radiation. Way too many other energy interactions, themselves mostly non-linear, in between.
Good to have the point nailed down, though. Thank you.
The fundamental and to me incorrect assumption at the core of the modern view of climate is that changes in temperature are a linear function of changes in forcing.
I have not seen that in my readings. A citation and exact quote would be helpful.
Thanks, Matthew. Here are a few of an uncountable number. Let’s start with the IPCC, WGI, Chapter 8, p 664
As you might expect, this definition is echoed all over the web.
Climate sensitivity: λ = ΔT_eq / ΔF_atmos
University of Washington Atmospheric Sciences
Let’s estimate ECS for our zero-dimensional model. We know that the warming for any given radiative forcing R is
ΔT_s = −R/λ
University of Albany Atmospheric Sciences
Temperature change = climate sensitivity × forcing
ΔT = λ ΔQ
Jagiellonian University Poland
Start with climate sensitivity:
λ = ΔT / ΔF
It captures the temperature response to a change in forcing.
Steven Mosher
The predicted change in the average planetary surface temperature is
ΔT ≈ [0.3 K·(W·m⁻²)⁻¹] × (2.2 W·m⁻²) ≈ 0.7 K
American Chemical Society
When radiative forcing F is applied to the TOA, the energy budget equation with the net TOA radiation, called N, may be written in the simplest form as
N = F − λΔT
where λ is termed the climate feedback parameter and represents how much energy is lost to space in accordance with the unit increase of the global mean surface temperature T (e.g., Gregory et al. 2015;
Progress in Earth and Planetary Science
Cal Tech
The climate sensitivity parameter, λ (pronounce “lambda”), is more general and isn’t necessarily connected with CO2. It is the change of the near-surface air temperature, ΔT, that you obtain from a unit change of the radiative forcing, RF – which is the net irradiance measured at the tropopause (the boundary between troposphere and stratosphere, i.e. the upper boundary of the atmosphere where “weather occurs”):
ΔT = λ · RF
Lubos Motl
Climate sensitivity is inversely related to the feedback factor:

Description, MAGICC program
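All of these boil down to the same one-line multiplication. A minimal illustration, with purely illustrative numbers (a λ of 0.5 K per W/m² and the canonical 3.7 W/m² for a CO2 doubling):

```python
lam = 0.5   # climate sensitivity parameter, K per (W/m^2) -- illustrative only
dF  = 3.7   # radiative forcing from a doubling of CO2, W/m^2
print(lam * dF)  # ~1.9 K: the "multiply by the magic number" step
```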
Gotta say, I’ve been surprised at people asking me for examples of that, they’re everywhere …
Best regards,
w.
Willis,
IPCC:
“The assumed relation between a sustained RF and the equilibrium global mean surface temperature response (∆T) is ∆T = λRF where λ is the climate sensitivity parameter.”
Yes. But you say something quite different
“According to this theory, in order to figure out what the change in global temperature will be between now and the year 2050, you just estimate the change in net forcing between now and then, multiply it by the magic number, et voilà—the change in temperature pops out!”
Not a sustained RF, and not an equilibrium response. And then further:
“According to the incorrect paradigm that says that changes in surface temperatures follow the changes in forcing, we should be able to see the relationship between the two in the CERES data—when the TOA forcing takes a big jump, the temperatures should take a big jump as well, and vice-versa.”
This is nothing like the IPCC statement. The point is, as I said above, temperature at any time depends on the forcing history. The definition of ECS is deliberately framed to ensure that there is only one effective item in that history: a step change in the distant past. It is acknowledged that this is very hard to achieve in observation, and even in modelling. But that is what the IPCC definition refers to. It does not mean that you can get the temperature in 2050 by a rough characterisation of the history of 33 years.
All your other examples are referring to this concept of equilibrium climate sensitivity.
Willis, thank you.
I don’t give much weight to the comments of Steve Mosher, but the others are clearly substantial.
I regret that in my readings I have not made a searchable data base or bibliography. It’s one of my perennial new year’s resolutions.
matthewrmarler December 20, 2017 at 12:32 am
You’re welcome, Matt. As to Mosh, I have a different view of him than most folks, perhaps because I’ve met him. He’s a very smart guy who has the unfortunate habit of posting cryptic comments which might be true but it’s hard to impossible to tell … as a result, I do pay attention to what he says.
Best regards,
w.
Willis, IIRC it is the Gregory & Foster 2015 paper which is the only mainstream acknowledgement of the OLS bias problem. They do look at other methods, but hide it in an appendix and don’t refer to it in the abstract.
BTW Nick is correct: the dT vs dRad relationship is the long-term response and will not be seen in short-term data, where it is dT/dt which is driven by radiation. This is basic physics.
You need to be looking at timescales greater than several “time constants” for the global climate to get the dT vs dRad relationship.
Nick Stokes December 19, 2017 at 4:17 pm
Nick, you might have noticed that I’ve pretty much given up discussing things with you. Why? Because in all of the time we’ve interacted you’ve never admitted that you were wrong and someone else is right. You always start out with “Yes, but …” and go from there.
It’s why people gave you your nickname some years ago, “Racehorse” Nick Stokes, after the famous Texas lawyer “Racehorse” Haynes. He famously said:
That’s the level of your discussion. No matter what anyone says, Racehorse Nick is never, ever wrong, and if he is wrong, he doesn’t have a dog …
Sorry, but your schtick has ground me into the ground. I’ve given up. You’ll have to address your statements about the dog to someone else, I’m tired of them. Don’t worry, though … there are still plenty of people here who I’m sure you can fool.
Just don’t count me among them. Address any future comments to them. You’ve pulled your dog act one too many times, I’m over it. Talk to someone else, there’s a good fellow.
Sadly,
w.
Willis, this is an odd time to spit the dummy. He has quoted you, been polite, and made a very clear point which shows your argument and logic to be lacking. And your response is to never speak to him again? That says more about you than him.
It is also an “odd” time to give up because Nick Stokes actually knows his stuff and Willis should be listening and learning and checking who is correct instead of “giving up”.
Seems like Willis is having trouble finding a logical reply to Nick’s point. Saying “I can’t find a reply so I will ignore you” is not impressive. I won’t criticise Nick for not admitting he is wrong until I can show he is wrong. The other reason for not admitting you are wrong is not being wrong.
Mat, if you think arguing with Nick about what “sustained” means will get you anywhere, I encourage you to go for it. My point is simple. If ∆T = lambda ∆F in some kind of “long run”, then it has to do so when averaged over shorter time scales as well. Yes, you won’t get the same result, but you will get A result.
In this case, I’m looking at 16 YEARS of data. Are you (and Nick) seriously claiming that there will be a response at say 100 years, but NO response at 16 years? Really? I assuredly can’t prove that there would be, but after sixteen years we should see something …
But if you think I’ll discuss that with Nick? Fugeddaboudit. He’ll just tell me “I don’t have a dog”. I’m sorry you don’t like it but the man didn’t get his nickname by accident, and I’m not the man who gave him the nickname. You’re free to discuss things with him. I’ve given it up.
Nor is this particular issue the reason. As I said, and as you guys didn’t seem to notice …
Not that “I’m giving up”, that I “pretty much have given up”. I’ve been carefully limiting the topics I engaged with him on to ones where there is a clear answer one way or another … and that’s not true about the question “If we should see an effect in 100 years, will there be an effect in 16 years”? No way to prove that, although I clearly think the answer is “yes” …
Finally, what people seem to ignore in discussing these things is that generally, we’re talking about exponential curves. The exponential response is why it’s supposed to take a hundred years to equilibrate. It’s characteristic of heat transfer.
As an example, suppose you shine a heat lamp on a block of iron. At first, it heats quickly, then over time the heating slows down.
My point is that even with things that don’t reach equilibrium for say a century, most of the change happens in the first decades … something which Nick seems to think doesn’t occur, and something I’m unwilling to debate with him—he’ll just say his dog doesn’t bite.
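To put a number on it, here is a sketch with a purely hypothetical 20-year time constant:

```python
import math

def fraction_equilibrated(t_years, tau_years=20.0):
    """First-order exponential approach: fraction of the final warming
    realized after t years (tau is hypothetical, for illustration)."""
    return 1.0 - math.exp(-t_years / tau_years)

# Most of the response arrives in the first decades even if
# full equilibrium takes a century:
for t in (10, 20, 50, 100):
    print(t, round(fraction_equilibrated(t), 2))  # 0.39, 0.63, 0.92, 0.99
```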
Seriously. I’ve seen Nick clearly shown to be completely wrong by guys smarter than me, with all of them agreeing, and at the end Nick just smiled and said “I don’t have a dog”. People don’t get nicknames by accident … engage with him at your own risk.
I trust this clarifies my position.
w.
Mr Eschenbach describes himself to a “T” in that comment.
He uses it regularly when he does not have the answers and repetition of his position has failed.
Next will come the put downs, the invective and finally downright insults.
Just read through any of his posts to see the pattern.
I just read this, and thought it might be worth adding to the conversation.
This explains why they see a CO2 spectrum in outgoing IR, even if it isn’t doing anything besides lighting up because it is exposed to 15 µm radiation from condensing water vapor.
ferdberple December 18, 2017 at 11:10 pm
Thanks, ferd. Let’s put some numbers on that. The deep ocean conveyor goes from about 75°N or so to the equator. That’s about … hang on … about 8000 km in 800 years. That means that the currents are moving on the order of 10 km/year, which is about a metre per hour.
By comparison, the gulf stream in parts moves about a metre per second …
At that rate, even if the overturning is not constant, it is far too slow to make much of a difference on human timescales …
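Checking that arithmetic (rough figures as stated above):

```python
distance_km = 8000.0    # roughly 75°N to the equator along the conveyor path
transit_years = 800.0
km_per_year = distance_km / transit_years       # 10 km/yr
m_per_hour = km_per_year * 1000 / (365 * 24)    # ~1.1 m/hr
print(km_per_year, m_per_hour)
```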
My best to you,
w.
Because I did not notice any real figures for the emitted radiation fluxes, I attach here my figure, which shows that the emitted radiation flux of the Earth’s surface is essentially linear over a very broad temperature range:

Below is a figure showing a very linear relationship between the radiative forcing at the TOA and the surface temperature change, showing that the IPCC simple climate model dT = CSP·RF is relevant:

How did you measure the outgoing IR and the mean global temp the last time the CO2 was 1370? Total BS. You don’t even say what this is. Model? Proxy? WTF
That too shows surface cooling is well regulated. I thought it was saying something else, which started this reply.
There is a continuous flux out the optical window to space. This morning, the zenith sky temperature was −65°F, air temps were 38°F, having dropped almost 21°F overnight as the clouds went away, but the sky wasn’t very clear. I looked to see if I was missing a good night for astrophotography, and I wasn’t.
On clear nights that cooling slows or stops; while ~30% of the SB flux is going straight to space, those losses are being supplied by sensible heat from condensing water in the atmosphere.
Oh, but for the same reason the first chart is right, the second is wrong.
Willis, thanks for the R^2 figures, something I requested as soon as I read this.
Could you provide a table of the data used in the scatter plots. I have a couple of things I’d like to look at but don’t have the free time to do all that from the raw data.
regards.
Greg, I’ve posted them up here.
Best,
w.
I see Nick Stokes is frustrated … so he’s Tweeted that he’s retreating to his blog to explain to the faithful just how right he is.
Now if we could just get some others to follow him over there …
Plus, what’s that about an “IPCC model” and a straw man “model”? I certainly said nothing about models … but Nick’s never been a man to let the facts get in his way.
w.
Just checked. Not one person here has said one word about an “IPCC model”; Nick made that up out of whole cloth … remember what I said about him and his dog? He steps out the door and he’s already lying …
w.
Willis, it’s not all about you. My post was about two recent WUWT articles. I said
“The threads in question are at WUWT. One is actually a series by a Dr Antero Ollila, latest here, “On the ‘IPCC climate model’”. The other is by Willis Eschenbach Delta T and Delta F.”
It’s right there in the heading.
Yes, you didn’t refer to it explicitly as a model. You said:
“According to this theory, in order to figure out what the change in global temperature will be between now and the year 2050, you just estimate the change in net forcing between now and then, multiply it by the magic number, et voilà—the change in temperature pops out!”
That sounds functionally the same as Dr Ollila’s “IPCC climate model”. And just as unsourced.
Nick Stokes December 20, 2017 at 9:12 pm
Nick, your tweet opens like this:
Dude, when the first and *only* name mentioned in your tweet is mine, I fear that in the eyes of your readers it IS all about me. Mine is the only name they have to hang your accusations on. You specifically said that I was misusing definitions to make a straw man “model”, when I didn’t say one word about a model.
But yes, I get it, Racehorse. Your dog was tied up all night, and besides, you don’t have a dog. So take your dog back to your blog, where your sycophants can tell you how wonderful you are, and nobody will call you Racehorse.
w.
Willis, be nice. Poor Nick can’t help himself, his viewpoint is “mangled” by his own bias. And as we know, it’s not possible for Nick to ever be wrong.
Pity him.
Willis,
I wrote the post and title first. The tweet came later. If I had known that “model” was a sensitive term to you, I would have associated it more specifically with Dr Ollila in the tweet. But WUWT has been running now a series by Dr O about an IPCC model, proclaimed in the title of his latest post, “On the ‘IPCC climate model’”. You are using the same equation, derived from the same IPCC definition. You assign to it the same functionality. I was writing about both. I’m sorry if it seems important to you that I should not also refer to your use as a model. I will try not to do it again.
Nick Stokes December 21, 2017 at 3:10 am
Nick, you did your best to rubbish my name on Twitter. Now here you are telling us you did no such thing, your dog doesn’t bite, it’s all about the order in which you wrote the post and the tweet, and besides your dog was tied up the whole time, and it has to do with some series by someone named “Dr. O.” that as far as I know I’ve never read a word of, and in any case, you don’t have a dog …
You might get away with that underhanded attempt at guilt by association on your blog. Here, I’ll call you on it every time.
w.
PS—”Model” is not a “sensitive term” for me, that’s just another of your pathetic attempts at deflection. What I’m sensitive about is you lying by putting words about “IPCC models” in my mouth that I never said.
Willis, you know the old saying about the relationship between the amount of flak you’re taking and your proximity to the target?
Plus, here’s an instructive piece from the much-missed MemoryVault that might be relevant, as true today as when it was first posted.
https://libertygibbert.com/2010/08/09/dobson-dykes-and-diverse-disputes/
Delta T and delta F are all very well, but they’re actually rather like arguing about how many angels can dance on the head of a pin if the assumptions underlying the calculations are bogus. Consider the following:
Is climate science “settled,” or perhaps “unsettling?” Since 1998, the elevated, but essentially flat temperatures of the so-called “global warming hiatus” (and one El Niño event), have shown no correlation whatever with steadily rising atmospheric CO2. This is extremely damaging to the credibility of the once almost universally trusted mechanism of CO2/warming. Despite this inconvenient reality, most of us still cling to this theory, failing to realize that it actually has no hard-data support in the peer-reviewed literature. Feldman et al., 2015, is often cited as “proof” of the link, but even this “landmark” paper uses correlations and theoretical arguments rather than hard data which, of course, is scientifically indefensible. A realistic search for an alternative to this long-trusted link that better reflects what is actually happening to global temperature clearly seems warranted. The question is, what mechanism might better account for these actual, real-world observations? First, we might consider when warming has actually occurred.
The only episode of global warming during the past 50 years that can be clearly identified occurred from 1975 to 1998, when global temperatures shot up dramatically by almost a centigrade degree, making this an obvious first place to look for an alternative mechanism. This also happens to be the same period during which anthropogenic CFCs were freely introduced into the atmosphere. This was stopped in the 1990s by the Montreal Protocol, which banned further CFC production because it was found that the chlorine in CFCs was released by photodissociation on polar stratospheric clouds, whereupon it would destroy stratospheric ozone, thus depleting Earth’s protective ozone layer. This depletion, in turn, would permit greater irradiation of Earth’s surface by ionizing solar ultraviolet-B radiation, whose normal ozone-destroying function was taken over by anthropogenic chlorine. Concern at the time was limited to severe sunburn and genetic defects from UV-B, but if this powerful radiation could cause severe sunburn and genetic defects, it could certainly also cause global warming. It’s hardly unreasonable, moreover, to expect that significant climatic effects should have resulted from so large a disruption of such a major part of Earth’s atmospheric system.
Why, then, have we had two decades of elevated temperatures? Simply because most of the chlorine that we introduced to the atmosphere is still up there, and still destroying ozone catalytically, that is, the chlorine is not itself destroyed in the process, but a single chlorine atom can destroy over a hundred thousand ozone molecules in a cyclical process. Hence, the ozone shield is still depleted and likely to remain so for several more decades. Therefore, assuming the foregoing warming mechanism is valid, the so-called “hiatus” should be in effect at least through mid-century.
Why is CO2 not a likely warming agent? Because despite its well-documented absorption of a portion of Earth’s heat radiation, absorption and emission actually happen within a waveband (13 to 17 microns wavelength) that corresponds to temperatures well below those of Earth’s surface, an important fact unfortunately ignored by climate scientists, and as is well-known, cooler objects (here, CO2) can’t transfer heat to warmer ones (here, Earth’s surface). The fact is that back-radiation from CO2 is simply too weak (it emits only as a line spectrum) and too “cold” to have a significant greenhouse effect in the Earth environment.