Hot on the heels of the Lewis and Curry paper, we have this new paper, which looks to be well researched, empirically based, and a potential blockbuster for dimming the alarmism that has been so prevalent over climate sensitivity. With a climate sensitivity of just 0.6°C, it takes the air out of the alarmism balloon.
The Hockey Schtick writes: A new paper published in the Open Journal of Atmospheric and Climate Change by renowned professor of physics and expert on spectroscopy Dr. Hermann Harde finds that climate sensitivity to a doubling of CO2 levels is only about 0.6°C, about 5 times less than the IPCC's central estimate, but in line with many other published low estimates of climate sensitivity.
The paper further establishes that climate sensitivity to tiny changes in solar activity is comparable to that of CO2 and by no means insignificant as the IPCC prefers to claim.
The following is a Google translation from the German EIKE site with an overview of the main findings of the paper, followed by a link to the full paper [in English].
Assessment of global warming due to CO2 and solar influence
Currently the climate sensitivity (discussed, for example, here) is claimed by the IPCC to have a most probable mid-range value of 3.0°C (AR4), but others have determined much lower values of 1.73°C, 1°C, or even 0.43°C. Prof. Hermann Harde, renowned physicist and spectroscopist, has determined in his new paper that the climate sensitivity is 0.6°C.


Advanced Two-Layer Climate Model for the Assessment of Global Warming by CO2
Hermann Harde* , Experimental Physics and Materials Science , Helmut-Schmidt-University, Hamburg , Germany
Open Journal of Atmospheric and Climate Change, In Press
Abstract

CO2 concentration, based on a combination of thermally and solar induced cloud feedback.
Based on the HITRAN-2008 database [4] detailed spectroscopic calculations on the absorptivities of water
vapour and the gases carbon dioxide, methane and ozone in the atmosphere are presented.
The line-by-line calculations for solar radiation from 0.1–8 mm (sw radiation) as well as for the
terrestrial radiation from 3–100 mm (lw radiation) show, that due to the strong overlap of the CO2 and
CH4 spectra with water vapour lines the influence of these gases significantly declines with increasing
water vapour pressure, and that with increasing CO2-concentration well noticeable saturation effects are
observed limiting substantially the impact of CO2 on global warming.
based on actual data of the water vapour content, which is considerably varying with altitude above ground
as well as with the climate zone and, therefore, with the temperature. The vertical variation in humidity
and temperature as well as in the partial gas pressures and the total pressure is considered by computing
individual absorption spectra for up to 228 atmospheric layers and then integrating from ground level up
to 86 km altitude.
atmosphere and therefore on the geographic latitude and longitude, is included by considering the Earth
as a truncated icosahedron (Bucky ball) consisting of 32 surface elements with well defined angles to the
incident radiation, and assigning each of these areas to one of the three climate zones.
by the atmosphere itself, as well as their variation with temperature are derived from radiation transfer
calculations for each zone.

To identify the influence of the absorbing gases on the climate and particularly the effect of an
increasing CO2-concentration on global warming, we developed an advanced two-layer climate model,
which describes the Earth’s surface and the atmosphere as two layers acting simultaneously as absorbers
and Planck radiators. Also heat transfer by convection and evaporation between these layers is considered.
At equilibrium, the surface as well as the atmosphere each deliver as much power as they absorb from
the sun and the neighbouring layer or climate zone.
considering multiple scattering between the surface and clouds. It also includes the common feedback
processes like water vapour, lapse rate and albedo feedback, but additionally takes into account the
influence of a temperature dependent sensible and latent heat flux as well as temperature induced and
solar induced cloud cover feedback.
budget scheme of Trenberth et al. [20], which at a reference CO2 concentration of 380 ppm and a ground
temperature of 16 °C can well be reproduced.
and the lower atmospheric temperature are calculated as a function of the CO2 concentration. From the
temperature variations, found at doubled CO2 concentration, the CO2 climate sensitivity and air sensitivity
are derived.
are extensively discussed. While the albedo- and to some degree the lapse rate feedback are adopted from
literature, the water vapour feedback is derived from the sw and lw absorptivity calculations over the
different climate zones. With an amplification at clear sky conditions of 1.5 and at mean cloud cover of
1.2, these values are smaller than assumed in other climate models [27, 28].

Since it is found that with increasing CO2 concentration the air temperature is less rapidly increasing
than the surface temperature, the convection at the boundary of both layers rises with the concentration.
As a consequence more thermal energy is transferred from the surface to the atmosphere. Similarly, with
increasing temperature also evaporation and precipitation are increasing with the ground temperature.
Both these effects contribute to negative feedback and are additionally included in the simulations.
A special situation is found for the influence of clouds on the radiation and energy budget. From
measurements of the global cloud cover over a period of 27 years it is deduced that the global mean
temperature is increasing with decreasing cloud cover [25]. However, it is not clear, if a lower cloud
cover is the consequence of the increasing temperature, or if the cloud cover is influenced and at least to
some degree controlled by some other mechanism, particularly solar activity. In the first case a strongly
amplifying temperature-induced cloud feedback would have to be considered, both for the climate sensitivity
and for a respective solar sensitivity, whereas in the other case the temperature-induced cloud effect
would disappear for both sensitivities and only a solar-induced cloud feedback would have to be included
due to the solar influence.
on the climate and solar sensitivity can be derived from model simulations, which additionally include
the solar effect and compare this with the measured temperature increase over the last century. These
simulations, considering both effects, show that the observed global warming of 0.74 °C [51] can only
satisfactorily be explained, when a temperature feedback on the clouds is completely excluded or only has
a minor influence. Otherwise the calculated warming would be significantly larger than observed, or the
thermally induced cloud feedback would have been overestimated. With a combination of temperature and
solar induced cloud feedback we deduce a CO2 climate sensitivity of CS = 0.6 °C and a solar sensitivity,
related to 0.1 % change of the solar constant, of SS = 0.5 °C. An increase in the solar activity of only 0.1
% over 100 years then contributes to a warming of 0.54 °C, and the 100 ppm increase of CO2 over this
period causes additional 0.2 °C in excellent agreement with the measured warming and cloud cover.
From our investigations, which are based on actual spectroscopic data and which consider all relevant
feedback processes as well as the solar influence, we can conclude that a CO2 climate sensitivity larger
than 1 °C seems quite improbable, whereas a value of 0.5 – 0.7 °C – depending on the considered solar anomaly
– fits well with all observations of a changing solar constant, the cloud cover and global temperature. A
climate sensitivity in agreement with the IPCC specifications (1.5 – 4.5 °C) would only be possible if
any solar influence could be completely excluded and only a CO2-induced thermal cloud feedback were
assumed, then yielding a value of 1.7 °C.
While the AOGCMs simulate the climate variations over some time period and, therefore, have to solve complex coupled nonlinear differential
equations with countless parameters, for tracing the climate sensitivity this is of no significance. We
calculate an equilibrium state and can average over larger local variations, for which a partitioning into
three climate zones is quite sufficient. In addition, a simple energy balance model, focussing on the main
physical processes, is much more transparent than any AOGCM and can help to better understand the
complex interrelations characterizing our climate system.
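As a quick arithmetic check on the numbers in the abstract, the standard logarithmic CO2 scaling ΔT = CS · ln(C/C0)/ln 2 can be applied directly. This is a back-of-envelope sketch, not the paper's layered calculation; only the 0.6 °C and 3.0 °C sensitivities and the roughly 280-to-380 ppm rise come from the text above.

```python
import math

def co2_warming(cs_per_doubling, c0_ppm, c_ppm):
    """Equilibrium warming for a CO2 change, assuming logarithmic forcing."""
    return cs_per_doubling * math.log(c_ppm / c0_ppm) / math.log(2.0)

# Harde's 0.6 C per doubling applied to the ~100 ppm rise of the last century:
dT = co2_warming(0.6, 280.0, 380.0)        # ~0.26 C, same ballpark as the 0.2 C quoted
# The IPCC AR4 mid-value of 3.0 C per doubling for the same rise:
dT_ipcc = co2_warming(3.0, 280.0, 380.0)   # ~1.3 C, well above the observed 0.74 C
```

The one-line formula slightly overshoots the abstract's 0.2 °C because it ignores the model's partitioning between CO2 and solar contributions, but it shows why a 0.6 °C sensitivity leaves most of the observed warming to be explained by something other than CO2.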
Your post on Hermann Harde’s work highlights a valuable analysis. He has shown that even if you adopt the Trenberth energy balance model (including the notion of the earth as a flat disk illuminated constantly by the sun at 1/4 power), empirical evidence gives a small fraction of the potential climate sensitivity claimed from increasing CO2.
I think the key inference is the saturation effect, “The line-by-line calculations for solar radiation from 0.1–8 mm (sw radiation) as well as for the terrestrial radiation from 3–100 mm (lw radiation) show, that due to the strong overlap of the CO2 and CH4 spectra with water vapour lines the influence of these gases significantly declines with increasing water vapour pressure, and that with increasing CO2-concentration well noticeable saturation effects are observed limiting substantially the impact of CO2 on global warming.”
As a conservationist and progressive and teacher, I’ve tried, mostly unsuccessfully, to educate my tribe in the science of climate science using, among other things, this elementary essay on climate sensitivity-
http://climatesensitivity.blogspot.com/
Well Doug, I notice you excerpted a section that has puzzled me.
“””… “The line-by-line calculations for solar radiation from 0.1–8 mm (sw radiation) as well as for the terrestrial radiation from 3–100 mm (lw radiation)…”””
Did they really integrate solar radiation from 100 nanometers all the way out to 8 millimeters, and IR from 3 microns out to 100 millimeters?
98% of solar radiant energy is contained between 250 nanometers and 4.0 microns wavelength. Any solar radiant energy beyond even 10 microns wouldn't be worth a tinker's damn.
And 98% of the mean surface LWIR lies between 5 microns and 80 microns, so once again, radiation beyond, say, 200 microns would be quite negligible.
Now 0.1 to 8.0 microns and 3.0 to 100 microns would both be very reasonable and generous spectrum widths to deal with.
Could this be just a typo; read "micron" for "mm"?
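The 98% figures above are easy to check by integrating the Planck curve numerically. The sketch below uses a plain log-spaced Riemann sum; the 5778 K solar and 288 K surface temperatures are standard textbook values, my assumption rather than anything from the paper or the comment.

```python
import math

H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck(lam_m, t):
    """Blackbody spectral radiance per unit wavelength (SI units)."""
    x = H * C / (lam_m * KB * t)
    if x > 700.0:          # avoid overflow deep in the exponential tail
        return 0.0
    return 2.0 * H * C ** 2 / lam_m ** 5 / math.expm1(x)

def band_fraction(t, lo_um, hi_um, lam_min_um=0.01, lam_max_um=1000.0, n=20000):
    """Fraction of total blackbody power emitted between lo_um and hi_um."""
    total = band = 0.0
    ratio = (lam_max_um / lam_min_um) ** (1.0 / n)
    lam = lam_min_um
    for _ in range(n):
        nxt = lam * ratio
        mid = math.sqrt(lam * nxt)                     # micrometres
        w = planck(mid * 1e-6, t) * (nxt - lam) * 1e-6
        total += w
        if lo_um <= mid <= hi_um:
            band += w
        lam = nxt
    return band / total

f_sun = band_fraction(5778.0, 0.25, 4.0)    # solar power between 0.25 and 4.0 microns
f_earth = band_fraction(288.0, 5.0, 80.0)   # surface LWIR between 5 and 80 microns
# both come out near the quoted 98%
```

Running the same function over the literal millimetre ranges captures almost none of the solar power, which supports the micron-typo reading.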
The text as presented above reads impressively; I can't say I can grasp the detail without pictures.
I’m suspicious of the mention of the word “equilibrium” in there. That’s news to me, and if he uses Trenberth’s energy fluxes, it would seem to imply a non-rotating earth, as would the assertion of equilibrium.
I’ve done a lot of computer modeling; both electronic circuits (SPICE like, and down to the bare metal (Si)) and also Optical (imaging and non-imaging designs), but in every case, I have only modeled the real actual physical elements or objects, that were in the real system; or would be in it, when manufactured to my design, and strictly obeying laws of physics; sometimes approximations (geometrical ray optics) and sometimes more physical (diffraction based), and I’ve never ever gotten a result, that wasn’t replicated in the final as made system.
So I don’t quite grasp the concept of doing modeling on a system, that you know a priori is totally different from the actual real system you wish to explain.
But I guess, I will try and wade through this paper. It is a shame if it has typos that totally change the numbers.
Well Doug,
There may be overlap (what is "strong" overlap?) of the absorption bands of the various GHGs, with water notably having plenty of those, but those bands are hordes of very fine lines, and there is not much likelihood of a lot of overlap of those narrow lines. So they are more likely to be additive.
I didn’t see any reference to the influence of ocean cycles. This likely means either the solar or CO2 (or both) values are too high.
Would someone weigh in on the graph? I don't get what this is showing. Isn't the wavelength way off? If the gray line is a blackbody curve, it's pretty cool, like the average atmospheric temp? That's certainly not the BB curve for light from the sun, which peaks at around 0.5 um.
The Planck curve in the top graph represents the emission from the surface of the Earth. It looks like it assumes a temperature of 300 K, which is normal. It peaks at about 10 microns. The graph is meant to show that radiation from the Earth can escape relatively unhindered through the 'atmospheric window', a wavelength band ranging from about 8 to 14 microns. Above 14 microns the radiation is blocked by the CO2 absorption band; below 8 microns it is severely attenuated by water vapour.
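Those numbers check out against Wien's displacement law; a two-line sanity check (the 2897.8 µm·K constant is the standard Wien value, not something from the post):

```python
WIEN_B = 2897.8  # Wien displacement constant, micrometre-kelvin

def peak_wavelength_um(t_kelvin):
    """Wavelength of peak blackbody emission, in micrometres."""
    return WIEN_B / t_kelvin

earth_peak = peak_wavelength_um(300.0)   # ~9.7 um, inside the 8-14 um window
sun_peak = peak_wavelength_um(5778.0)    # ~0.50 um, matching the solar peak quoted above
```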
Thanks Mike! I’m dense at 7AM.
Randy
In the model used for this paper, does the model make the usual prediction of a hot spot in the upper troposphere (found to be missing in reality)? If the model does not predict such a 'hot spot', that would certainly indicate a big improvement in modeling.
Versions of this work have been around since early 2011: Consider a Spherical Truncated Icosahedron
Another version and discussions from 2013: Hermann Harde: greenhouse effect 30% smaller than IPCC says.
which refers to this publication: Radiation and Heat Transfer in the Atmosphere: A Comprehensive Approach on a Molecular Basis, Hermann Harde, International Journal of Atmospheric Sciences, Volume 2013 (2013), Article ID 503727, 26 pages, http://dx.doi.org/10.1155/2013/503727
In the first two articles above there are links to additional discussions.
Yet another paper which justifies abandoning the IPCC model outputs as the basis for even discussing future climate let alone using them as a basis for climate policy.
The modelling approach is inherently of no value for predicting future temperature with any calculable certainty because of the difficulty of specifying the initial conditions of a sufficiently fine-grained spatio-temporal grid of a large number of variables with sufficient precision prior to multiple iterations. For a complete discussion of this see Essex: https://www.youtube.com/watch?v=hvhipLNeda4
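The initial-condition problem Essex describes can be illustrated with the simplest chaotic iteration there is. The logistic map below is only a toy stand-in for an iterated climate model, chosen to show how an initial-state error in the ninth decimal place swamps the forecast after a few dozen steps.

```python
def logistic_orbit(x0, r=3.9, n=60):
    """Iterate the chaotic logistic map x -> r*x*(1-x), starting from x0."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_orbit(0.4000000)
b = logistic_orbit(0.4000001)    # initial state perturbed by 1e-7

early_gap = abs(a[10] - b[10])   # still tiny after 10 iterations
late_gap = max(abs(x - y) for x, y in zip(a[40:], b[40:]))  # order-one divergence
```

No amount of parameter tuning repairs this; the divergence comes from the iteration itself, which is the point being made about fine-grained initial conditions.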
Models are often tuned by running them backwards against several decades of observation. This is much too short a period to correlate outputs with observation when the controlling natural quasi-periodicities of most interest are in the centennial and especially the key millennial range. Tuning to these longer periodicities is beyond any computing capacity when using reductionist models with a large number of variables, unless these long-wave natural periodicities are somehow built into the model structure ab initio.
In addition to the general problems of modeling complex systems, the particular IPCC models have glaringly obvious structural deficiencies, as seen in Fig. 2-20 from AR4 WG1 (this is not very different from Fig. 8-17 in the AR5 WG1 report).
The only natural forcing in both of the IPCC figures is TSI, and everything else is classed as anthropogenic. The deficiency of this model structure is immediately obvious. Under natural forcings should come such things as, for example, Milankovitch orbital cycles, lunar-related tidal effects on ocean currents, the earth's geomagnetic field strength and, most importantly on millennial and centennial time scales, all the solar activity data time series – e.g., solar magnetic field strength, TSI, SSNs, GCRs (effect on aerosols, clouds and albedo), CHs, MCEs, EUV variations, and associated ozone variations.
More and more people are realizing that the GCM’s are inherently meaningless.
It is well past time that the climate discussion moved past the consideration of these useless models to evaluating forecasts using a completely different approach based on the natural quasi-periodicities so obviously seen in the temperature and driver record.
Earth’s climate is the result of resonances and beats between various quasi-cyclic processes of varying wavelengths combined with endogenous secular earth processes such as, for example, plate tectonics. It is not possible to forecast the future unless we have a good understanding of the relation of the climate of the present time to the current phases of these different interacting natural quasi-periodicities which fall into two main categories.
a) The orbital long-wave Milankovitch eccentricity, obliquity and precessional cycles, which are modulated by
b) Solar “activity” cycles with possibly multi-millennial, millennial, centennial and decadal time scales.
The convolution of the a and b drivers is mediated through the great oceanic current and atmospheric pressure systems to produce the earth’s climate and weather.
After establishing where we are relative to the long wave periodicities to help forecast decadal and annual changes, we can then look at where earth is in time relative to the periodicities of the PDO, AMO and NAO and ENSO indices and based on past patterns make reasonable forecasts for future decadal periods.
For forecasts of the timing and amount of the probable coming cooling, based on the natural 1000-year and 60-year periodicities in the temperature record and using the 10Be and neutron count data as the best proxy for solar "activity", go to
http://climatesense-norpag.blogspot.com
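The superposition approach described above can be sketched in a few lines. The amplitudes, periods and phase years below are arbitrary illustration values of mine, not Dr. Page's fitted parameters; the point is only to show how a millennial and a 60-year cycle combine.

```python
import math

def cycle_anomaly(year, a_millennial=0.5, a_decadal=0.15,
                  millennial_peak=2000.0, decadal_peak=2005.0):
    """Toy superposition of a ~1000-year and a ~60-year temperature cycle, in C."""
    return (a_millennial * math.cos(2.0 * math.pi * (year - millennial_peak) / 1000.0)
            + a_decadal * math.cos(2.0 * math.pi * (year - decadal_peak) / 60.0))

# Near a millennial peak the toy curve is warm; half a millennial cycle later it is cold.
now = cycle_anomaly(2000.0)      # near both peaks: warm
trough = cycle_anomaly(2500.0)   # millennial trough: cold
```

With such a structure, a forecast is just a matter of locating the present phase of each cycle, which is exactly the program the comment proposes.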
Perhaps you should consider the "lag", and through that consider what forces what. Warmer water releases more CO2… Which is cause and which is effect? Perhaps they are both?
Dr. Page,
I have been to your site and find it interesting.
With regards to the climate I certainly don’t know the answers. There are many variables over many varying time scales. There is room for argument for the short term time frames and it is obvious that absolutely nothing is settled. One argument that I can’t buy is the one presented that CO2 is a major factor in the earth’s climate and negative at that. It would seem that 600 to 800 ppm could be beneficial overall.
I can see ECS taking some amount of time to stabilize with constant inputs. But I think TCS should be instantaneous, at the speed of light, or better, the speed of IR. A flashlight in a fog happens immediately; DWLR should respond instantly as the CO2 gets various levels of illumination in LW IR.
Even with this showing how much water vapor saturates at least the main 15-16u CO2 bands: when I measure the zenith (DWIR) with my IR thermometer when it's cool and dry, I'm measuring -40F to -60F, and in the winter it was colder than the thermometer reliably reads (-80F). I've been measuring 80F-100F differences in temps between concrete and zenith. If there isn't any water vapor, CO2 isn't going to keep us warm.
On the other hand, I've long wondered if CO2 helps boost a planet's temp from CO2 melting temps up to water melting temps, at which point a water cycle can warm it up even more.
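The concrete-versus-zenith gap reported above translates into a large radiative flux difference via the Stefan-Boltzmann law. A rough check, taking both emissivities as 1 (an idealization; an IR thermometer really reports a brightness temperature):

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def f_to_kelvin(deg_f):
    """Convert degrees Fahrenheit to kelvin."""
    return (deg_f - 32.0) * 5.0 / 9.0 + 273.15

def blackbody_flux(t_kelvin):
    """Hemispheric blackbody emission, W/m^2, assuming unit emissivity."""
    return SIGMA * t_kelvin ** 4

concrete = blackbody_flux(f_to_kelvin(100.0))   # warm concrete, ~530 W/m^2
cold_sky = blackbody_flux(f_to_kelvin(-40.0))   # clear dry zenith, ~168 W/m^2
net_loss = concrete - cold_sky                  # ~360 W/m^2 net upward longwave
```

A net loss of a few hundred W/m^2 under a dry sky is exactly why surfaces cool so fast on clear, dry nights, which is the commenter's observation.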
~0.5! I guess that makes me a lukewarmer. I expected we would settle on ~1.0 the way the estimates were moving. If solar variations can be ignored, then ~0.7. I guess the big question has to be: why, with all these physicists in the fray and everyone telling us "it's the physics", didn't they DO the physics?
My guess was zero ±1ish.
At the lowest levels the other factors are so significant as to make determining any effect from CO2… well, it’s impossible to distinguish from the unknowns over those time periods.
Dr Norman Page at October 14, 2014 at 6:24 am, just above, makes the point in detail (but with more certainty than I could muster).
Reduces CO2 to background noise. I have it, when it was warming, at about 3% of the total warming, so about 0.015 C. Other factors simply overwhelm that warming. I'm not too sure CO2 has any effect at all, other than maybe slowing the rate of fall of real temperatures by 3%. If CO2 actually worked the way the IPCC says it does, then temps are actually falling, which kind of proves that other mechanisms overwhelm CO2 in climate control.
Still waiting for Mosher to tell us why this can’t be true ….
It looks as though band saturation by water vapour was underestimated in earlier studies, and even the warming-induced evaporation (seen by alarmists as a positive feedback) would ensure further saturation, thus diminishing the potential of GHG warming by CO2.
Which has been obvious since AR3 squibbed and the Tropical Hotspot didn’t show up.
Not that I’m criticising your comment.
I’m merely asking how anyone was still able to think that warming induced evaporation could be a positive feedback without the Tropical Hotspot being easy to find?
No problem: now they can tell us how even such small increases in temperature will lead to 'climate doom', therefore we have to act now. It is a very rich and comfortable gravy train with lots of people on board, so it's going to take much more than this to derail it.
Oh oh!!! It’s not really looking too good for the IPCC. We were told its 2007 4th Assessment Report was the “gold standard in climate science”… the “settled science”… incontrovertible… based solely on peer reviewed science!
Well, Mother Nature has rendered that report obsolete. The report was all about rising temperature trends based on various rising CO2 emissions scenarios. The hotter temperatures supposedly would cause all those nasty extreme weather events. Nothing in the report about a flat temperature trend or possibly even slight cooling.
RSS has confirmed 18 years of no discernible warming. No warming. That’s big news… huge!
Since 2007, we’ve seen what… some 17 papers demonstrating climate sensitivity is significantly lower than that used in climate models.
So when is someone going to call this global warming alarmism for what it is … a massive deception… a scientific fraud?
Ok; for the (roughly) 2 millionth time here at WUWT:
Global warming alarmism is … a massive deception… a scientific fraud.
With 24 years of deductive reasoning under my belt as a chemist, I already knew that Gore’s claim about global warming didn’t make logical sense. http://chrisskates.com/novel-becomes-energy-crystal-ball/
These numbers agree with some of the studies that were produced years ago by Roy Spencer and Richard Lindzen.
In the meantime, Svalgaard continues to pretend that Svensmark’s research on cosmic rays doesn’t exist.
My take, for the last 8 years, has been that the number is between 0.5C and 1.2C for 2XCO2.
“We present an advanced two-layer climate model…”
Sorry, but I thought we were supposed to be skeptical of model results…
We are sceptical of model results. But this model uses known, measured inputs from a spectroscopy database, HITRAN-2008, and plugs them into a simple model. A simple model with a simple aim: to "calculate the influence of an increasing CO2-concentration and a varying solar activity on global warming."
It isn’t trying to predict the oceans and the weather systems.
From the introduction:
In other words, they try to do too much and fail to replicate anything, except through fiddling the parameters to tune the past. You've seen our scepticism of this in the past.
But they go on to state that their model is meant to look at climate sensitivity only – and it describes how it works so anyone can see the assumptions involved. The assumptions involved in solving for the movement of the oceans and atmosphere are so many and complex that they don’t test anything except the faith of the reader.
But here:
So you can see what they are testing.
“… I thought we were supposed to be skeptical of model results…”
Yes, it would be foolish not to be skeptical of all models, because they all tend to be biased from unavoidable assumptions, approximations and economies of scale.
Having said that, I would like to point out that it is impossible to measure anything without using a model of some kind.
Temperature? Temperature is only an abstract concept defined by physicists. So thermometers can't measure temperature directly, but must rely on a model, based on a variety of proxies like the thermal expansion of a column of mercury or the voltage of a thermocouple, all subject to errors of implementation and interpretation. All are severely limited in scale. The thermometer in your weather station would be useless for measuring temperatures inside a furnace (and vice versa).
But many of these so-called ‘thermometers’ are often useful, and sometimes seem to provide acceptably consistent readings over a narrow range of values in selected locations. Assuming, of course, that they are free from manufacturing defects, improper installation or misreading of values.
Yes, be very skeptical of _all_ models.
Models are essential to understanding the physical world we live in. The key is that to be useful the model results have to agree with experimental results and/or the physical world. So if the model correctly predicts observation then it is useful. If not, it is junk.
Okay. So, we are lowering the estimate for the rate of change of a dependent variable we can’t really measure. Yeah.
Letter submitted to the Raleigh (NC) News & Observer (Dec. 24, 2004) by S. Fred Singer:
Fred Singer wrote that 10 years ago. My money has always been on him being about right and the recent slew of papers on climate sensitivity all seem to be heading in one direction – down.
And just for those (like me) who prefer SI to Imperial units, 1.0 deg F is approximately 0.6 deg C.
[Is that 0.6 C before or after adjusting for the TOBS adjustments and station movements between the equator, Paris and the north pole since the meter was defined back in the French Revolution? 8<) .mod]
I thought I’d check your math so I asked a search box to tell me what 1.0 deg F is when expressed as Celsius.
The answer provided was -17.22222.
Perhaps you meant what 1.0 F degree is expressed as Celsius.
John, ThinkingScientist was talking not about the temperature 1.0 degree F, but what percent of a Celsius degree is a Fahrenheit degree. That is easily derived by noting that freezing in F is 32 degrees but 0 degrees C, & that boiling is 212 F but 100 C. Hence, there are 180 F degrees between these states of water, while only 100 C degrees. Thus, there are 1.8 F degrees per C degree.
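Both readings of "1.0 °F" above are right; they just answer different questions, one about a point on the scale and one about an interval. A short disambiguation:

```python
def fahrenheit_to_celsius(temp_f):
    """Convert a temperature reading (a point on the scale)."""
    return (temp_f - 32.0) * 5.0 / 9.0

def f_interval_to_c(delta_f):
    """Convert a temperature difference (an interval): 1.8 F degrees = 1 C degree."""
    return delta_f * 5.0 / 9.0

reading = fahrenheit_to_celsius(1.0)   # -17.22... C, the search-box answer
interval = f_interval_to_c(1.0)        # 0.555... C, i.e. "approximately 0.6 deg C"
```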
The 0.6C value is just about what the logical person would have guessed was the expected value. Given that ~1.1C is the non-feedback value, and in stable systems feedbacks are very highly likely to be net negative, an actual value less than 1.1C should have been expected. The surprising result would have been if it was greater than 1.1C and yet our climate system remains stable.
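The reasoning in this comment is the standard feedback-gain relation ΔT = ΔT0/(1 − f): a net-negative f shrinks the no-feedback ~1.1 °C, a net-positive f amplifies it. A minimal sketch; the f values are illustrative choices of mine, not taken from any paper.

```python
def equilibrium_sensitivity(no_feedback_dt, net_feedback):
    """Feedback-gain scaling dT = dT0 / (1 - f); f < 0 damps, 0 < f < 1 amplifies."""
    if net_feedback >= 1.0:
        raise ValueError("f >= 1 means a runaway response, not an equilibrium")
    return no_feedback_dt / (1.0 - net_feedback)

damped = equilibrium_sensitivity(1.1, -0.8)    # net-negative feedback: ~0.6 C
amplified = equilibrium_sensitivity(1.1, 0.6)  # IPCC-style net-positive: ~2.8 C
```

The comment's point is simply that a stable system argues for the first case, where the result sits below the 1.1 °C no-feedback value.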
I went and read the entire paper. On the plus side, Table 5 inputs are well founded, and using them in the absence of feedbacks produces a sensitivity of either 1.11 or 1.2 depending on assumptions. Those are the canonical values, and show that the GHG radiative transfer calculations by layer agree with other, much simpler calculations of the zero-feedback sensitivity to CO2 doubling.
But the lower feedback sensitivity results from two rather clear conceptual errors.
With respect to water vapor feedback, an incorrect saturation assumption is made that effectively negates any positive water vapor feedback. The error is twofold. First, even saturated layers still reradiate in all directions, so that some of all the absorbed wavelengths will reach the next higher layer, contrary to the explicit assumption made. Second, the number of saturated layers increases (the optical depth, or 'top of fog', rises). That is the classical no-feedback mistaken physics.
And cloud feedback is taken to be strongly negative. The derivation is wrong, since it is not linear as assumed, and the ISCCP observational value is for Cc 66%, not Cc zero, as used in the paper, for example in equation 77.
Lewis and Curry are quite persuasive. As mathematically intense as this model is, it is not persuasive.
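The saturation question Rud raises can at least be framed quantitatively. A single-pass Beer-Lambert estimate (which ignores exactly the layer re-emission he points to, so it overstates saturation) shows why adding absorber to an already opaque band gains little:

```python
import math

def absorbed_fraction(optical_depth):
    """Beer-Lambert single-pass absorption: 1 - exp(-tau). No re-emission included."""
    return 1.0 - math.exp(-optical_depth)

# An optically thin band responds almost linearly to more absorber;
# an optically thick band barely notices a doubling:
thin = absorbed_fraction(0.1)           # ~0.095
thick = absorbed_fraction(3.0)          # ~0.950
thick_doubled = absorbed_fraction(6.0)  # ~0.998: doubling tau adds only ~5 points
```

The debate above is about whether this single-pass picture is the right one; with re-emission included, each added layer raises the effective emission height, which is the correction Rud says the paper omits.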
On a demonstrably homeostatic, watery planet, net feedbacks should be assumed negative rather than positive. Ruling out net positive feedbacks from water vapor, clouds, etc, then the lab result of ~1.1 degree C for a doubling from 280 to 560 ppm at equilibrium ought to be considered the upper limit of potential warming.
I’m with Dr. Singer, quoted above, as estimating perhaps one degree F over GASTA in AD 1850 by AD 2100, assuming that earth reaches a concentration of 560 ppm by then. Much of that presumed warming must already have occurred transiently, given the logarithmic nature of the effect.
IOW, not only nothing to worry about, but indeed a good thing.
In addition they don’t seem to have considered LW from the top of cloud which seems a significant omission.
Rud Istvan – If you are up to date with these matters, what assumptions are made about the probability of collision of excited GH gases with atmospheric gas molecules before re-emission has had a chance to occur?
My question is (a) generally in GCMs and (b) in this paper.
Thanks in advance if you can answer.
I was initially concerned about the use of models, and I admit to not being in a position to evaluate their calculations or validity.
However, the fact they detail all the assumptions and parameters of the model and, further, highlight some of the limitations fills me with confidence.
Have any pro-cAGW papers done similar with their models?
According to this, if you reduce the CO2 level to zero, average temps drop only 2 deg C. That seems far too little. What water partial pressure does he propose when CO2 is at zero? Are there no changes in ice caps and albedo?
Or is it that as the CO2 drops, water vapor drops causing a faster drop in albedo via fewer clouds?
@Stephen Rasey.
You say:
According to this, if you reduce the CO2 level to zero, average temps drop only 2 deg C. That seems far too little.
————
Having become used to the IPCC's big-number climate play, a drop of only 2 deg C seems really far too little indeed. But looking at the climate data, the average temp drop from the Holocene Optimum (the warmest period of the Holocene) to the middle of the LIA (Little Ice Age) does not even reach a full 1C. Imagine a further 1C drop from that point… and it is not hard to picture the LIA becoming an ice age, provided that that drop of 2C lasts long enough.
A drop of around 2.2C to 2.4C would mean a significant cooling in the case of a long-term trend…
cheers
“Climate scientists” have tried to get rid of the Holocene Climatic Optimum as well as the Medieval Warm Period. The Optimum was at least a degree C warmer than now globally (probably more), & much more at high latitudes. The LIA was about a degree C colder than now, so the worldwide average range from HCO peak to LIA trough was at least two degrees C.
Thanks for the reply.
You could be right, yes.
2 deg C still remains a significant cooling even in the light of your response… besides, I am a bit confused why you would estimate the cooling from the Holocene Optimum peak to the LIA trough by taking into consideration the warming of the present… strange… the LIA happened before, and there is data directly showing the correlation without the need of the present… our present warming is not mature enough, in terms of data, to be ready to compare with long-term data. In the future this warming of the present that you consider will look different in the long-term data than it does now. Sorry, but this seems like a bit of acrobatics with the data…
But anyway, that does not necessarily mean that you are wrong… as far as I can tell, one of the reasons for the ridding off of the Holocene data is that there is not enough cooling in the long-term cooling trend, and there is a lot of warming in the long warming trend prior to the Holocene Optimum.
Your 2C cooling is a good enough one, and very pleasing to the "climate scientists" you mention.
Anyway, it is one thing that I would not bet on either way at the moment. :-)
Thanks milodonharlani
cheers
The paper makes a reasonable point: to get values within the IPCC-specified climate sensitivity range requires excluding any solar sensitivity influence, coupled with using an implausibly myopic view of only one aspect of cloud feedback.
John
Exactly. A fine summary statement, really nails the issue.
Still trying to see if anyone has noted that the EPA reported there had been no warming in the last 100 years at http://www.epa.gov/climatechange/indicators. This was quite an admission.
Don’t see them saying what you say. Their page on US temps says +1.4F/century since 1901.
I assume this model continues to ignore all the other “stuff” in the atmosphere which would also mean it is overestimating the warming. Dust, salt, pollen, smoke, bacteria, etc. all absorb radiation as well which must reduce the amount available for GHGs.