I’m happy to present this essay created from both sides of the aisle, courtesy of the two gentlemen below. Be sure to see the conclusion. I present their essay below with only a few small edits for spelling, format, and readability. Plus an image, a snapshot of global temperatures. – Anthony

By Zeke Hausfather and Steven Mosher
There are a variety of questions that people have about the calculation of a global temperature index, ranging from the selection of data and the adjustments made to that data, to the actual calculation of the average. For some there is even a question of whether the measure makes any sense at all. It is not possible to address all of these questions in one short piece, but some of them can be addressed and reasonably settled. In particular, we are in a position to answer the question about potential biases in the selection of data and in how that data is averaged.
To move the discussion on to the important matters of adjustments to data or, for example, UHI issues in the source data, it is important first to settle some answerable questions. Namely: do the averaging methods used by GISS, CRU and NCDC bias the result? There are a variety of methods for averaging spatial data; do the methods selected and implemented by the big three bias the result?
There has been a trend of late among climate bloggers on both sides of the divide to develop their own global temperature reconstructions. These have ranged from simple land reconstructions using GHCN data
(either v2.mean unadjusted data or v2.mean_adj data) to full land/ocean reconstructions and experiments with alternative datasets (GSOD, WDSSC, ISH).
Bloggers and researchers who have developed reconstructions so far this year include:
Zeke Hausfather
Steven Mosher
Chad
Tamino
Nick Stokes
Jeff Id and Roman M
And, just recently, the Muir Russell report
What is interesting is that the results from all these reconstructions are quite similar, despite differences in methodologies and source data. All are also quite comparable to the “big three” published global land temperature indices: NCDC, GISTemp, and CRUTEM.
[Fig 1]
The task of calculating global land temperatures is actually relatively simple, and the differences between reconstructions can be distilled down to a small number of choices:
1. Choose a land temperature series.
Ones analyzed so far include GHCN (raw and adjusted), WMSSC, GISS Step 0, ISH, GSOD, and USHCN (raw, time-of-observation adjusted, and F52 fully adjusted). Most reconstructions to date have chosen to focus on raw datasets, and all give similar results.
[Fig 2]
It’s worth noting that most of these datasets have some overlap. GHCN and WMSSC both include many (but not all) of the same stations. GISS Step 0 includes all GHCN stations in addition to USHCN stations and a selection of stations from Antarctica. ISH and GSOD have quite a bit of overlap, and include hourly/daily data from a number of GHCN stations (though they have many, many more station records than GHCN in the last 30 years).
2. Choosing a station combination method and a normalization method.
GHCN in particular contains a number of duplicate records (dups) and multiple station records (imods) associated with a single wmo_id. Records can be combined at a single location and/or grid cell and converted into anomalies through the Reference Station Method (RSM), the Common Anomalies Method (CAM), the First Differences Method (FDM), or the Least Squares Method (LSM) developed by Tamino and Roman M. Depending on the method chosen, you may be able to use more stations with short records, or end up discarding station records that do not have coverage in a chosen baseline period. Different reconstructions have mainly made use of CAM (Zeke, Mosher, NCDC) or LSM (Chad, Jeff Id/Roman M, Nick Stokes, Tamino). The choice between the two does not appear to have a significant effect on results, though more work could be done using the same model and varying only the combination method.
[Fig 3]
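For readers who want to see the arithmetic, here is a minimal sketch of the Common Anomalies Method in Python. The function name and the toy station record are ours, purely for illustration; real code would read actual GHCN station files and handle missing months far more carefully.

```python
# Minimal sketch of the Common Anomalies Method (CAM), with toy data.
# A station is converted to anomalies against its own monthly means over a
# fixed baseline; stations with too little baseline coverage are discarded.

def cam_anomalies(station_temps, baseline=(1961, 1990), min_years=20):
    """station_temps: {year: [12 monthly mean temps, None where missing]}.
    Returns {year: [12 monthly anomalies]} or None if the station lacks
    enough baseline coverage to define a climatology."""
    climatology = []
    for month in range(12):
        vals = [temps[month] for year, temps in station_temps.items()
                if baseline[0] <= year <= baseline[1] and temps[month] is not None]
        if len(vals) < min_years:
            return None                      # insufficient baseline coverage
        climatology.append(sum(vals) / len(vals))

    return {year: [t - climatology[m] if t is not None else None
                   for m, t in enumerate(temps)]
            for year, temps in station_temps.items()}

# Toy usage: a station that runs 0.5 C above its own baseline mean after 1990.
toy = {y: [10.0 + 0.5 * (y >= 1991)] * 12 for y in range(1961, 2011)}
print(cam_anomalies(toy)[2000][0])           # 0.5
```

Which method you pick mostly changes how short records and baseline gaps are handled; CAM simply drops stations that cannot define their own climatology.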
3. Choosing an anomaly period.
The choice of the anomaly period is particularly important for reconstructions using CAM, as it determines the number of usable records. An anomaly period that is too short can also produce odd behavior in the anomalies, but in general the choice makes little difference to the results. In the figure that follows, Mosher shows the difference between picking an anomaly period as CRU does, 1961-1990, and picking the anomaly period that maximizes the number of monthly reports in a 30 year window, which turns out to be 1953-1982 (Mosher). No other 30 year period in GHCN has more station reports. This refinement, however, has no appreciable impact.
[Fig 4]
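A rough sketch of how one might search for the 30 year window with the most monthly reports follows; the station-month counts below are invented for illustration, whereas real code would tally them from the GHCN v2.mean file.

```python
# Sketch: find the 30-year window with the most monthly station reports.
# `reports` maps year -> number of station-months reported that year;
# these counts are invented, purely to show the search.

reports = {year: 4000 + 10 * (year - 1900) for year in range(1900, 2011)}

def best_window(reports, length=30):
    years = sorted(reports)
    scored = [(sum(reports.get(y, 0) for y in range(start, start + length)), start)
              for start in years if start + length - 1 <= years[-1]]
    total, start = max(scored)
    return start, start + length - 1, total

print(best_window(reports))   # with this toy inventory the latest window wins
```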
4. Gridding methods.
Most global reconstructions use 5×5 grid cells to ensure good spatial coverage of the globe. GISTemp uses a rather different method of equal-area grid cells. However, the choice between the two methods does not seem to make a large difference, as GISTemp’s land record can be reasonably well replicated using 5×5 grid cells. Smaller grid cells can improve regional anomalies, but will often introduce spatial bias into the results, as there will be large missing areas during periods or in locations where station coverage is limited. For the most part, the choice is not that important, unless you choose extremely large or small grid cells. In the figure that follows, Mosher shows that selecting a smaller grid does not impact the global average or the trend over time. In his implementation there is no averaging or extrapolation over missing grid cells: all the stations within a grid cell are averaged and then the entire globe is averaged. Missing cells are not imputed with any values.
[Fig 5]
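To illustrate the gridding step, here is a sketch (not any group’s actual code): station anomalies for one month are binned into 5×5 degree cells, each cell is averaged, and the cells are combined with a cos(latitude) area weight. As in the implementation described above, empty cells are simply skipped rather than imputed. The station coordinates and anomalies are made up.

```python
import math
from collections import defaultdict

def grid_average(stations, cell=5.0):
    """stations: list of (lat, lon, anomaly) for one month (toy values here).
    Bin into cell x cell boxes, average within each box, then area-weight."""
    cells = defaultdict(list)
    for lat, lon, anom in stations:
        key = (math.floor(lat / cell), math.floor(lon / cell))
        cells[key].append(anom)

    num = den = 0.0
    for (ilat, _), vals in cells.items():
        lat_center = (ilat + 0.5) * cell
        w = math.cos(math.radians(lat_center))   # area weight by latitude
        num += w * sum(vals) / len(vals)
        den += w
    return num / den if den else float("nan")    # empty cells are not imputed

# Toy month: two nearby Scandinavian stations share a cell; one tropical station.
print(grid_average([(60.2, 10.1, 1.2), (61.0, 11.5, 0.8), (2.0, 30.0, 0.3)]))
```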
5. Using a land mask.
Some reconstructions (Chad, Mosh, Zeke, NCDC) use a land mask to weight each grid cell by its respective land area. The land mask determines how much of a given cell (say 5×5) is actually land. A cell on a coast, thus, could have only a portion of land in it. The land mask corrects for this. The percent of land in a cell is constructed from a 1 km by 1 km dataset. The net effect of land masking is to increase the trend, especially in the last decade. This factor is the main reason why recent reconstructions by Jeff Id/Roman M and Nick Stokes are a bit lower than those by Chad, Mosh, and Zeke.
[Fig 6]
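Continuing the sketch above, the land mask simply scales each cell’s weight by its land fraction, so a half-ocean coastal cell counts for correspondingly less land area. The fractions and anomalies below are invented, not taken from the 1 km dataset mentioned in the text.

```python
import math

def masked_land_average(cells):
    """cells: list of (lat_center, land_fraction, anomaly), toy values.
    Weight each cell by cos(latitude) times the fraction of the cell that is land."""
    num = den = 0.0
    for lat, land_frac, anom in cells:
        w = math.cos(math.radians(lat)) * land_frac
        num += w * anom
        den += w
    return num / den if den else float("nan")

# Toy example: an inland cell (all land) and a coastal cell that is 40% land.
print(masked_land_average([(62.5, 1.0, 1.0), (2.5, 0.4, 0.3)]))
```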
6. Zonal weighting.
Some reconstructions (GISTemp, CRUTEM) do not simply calculate the land anomaly as the area-weighted average of all grid cells covered. Rather, they calculate anomalies for different regions of the globe (each hemisphere for CRUTEM; 90°N to 23.6°N, 23.6°N to 23.6°S, and 23.6°S to 90°S for GISTemp) and create a global land temp as the weighted average of each zone (weightings of 0.3, 0.4, and 0.3, respectively, for GISTemp; 0.68 × NH + 0.32 × SH for CRUTEM). In both cases, this zonal weighting results in a lower land temp record, as it gives a larger weight to the slower warming Southern Hemisphere.
[Fig 7]
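The zonal step is just a weighted sum of zone averages. Here is a sketch using the weights quoted above; the zone anomaly values passed in are invented for illustration.

```python
# Sketch of zonal weighting with the weights quoted in the text.
# The zone anomalies passed in below are invented numbers.

def gistemp_style(north, tropics, south):
    # 90N-23.6N, 23.6N-23.6S, 23.6S-90S with weights 0.3, 0.4, 0.3
    return 0.3 * north + 0.4 * tropics + 0.3 * south

def crutem_style(nh, sh):
    # Hemispheric weighting: 0.68 * NH + 0.32 * SH
    return 0.68 * nh + 0.32 * sh

print(gistemp_style(1.1, 0.6, 0.4))   # 0.69
print(crutem_style(0.9, 0.5))         # 0.772
```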
These steps will get you a reasonably good global land record. For more technical details, look at any of the many different models that have been publicly released, for example:
http://noconsensus.wordpress.com/2010/03/25/thermal-hammer-part-deux/
http://residualanalysis.blogspot.com/2010/03/ghcn-processor-11.html
http://rankexploits.com/musings/2010/a-simple-model-for-spatially-weighted-temp-analysis/
http://drop.io/treesfortheforest
http://moyhu.blogspot.com/2010/04/v14-with-maps-conjugate-gradients.html
7. Adding in ocean temperatures.
The major decisions involved in turning a land reconstruction into a land/ocean reconstruction are choosing an SST series (HadSST2, HadISST/Reynolds, and ERSST have been explored so far: http://rankexploits.com/musings/2010/replication/), gridding and anomalizing the series chosen, and creating a combined land-ocean temp record as a weighted combination of the two. This is generally done by: global temp = 0.708 × ocean temp + 0.292 × land temp.
[Fig 8]
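The blend itself is a one-liner; a sketch using the area fractions given above, with invented anomaly values:

```python
# Sketch of the land/ocean blend using the area fractions given in the text
# (0.708 ocean, 0.292 land). Input anomalies are invented.

def land_ocean(land_anom, ocean_anom):
    return 0.708 * ocean_anom + 0.292 * land_anom

print(land_ocean(land_anom=1.0, ocean_anom=0.5))   # 0.646
```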
8. Interpolation.
Most reconstructions only cover 5×5 grid cells that have one or more stations reporting in any given month. This means that any areas without station coverage in a given month are implicitly assumed to have the global mean anomaly. This is arguably problematic, as high-latitude regions tend to have the poorest coverage and are generally warming faster than the global average.
GISTemp takes a somewhat different approach, assigning a temperature anomaly to all missing grid boxes located within 1200 km of one or more stations that do have defined temperature anomalies. They rationalize this based on the fact that “temperature anomaly patterns tend to be large scale, especially at middle and high latitudes.” Because GISTemp excludes SST readings from areas with sea ice cover, this leads to the extrapolation of land anomalies to ocean areas, particularly in the Arctic. The net effect of interpolation on the resulting GISTemp record is small but not insignificant, particularly in recent years. Indeed, the effect of interpolation is the main reason why GISTemp shows somewhat different trends from HadCRUT and NCDC over the past decade.
[Fig 9]
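To make the interpolation step concrete, here is a rough sketch of filling an empty cell from stations within 1200 km using a linear distance taper. This is not GISTemp’s actual code; the helper names, the taper, and the toy Arctic stations are ours, purely to illustrate the idea described above.

```python
import math

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in km."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = p2 - p1
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def infill(cell_lat, cell_lon, stations, radius_km=1200.0):
    """Estimate a cell anomaly from stations within radius_km,
    weighting each by (1 - distance/radius). Toy illustration only."""
    num = den = 0.0
    for lat, lon, anom in stations:
        d = haversine_km(cell_lat, cell_lon, lat, lon)
        if d < radius_km:
            w = 1.0 - d / radius_km
            num += w * anom
            den += w
    return num / den if den else None    # None: the cell stays missing

# Toy example: an empty Arctic cell with two stations several hundred km away.
stations = [(78.0, 15.0, 2.0), (72.0, 25.0, 1.5)]
print(infill(75.0, 40.0, stations))
```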
9. Conclusion
As noted above, there are many questions about the calculation of a global temperature index. However, some of those questions can be fairly answered, and have been fairly answered, by a variety of experienced citizen researchers from all sides of the debate. The approaches used by GISS and CRU and NCDC do not bias the result in any way that would erase the warming we have seen since 1880. To be sure, there are minor differences that depend upon the exact choices one makes (choices of ocean data sets, land data sets, rules for including stations, rules for gridding, area weighting approaches), but all of these differences are minor when compared to the warming we see.
That suggests a turn in the discussion to the matters which have not been as thoroughly investigated by independent citizen researchers on all sides:
a turn to the question of data adjustments, to the question of metadata accuracy, and finally to the question of UHI. Now, however, the community on all sides of the debate has a set of tools to address these questions.









Steven Mosher says July 14, 2010 at 1:00 pm: “If you want to blame somebody for the advances in climate science (seeing the role of CO2) blame the Air Force. It’s data and code generated initially for them. Go figure.”
We presume the US military has its own set of weather stations, like at each missile silo site; and that these just might be fairly free of the need for adjustment, being somewhat fixed in position, as the USAF airports might also be.
If the US military has its own climate set, it has potential value. You should ask them for it if they have not disclosed it. Go figure. I’m not a US citizen, they will not tell me. I just have to live with the USA stories that have been coming non-stop since Timothy Leary took LSD in California and Steven Schneider demonised global cooling.
And the Indian Ocean? What’s it doing these days? Seems Ocean Temp (and Tides) on the rise.
http://nsf.gov/news/news_summ.jsp?cntn_id=117322&org=NSF
“Indian Ocean Sea-Level Rise Threatens Coastal Areas: Rise is especially high along coastlines of Bay of Bengal and Arabian Sea, as well as Sri Lanka, Sumatra and Java”
Regardless of what data sets people manipulate, or how they report their results as far as temperature anomalies go; the simple fact remains that the IPCC; as in InterGOVERNMENTAL Panel on Climate Change; advises these governments, as to what the “Mean Global Surface Temperature” of planet earth is; and what it is projected to be at some future time; seemingly always 100 years into the future.
So the public perception of all of this is that what is being reported, and projected/forecasted, IS Global Temperature; not global anomaly.
And they widely publicise the “Climate Sensitivity” as being the increase in Mean Global Temperature, for ANY doubling of atmospheric CO2; thereby proclaiming a logarithmic relationship.
Now presumably, the Physical Origin of such a CO2 based relationship, has to be the surface emittance of LWIR radiation; that being the initial source of the energy which the CO2 (or other GHG) is supposed to capture, creating warming.
So taking the Temperature range of the colorful global map, at the top of the present paper; we have extremes of -81 to +47 deg C. This is far from the most extreme range; but it is good enough to illustrate some issues.
Taking just three Temperatures; -81, +15, and +47 deg C for the min, max and global average, we have 192 K, 288 K, and 320 K Temperatures. The SB BB radiation limits for these Temperatures, then have factors of 0.1975, 1.000, and 1.524 respectively, and taking 390 W/m^2 for the global average, we would have; 77.0, 390.0, and 594 W/m^2.
So these are the upper bounds for what surface emissions might be, and as we can see the coldest regions are down by a factor of five, while the hotter regions are at least 1.5 times the global average; in terms of W/m^2 surface LWIR emissions.
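For anyone who wants to check those numbers, here is a quick sketch of the Stefan-Boltzmann scaling the commenter is using: fluxes scale as T^4 relative to a 288 K / 390 W/m^2 reference (using the commenter’s rounding of 273 for the Celsius-to-Kelvin conversion).

```python
# Quick check of the T^4 scaling quoted above, relative to 288 K and 390 W/m^2.
# Uses 273 (the commenter's rounding) for the C-to-K conversion.

REF_T, REF_FLUX = 288.0, 390.0

for label, t_c in [("min", -81.0), ("mean", 15.0), ("max", 47.0)]:
    t_k = t_c + 273.0
    factor = (t_k / REF_T) ** 4
    print(f"{label}: {t_k:.0f} K, factor {factor:.4f}, {factor * REF_FLUX:.0f} W/m^2")
```

This reproduces the 0.1975, 1.000, and 1.524 factors and the 77, 390, and 594 W/m^2 figures quoted above.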
These are the numbers that CO2 has to deal with in the way of energy to capture; so already we have a strong temperature bias on the fundamental driving source that is supposed to power the CO2 warming engine; that is claimed to result in a logarithmic rise in Temperature with CO2.
Now suppose we have some trace impurity (could be CO2) in the atmosphere, such that a given thickness of atmosphere (maybe a cm) absorbs say 1E-6 of some incident radiation; so the transmission through that cm is 0.999999 of what came in. The next cm of air is also supposed to transmit 0.999999, so the total transmission for the 2 cm layer should be 0.999999^2, which is 0.999998000001, near enough to 0.999998. We conclude that three cm of air will transmit 0.999997 of the incident radiation.
Clearly the light is being lost (absorbed) linearly with thickness, for all practical purposes. And we can deduce that if instead we keep the thickness constant, and raise the CO2 amount by 2x or 3x, that likewise the absorption should be linear with CO2 abundance; but we note those little round-off errors which we discarded, which show it is not exactly linear, but does in fact fit a behavior of the form t = exp(-alpha·s), where alpha is some absorption coefficient; and of course we know that that exponential decay formula is in fact nearly linear with very small arguments.
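The form the commenter is describing is just Beer-Lambert attenuation. A small sketch follows (the absorption coefficient is the illustrative 1E-6 per cm from the comment) showing that exp(-alpha·s) is indistinguishable from linear loss at tiny optical depths, and only departs from it when the path gets long:

```python
import math

# Beer-Lambert attenuation: transmission t = exp(-alpha * s).
# alpha is the illustrative 1e-6 per cm from the comment above.

alpha = 1e-6
for n_layers in (1, 2, 3, 1_000_000):
    s = n_layers * 1.0                    # path length in cm
    exact = math.exp(-alpha * s)          # exponential (Beer-Lambert) form
    linear = 1.0 - alpha * s              # first-order, "linear" approximation
    print(n_layers, f"{exact:.9f}", f"{linear:.9f}")
```

For one, two, and three layers the two columns agree to many decimal places; only at very large optical depth (the last row) do they diverge, which is the point about the exponential being nearly linear for small arguments.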
This is the standard form of assumption of ordinary Optical absorption; but there is a basic assumption that we have overlooked; and that is the common presumption that the absorbed light disappears forever; never to see the light of day again.
This is often true in a lot of optical materials; where the absorbed energy, ultimately is converted to heat and warms the absorbing glass or whatever; that heat to be ultimately conducted out of the sample.
Some common materials have a different idea in mind. An example would be certain kinds of Optical Glass color filters, that have very sharp absorption cutoffs at specified wavelengths.
Take Schott glass RG645 for example. A 3 mm thick sample of this wine red glass is supposed to have 50% internal transmission at 645 nm wavelength (you also have to allow for perhaps 4-5% Fresnel reflection loss at each surface).
At 600 nm wavelength the transmission of this glass may be less than 0.01%; these are very sharp cutoff optical materials.
You can prove this sort of high attenuation with a tunable laser and a monochromator and wide band sensor. You tune the laser to your desired wavelength, say 600 nm; then locate that line in the monochromator, and measure the signal drop when you insert the filter glass; and you do measure these extreme values of signal loss at quite nearby wavelengths.
If you leave out the monochromator, and remeasure the transmitted signal; you find you don’t get anywhere close to 10^-4 extinction; the “transmission” is very much higher.
What has happened, is that this series of glasses, are quite fluorescent, or luminescent; and the absorbed laser energy simply stimulates some longer wavelength emission from the glass; so much of the energy passes right on through; but with a shift in wavelength. You can add another glass; say RG665, or RG695; and the same thing will happen; the energy is largely simply red shifted and re-emitted at a longer wavelength with much less energy loss than the data sheet says.
Well now we are getting into some familiar territory; because this is about what GHGs are doing. The CO2 absorbs surface emitted LWIR in the 13.5 to 16.5 micron range; probably in a number of narrower closely spaced lines; but that energy is usually distributed to other atmospheric molecules in collisions; before the CO2 has a chance to re-emit the absorbed photon.
This warms the ordinary atmosphere; and that in turn radiates a thermal continuum spectrum.
The problem in this case, is that the spectrum emitted by the warmed atmosphere is not too different from that which the CO2 absorbed in the first place; and it is quite likely that some of that radiation will in turn be recaptured by some other CO2 molecule repeating the process.
As a result, we do not have a situation, where the absorbed energy simply vanishes from the scene; so the classical ordinary Optical absorption rules don’t work.
So what happens if say a metre of air with CO2 in it should capture 90 or 99 % of the incoming radiation; so there is little to transmit to the next layer of air; and we now double the CO2. Well we’ve already caught most of the energy; so what else could happen.
Well not so fast. The absorption/re-emission process goes on unabated; and all that will happen, is that that same percentage of the radiation will be captured in a thinner air layer; which will in turn presumably heat a bit more since the mass (of air) is now lower, and then that air layer will re-emit in all directions to pass the energy on.
This is why arguments of CO2 “saturation” are not too convincing.
That is a process which works in the classical optical absorption case; where the light is captured forever. The capture/re-emission cascade does complicate the issue because the atmospheric emitted LWIR radiation should be emitted isotropically, so it spreads around more, and doesn’t all go either up or down.
But the point is that more CO2 will continue to absorb the LWIR; it just takes a thinner layer to do so, and the cascade string gets longer, before the radiation ends up either back on the ground; or escaped to space.
But I don’t see anything in this process, that leads to a simple logarithmic temperature response; at least theoretically.
The entire cascade process is something I have no intention of trying to work out; but I presume that it is doable with some supercomputer program; and has likely been done anyway.
But I think it is important to realize that the CO2 or other GHG absorption process, does not conform to the simple classical Optical absorption rules, since the absorbed energy is actually re-radiated, in the same general spectral region, as the original surface sourced radiation. It is a mistake to think of the CO2 band as being “saturated” at some CO2 level.
Luckily; we have the water to take care of all of that anyway.
Geoff:
“We presume the US military has its own set of weather stations, like at each missile silo site; and that these just might be fairly free of the need for adjustment, being somewhat fixed in position, as the USAF airports might also be.
If the US military has its own climate set, it has potential value. You should ask them for it if they have not disclosed it. ”
I think you missed the point.
The role the Air Force played isn’t generally talked about in the climate wars.
1. Creation of a database HITRAN.
http://en.wikipedia.org/wiki/HITRAN
2. Study of the stratosphere, which is crucial in understanding how CO2 operates throughout the ENTIRE column (started in the ’50s, I believe).
here is some recent work– sensor related
http://en.wikipedia.org/wiki/Stratospheric_Observatory_for_Infrared_Astronomy
3. Development of RTE
http://spiedl.aip.org/getabs/servlet/GetabsServlet?prog=normal&id=PSISDG002309000001000170000001&idtype=cvips&gifs=yes&ref=no
here is the simple point. Those of us who worked building weapons and systems for the Air Force had a keen interest in how radiation transfers through the atmosphere.
We needed to understand how aircraft at say 50,000 feet would be seen by different sensors on the ground. What did they look like in, say, X band? Or IR, or you name it. Understanding how radiation transferred from the highest altitude to the ground was key in the development of Stealth. And the flipside is true as well:
If we wanted to spot IR targets on the ground, we had to understand how that radiation would come up through the atmosphere. Those problems are well understood and not debated (seriously) by any person who ever had to build a system that people’s lives depended on. Period.
The tools we used were HITRAN, a database of molecules and their behavior, and (in my case) MODTRAN, a computer model that simulated how radiation would transfer through the atmosphere. Back in the day, MODTRAN was classified.
it’s now public (since 2000)
http://www.kirtland.af.mil/library/factsheets/factsheet.asp?id=7915
MODTRAN and the higher fidelity RTE (line by line) models are now everyday engineering tools based in solid physics. If they didn’t work, if they were not accurate, we could not build stealth aircraft, or IR missiles, or sensors that performed as expected. So when people tell me that CO2 has no effect on radiation escaping the earth, I have to say they don’t know what they are talking about.
“Why all the praise? Would be much more thorough if Min/Max/Avg trends were each calculated and displayed as that is where the real mystery lies.”
1. Two years ago I started with daily data, then I looked at hourly, convinced that the “mystery” is there. It’s not. There is no mystery. The world is warmer now than it was back in 1850. Not much of a mystery. How much? That’s a matter for calculation.
” C’mon, if we are going to dig into it, then let’s dig into it. Min/max/2 anomalies hide important data IMO. Please add to the reconstructions, maybe delineated by season, Spring/Summer/Fall/Winter, separate min and max and avg anomalies, for both Urban and Rural, and now we are talking. Otherwise, it’s all integrated noise…Until you take that step, you are just validating the integrated mess that we currently have to swallow. The mystery is in the mins no? and the urban vs rural? and the adjustments…dissect please…then put it back together.”
grab a compiler and chip in. plenty of work for you to do
tonyb,
Let me make it CLEAR.
The QUALITY of the data is a SEPARATE QUESTION. SEPARATE….QUESTION.
SEPARATE QUESTION.SEPARATE QUESTION.SEPARATE QUESTION.SEPARATE QUESTION.SEPARATE QUESTION
I build a calculator. to test it I ask two women their age and weight.
Woman A says: I’m 29 and i weigh 105 lbs.
Woman B says: I’m 39 and I weigh 129 lbs.
I’m testing a calculator. I’m not interested (YET) in whether or not these women are lying or telling the truth. i’m interested in whether the calculator gives the right answer ASSUMING the data is correct.
I’m sorry but I can’t put it ANY simpler.
Maybe I’ll feed the beast some test data, just to make it clear
I cleaned the kitchen and both bathrooms.
so there!
“”” Steven Mosher says:
July 15, 2010 at 3:28 pm
Geoff:
“We presume the US military has its own set of weather stations, like at each missile silo site; and that these just might be fairly free of the need for adjustment, being somewhat fixed in position, as the USAF airports might also be. “””
Steven; How true, that a lot of the data we have access to was obtained by the Air Force for their purposes.
Back in the 40s-50s they were doing high altitude studies, partly to find out what hazards future high altitude pilots would face.
These studies turned up early evidence, that the apparent color temperature of the sun, varied seasonally, and randomly; and they deduced that the difference was in the near UV region of the spectrum; where the sun deviated somewhat from the 6kK BB spectrum.
Those early spectral variations were clearly an indication that Ozone holes existed back then; before anybody ever thought about that.
“”” George E. Smith says:
July 15, 2010 at 3:14 pm
Well not so fast. The absorption/re-emission process goes on unabated; and all that will happen, is that that same percentage of the radiation will be captured in a thinner air layer; which will in turn presumably heat a bit more since the mass (of air) is now lower, and then that air layer will re-emit in all directions to pass the energy on.
This is why arguments of CO2 “saturation” are not too convincing.
That is a process which works in the classical optical absorption case; where the light is captured forever. The capture/re-emission cascade does complicate the issue because the atmospheric emitted LWIR radiation should be emitted isotropically, so it spreads around more, and doesn’t all go either up or down.
But the point is that more CO2 will continue to absorb the LWIR; it just takes a thinner layer to do so, and the cascade string gets longer, before the radiation ends up either back on the ground; or escaped to space.
But I don’t see anything in this process, that leads to a simple logarithmic temperature response; at least theoretically.
The entire cascade process is something I have no intention of trying to work out; but I presume that it is doable with some supercomputer program; and has likely been done anyway.
But I think it is important to realize that the CO2 or other GHG absorption process, does not conform to the simple classical Optical absorption rules, since the absorbed energy is actually re-radiated, in the same general spectral region, as the original surface sourced radiation. It is a mistake to think of the CO2 band as being “saturated” at some CO2 level.
Luckily; we have the water to take care of all of that anyway. “””
Well, George, I am not good at molecular level physics :), but I got a good precis from Tom Vonk a year or so ago, of what happens when a CO2 absorbs an infrared photon. In a nutshell, it has not enough time to re-radiate it, the energy cascades down into rotational and vibrational states, with much smaller frequencies of emission, and those states eventually thermalize the N2 O2 etc hitting the CO2. There is one little CO2 and 150 or so others around it. The result is thermalization, not re-emittance and re-absorption, from what I understood.
Steven Mosher
Obviously BOTH your women are lying-one about her weight, the other her age, and I’m not convinced that you cleaned BOTH bathrooms let alone the kitchen. 🙂
Yeah, why don’t you feed the beast some data. THEN we can start to examine the real truth about weight, age AND cleaning.
Tonyb
“”” anna v says:
July 15, 2010 at 9:19 pm
George E. Smith says:
July 15, 2010 at 3:14 pm
………………………..
Well, George, I am not good at molecular level physics :), but I got a good precis from Tom Vonk a year or so ago, of what happens when a CO2 absorbs an infrared photon. In a nutshell, it has not enough time to re-radiate it, the energy cascades down into rotational and vibrational states, with much smaller frequencies of emission, and those states eventually thermalize the N2 O2 etc hitting the CO2. There is one little CO2 and 150 or so others around it. The result is thermalization, not re-emittance and re-absorption, from what I understood. “””
Well Anna, I believe you have it exactly right there; we don’t differ on that. Throughout the lower reaches of the atmosphere, where all the action takes place anyway, there’s not much chance for spontaneous re-emission from the excited CO2 molecule. Phil says there is, at stratospheric levels where mean free paths get long enough for that to happen; and that makes sense to me.
But lower down, the collisions with ordinary air molecules distribute that extra energy, and result in heating of the air, as Tom evidently explained it to you. I’m sure that is the correct picture.
But the air and surface temperatures are not so different; so the air itself, ultimately radiates a thermal spectrum; nobody ever told the sun, or the Argon or whatever in ordinary incandescent lamps, that gases are not supposed to emit black body like radiation. So far as I know they do that quite well; it’s just that at 288 K, the 10.1 micron peaked spectrum that the air radiates is not perceived by human senses as “heat”.
So my point was, and is, that the energy that is captured by CO2 from surface emitted LWIR is not too different from what will be emitted from the (heated) atmosphere itself; so the GHG captured energy does not stay captured as happens in classical Optical absorption in solids; and that new emission from the air itself is a perfectly good target for other GHG molecules to go after for subsequent captures. That is what I mean by “cascades”: a continuous capture and thermalization, followed by further emission and further recapture of not too dissimilar LWIR radiation.
So the normal exponential transmission decay due to ordinary optical absorption does not describe what really happens in the atmosphere. One could almost say that the CO2 is acting as a catalyst, to capture energy and deliver it to the atmosphere which will ultimately re-emit a similar radiation spectrum.
So my view of the process is not different from what Tom evidently explained to you.
Well tonyb here is the thing.
you believe, on no evidence, that I did not clean both bathrooms and the kitchen.
Now, I know in a way in which you will never know that I did in fact clean. yet, on no evidence available to you, you doubt. Perhaps you should be a complete skeptic and doubt your doubt.
I could of course have Charles vouch for me, but you might doubt his testimony.
I could supply before and after pictures and you could doubt their provenance and whether I in fact did the cleaning. I could produce a movie, but George Lucas has shown us what can be done with that. Simply, you could remain unconvinced that the sun came up today. At some point, to move forward, people have to examine the evidence for themselves, or describe beforehand what they will accept as evidence.
Steven Mosher
I certainly would never doubt CTM, after all he could delete my posts. There’s a time for pragmatism and a time for scepticism 🙂
Tonyb
Mosh did clean the kitchen and both bathrooms, just a short amount of time after I purchased the Swiffer refills and cleaning solution.
He still has a bunch of stuff by the front door to take down to Goodwill though.
CTM
So Steve presented his data selectively then? Hmmm. 🙂
TonyB
George E. Smith says:
“CO2 absorbs surface emitted LWIR in the 13.5 to 16.5 micron range; probably in a number of narrower closely spaced lines; but that energy is usually distributed to other atmospheric molecules in collisions; before the CO2 has a chance to re-emit the absorbed photon. This warms the ordinary atmosphere; and that in turn radiates a thermal continuum spectrum.”
You are correct that the radiationally “inert” constituents of the atmosphere are the recipients of energy from GHGs. They, rather than the GHGs themselves, provide what little thermal retentivity the atmosphere possesses. But band saturation simply implies that there is no more energy at that wavenumber available, which in the case of CO2 is a few tens of meters above the ground. Every CO2 molecule above that level is redundant because radiation at that wavenumber has become extinct.
After acceptance of the several mathematical ways global avg temps are computed, the residual problems are not just whether the data is properly manipulated or UHI accounted for and the like. As good as it all will be when we can all agree on proper treatment of the temperature data, the chief problems remain:
a) Is the instrumental record (left as it is or twisted into knots) a big enough sample of earth temp history, even of the last millennium or two let alone millions of years, to be of any help in deciding whether the present temperatures and trends are within natural variability? We are talking here about a degree or two over two centuries. Let me be of help here. The answer is no. Even the most highly engineered system couldn’t match the performance of a theoretical planet with all its chaotic and multi-sourced influences (weather patterns, orbital, solar, clouds, cosmic radiation, progress of the solar system in the loops of its host spiral arm, volcanoes, earthquakes, planetary perturbations, asteroid/meteor impacts, movement of tectonic plates…) that kept itself within temp changes of a degree or two, up or down, in time periods of centuries. It is clear that modern climate science began with non earth scientists (astronomers and physicists) who noted an upward trend in temperature during the instrumental record of the last century or two. They shot their mouths off and grabbed the attention of the media.
Skeptics belatedly came out of the woodwork (notably among them geologists, archeologists and historians) to point out that Swiss villages that had been flourishing for a millennium or so in the lower valleys were crushed by advancing glaciers during the 18th century; that in the 18th/19th centuries the Bosphorus froze over, New York harbour froze over and people walked to Staten Island, and London had “Frost Fairs”; that in the Medieval Warm Period Vikings settled and farmed Greenland and were subsequently frozen out in the LIA; that in the Roman Warm Period Hannibal crossed the Alps with his elephants (they had to eat a lot of grass where there is now snow); and that we found a Swiss chap in leather with a quiver of arrows who died over 4000 years ago in an accident while hunting in a mountain pass and only came to light when hundreds of metres of snow and ice, which had subsequently buried him, melted down by 1980 or so. The modern climate scientists, faced with this damning evidence, burned up over 50B dollars concocting proxies to erase the LIA and the MWP and the RWP, instead of taking steps to advance the discipline and put things in perspective.
b) Nothing one does about advancing methodologies of mincing temperatures into acceptable anomaly sausages will settle the issue of the magnitude of the effect of increased CO2. I can probably be of help here, too. If the industrial revolution of the past 150 years has resulted in only a degree C increase in temp (even forgetting about the contribution of natural rebound from the LIA), then we have nothing to worry about here. Natural variability will overwhelm it; many think a dip into cooler conditions may be in the offing.
c) Back to the business of adjusting temps a half a degree here or there. I have a proposal that makes good sense. If we have 60 thermometers (Nick Stokes) of long record, let’s keep them going, or replace them with something new alongside to have a duplicate record for when the old ones clap out. If GHGs are so powerful as to subdue natural variability, then we should eventually see this in multi-degree increases. It matters not what the average global temp might be or if we could ever generate this artifact. The 60 instruments are enough on their own to answer the question of how serious this will become. If the record starts to bend down again after all this new CO2, then let’s relax.
See my post on the First Difference Method over on CA, at
http://climateaudit.org/2010/08/19/the-first-difference-method/ .
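For readers unfamiliar with it, here is a minimal sketch of the First Difference Method with toy data (see the linked post for the actual discussion): each station contributes year-to-year temperature differences, the differences are averaged across stations, and the averaged series is cumulatively summed back into an anomaly series.

```python
# Minimal sketch of the First Difference Method (FDM), with toy data.
# Stations contribute year-to-year differences; we average the differences
# across stations and then cumulatively sum them into an anomaly series.

from itertools import accumulate

def first_difference(series_by_station):
    """series_by_station: {station: {year: annual mean temp}} (toy values)."""
    years = sorted(set(y for s in series_by_station.values() for y in s))
    mean_diffs = []
    for y0, y1 in zip(years, years[1:]):
        diffs = [s[y1] - s[y0] for s in series_by_station.values()
                 if y0 in s and y1 in s]
        mean_diffs.append(sum(diffs) / len(diffs) if diffs else 0.0)
    # Integrate the mean differences back into a series anchored at zero.
    return dict(zip(years, [0.0] + list(accumulate(mean_diffs))))

toy = {"A": {2000: 10.0, 2001: 10.2, 2002: 10.1},
       "B": {2000: 5.0, 2001: 5.4, 2002: 5.3}}
print(first_difference(toy))   # approx {2000: 0.0, 2001: 0.3, 2002: 0.2}
```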
Thanks for the article.
1) I am a non-scientist. I enjoyed the article since it seems to be written to make it easy for folks like me to follow along.
2) Is there a 101-type article slowly tracing some calculation from the database to the station anomaly, and then from the station anomalies to the global anomaly (with confidence interval)?
3) It would be nice to see the graphs with confidence intervals around the results.
4) It would be nice to see some graphs with, say, 1 deg × 1 deg boxes and confidence intervals.