Guest post by Clive Best
Perhaps like me you have wondered why “global warming” is always measured using temperature “anomalies” rather than by directly measuring absolute temperatures?
Why can’t we simply average the surface station data together to get one global temperature for the Earth each year? The main argument for working with anomalies (quoting from the CRU website) is: “Stations on land are at different elevations, and different countries estimate average monthly temperatures using different methods and formulae. To avoid biases that could result from these problems, monthly average temperatures are reduced to anomalies from the period with best coverage (1961-90)….” In other words, although measuring an average temperature is “biased”, measuring an average anomaly (deltaT) is not. Each monthly station anomaly is the difference between the measured monthly temperature and the so-called “normal” monthly value. In the case of Hadley CRU, the normal values are the 12 monthly averages from 1961 to 1990.
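To make this concrete, here is a minimal Python sketch (my own illustration, not the CRU code; the record format is an assumption) of how a station’s monthly series would be turned into anomalies against its 1961-1990 monthly normals:

```python
# Minimal sketch: convert a station's monthly temperatures into anomalies
# relative to its 1961-1990 monthly "normals" (illustrative, not the CRU code).

def monthly_normals(records, base_start=1961, base_end=1990):
    """records: list of (year, month, temp_C). Returns dict month -> normal."""
    sums, counts = [0.0] * 13, [0] * 13
    for year, month, temp in records:
        if base_start <= year <= base_end:
            sums[month] += temp
            counts[month] += 1
    return {m: sums[m] / counts[m] for m in range(1, 13) if counts[m] > 0}

def anomalies(records, normals):
    """Anomaly = measured monthly temperature minus the monthly normal."""
    return [(year, month, temp - normals[month])
            for year, month, temp in records if month in normals]
```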
The basic assumption is that global warming is a universal, location-independent phenomenon which can be measured by averaging all station anomalies wherever they might be distributed. Underlying all this, of course, is the belief that CO2 forcing, and hence warming, is everywhere the same. In principle this also implies that global warming could be measured by just one station alone. How reasonable is this assumption, and could the anomalies themselves depend on the way the monthly “normals” are derived?
Despite temperatures in Tibet being far lower than, say, the Canary Islands at similar latitudes, local average temperatures for each place on Earth must exist. The temperature anomalies are themselves calculated using an area-weighted yearly average over a 5×5 degree (lat,lon) grid. Exactly the same calculation can be made for the temperature measurements in the same 5×5 grid, which then reflects the average surface temperature over the Earth’s topography. In fact the assumption that it is possible to measure a globally averaged temperature “anomaly” (deltaT) also implies that there must be a globally averaged surface temperature relative to which this anomaly refers. The result calculated in this way for the CRUTEM3 data is shown below:
Fig 1: Globally averaged temperatures based on CRUTEM3 Station Data
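The area weighting itself is just a cosine-of-latitude weight per 5×5 cell, since cells shrink towards the poles. A minimal Python sketch of such an average (my own illustration, not the CRUTEM3 code; the grid layout is an assumption):

```python
import math

# Sketch: area-weighted average of a 5x5 degree grid of values (temperatures
# or anomalies). grid maps (lat_index, lon_index) -> value; empty cells are
# simply absent, so they receive no weight.

def grid_cell_centre_lat(lat_index, cell_deg=5.0):
    """lat_index 0..35 spans -90..+90; return the cell-centre latitude."""
    return -90.0 + cell_deg * (lat_index + 0.5)

def area_weighted_mean(grid, cell_deg=5.0):
    total, weight_sum = 0.0, 0.0
    for (lat_i, lon_i), value in grid.items():
        w = math.cos(math.radians(grid_cell_centre_lat(lat_i, cell_deg)))
        total += w * value
        weight_sum += w
    return total / weight_sum if weight_sum else float("nan")
```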
So why is this never shown?
The main reason, I believe, is that averaged temperatures highlight something different about the station data: they reflect an evolving bias in the geographic sampling of the stations used over the last 160 years. To look into this I have been working with all the station data available here, adapting the PERL programs kindly included. The two figures below show the locations of stations with data dating back before 1860 compared with all stations.
Fig 2: Location of all stations in the Hadley CRU set. Stations with long time series are marked with slightly larger red dots.
Fig 3: Stations with data back before 1860
Note how in Figure 1 there is a step rise in temperatures for both hemispheres around 1952. This coincides with a sudden expansion in the included land station data, as shown below. Only after this time does the data properly cover the warmer tropical regions, although gaps still remain in some areas. The average temperature rises because grid points in tropical areas that were previously empty are now filled. (No allowance is made in the averaging for empty grid points, either for average anomalies or for temperatures.) The conclusion is that systematic problems due to poor geographic coverage of stations affect average temperature measurements prior to around 1950.
Fig 4: Percentage of points on a 5×5 degree grid with at least one station. 30% is roughly the land fraction of the Earth’s surface.
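Coverage like that in Fig 4 can be estimated simply by counting occupied grid cells. A rough sketch, assuming a list of station (lat, lon) positions in degrees (illustrative, not the adapted PERL):

```python
# Sketch: percentage of 5x5 degree cells containing at least one station.

def cell_index(lat, lon, cell_deg=5.0):
    """Map a station (lat, lon) in degrees to a 5x5 degree cell index."""
    lat_i = min(int((lat + 90.0) // cell_deg), int(180 / cell_deg) - 1)
    lon_i = int(((lon + 180.0) % 360.0) // cell_deg)
    return lat_i, lon_i

def coverage_percent(stations, cell_deg=5.0):
    """Percentage of all grid cells containing at least one station."""
    n_cells = int(180 / cell_deg) * int(360 / cell_deg)   # 36 * 72 = 2592
    occupied = {cell_index(lat, lon, cell_deg) for lat, lon in stations}
    return 100.0 * len(occupied) / n_cells
```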
Can empty grid points similarly affect the anomalies? The argument against this, as discussed above, is that we measure just the changes in temperature, and these should be independent of any location bias, i.e. CO2 concentrations rise the same everywhere! However it is still possible that the monthly averaging itself introduces biases. To look into this I calculated a new set of monthly normals and then recalculated all the global anomalies. The new monthly normals are calculated by taking the monthly averages of all the stations within the same (lat,lon) grid point. These represent the local means of monthly temperatures over the full period, and each station then contributes to its near neighbours. The anomalies are area-weighted and averaged in the same way as before. The new results are shown below and compared to the standard CRUTEM3 result.
Fig 5: Comparison of standard CRUTEM3 anomalies (BLACK) and anomalies calculated using monthly normals averaged per grid point rather than per station (BLUE).
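For clarity, here is a minimal sketch of the alternative normalisation described above: monthly normals are pooled over all stations in a 5×5 grid cell, and each reading is then referenced to its cell’s normal rather than its own station’s (illustrative Python, not the adapted PERL; the record format is an assumption):

```python
from collections import defaultdict

# Sketch of the alternative normalisation: monthly normals per 5x5 grid cell,
# pooled over all stations in the cell, instead of per individual station.
# records: list of (cell, month, temp_C) where cell identifies a 5x5 grid box.

def gridcell_normals(records):
    sums = defaultdict(float)
    counts = defaultdict(int)
    for cell, month, temp in records:
        sums[(cell, month)] += temp
        counts[(cell, month)] += 1
    return {key: sums[key] / counts[key] for key in sums}

def gridcell_anomalies(records, normals):
    """Anomaly of each reading relative to its grid cell's monthly normal."""
    return [(cell, month, temp - normals[(cell, month)])
            for cell, month, temp in records if (cell, month) in normals]
```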
The anomalies are significantly warmer for early years (before about 1920), changing the apparent trend. The systematic errors due to the normalisation method for temperature anomalies are therefore of the order of 0.4 degrees in the 19th century. These errors arise from the poor geographic coverage in the early station data combined with the method used to normalise the monthly dependences. Using monthly normals averaged per (lat,lon) grid point instead of per station makes the resulting temperature anomalies warmer before 1920. Early stations are concentrated in Europe and North America, with poor coverage in Africa and the tropics. After about 1920 these systematic effects disappear. My conclusion is that anomaly measurements before 1920 are unreliable, while those after 1920 are reliable and independent of the normalisation method. This reduces the evidence of AGW since 1850 from a quoted 0.8 ± 0.1 degrees to about 0.4 ± 0.2 degrees.
Note: You can view all the station data through a single interface here or in 3 time slices starting here. Click on a station to see the data. Drag a rectangle to zoom in.
Without taking humidity into account, temperature is meaningless in terms of energy!
DaveE.
Exactly; they are not measuring total heat in Btu/# or kcal/kg, they are only measuring partial heat in the form of sensible heat.
Additionally, the lower the RH% of the air, the greater the swing in sensible heat (i.e. temperature) in response to a given heat input. A 5 degree rise in dry air is NOT the same energy increase as a 5 degree rise in humid air. Hence using the anomaly method introduces a bias dependent on RH%. So if the prevalent number of monitoring stations are at high altitudes, high latitudes and areas of consistently low rainfall, i.e. areas of low RH%, these will bias the anomaly UP. This is probably why the SH anomaly trend shows NO INCREASE in the past 30 years, being a water-dominated area.

And no, the decrease is NOT going to be proportional on the down side, as you are more apt to encounter the DEW POINT on the down side, skewing/minimising the temperature decrease due to the energy conversion while condensation is occurring, i.e. the latent heat of condensation. It’s called basic physical science. Anyone who knows the psychrometric chart and the mechanics of water state change (meteorologists and engineers) realizes how truly scientifically ignorant the AGW cultists are. It is stunning to me as an engineer that any scientist is fooled by the AGW argument. If I were said scientist I would demand my money back from the degree-issuing institution.
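For illustration, here is a rough sketch using standard textbook psychrometric approximations (my own example; the temperatures, pressure, and RH values are assumptions) comparing the enthalpy change of dry versus humid air for the same 5 degree rise at constant RH:

```python
import math

# Rough psychrometric sketch: enthalpy change of moist air for a 5 C rise,
# at low vs. high relative humidity (RH held constant). Standard textbook
# approximations, illustrative values only.

P = 101.325  # total pressure, kPa

def sat_vapour_pressure(t_c):
    """Magnus approximation to saturation vapour pressure, kPa."""
    return 0.6112 * math.exp(17.67 * t_c / (t_c + 243.5))

def humidity_ratio(t_c, rh):
    """kg of water vapour per kg of dry air at temperature t_c and RH rh."""
    e = rh * sat_vapour_pressure(t_c)
    return 0.622 * e / (P - e)

def enthalpy(t_c, rh):
    """Moist-air enthalpy, kJ per kg of dry air."""
    w = humidity_ratio(t_c, rh)
    return 1.006 * t_c + w * (2501.0 + 1.86 * t_c)

dry   = enthalpy(30.0, 0.10) - enthalpy(25.0, 0.10)  # roughly 7 kJ/kg
humid = enthalpy(30.0, 0.90) - enthalpy(25.0, 0.90)  # roughly 21 kJ/kg
print(dry, humid)
```

Under these illustrative assumptions the humid case carries roughly three times the energy change of the dry case for the same 5 degree rise.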
Clive, I like your T**4 plots.
Question – why is the temperature so much higher? Is that the actual data, or did you add an offset? I expected the data to cluster around 15°C, not 23°C.
I find it very interesting that the change in temperature using T**4 is about one fourth the official value. Do you have any idea why the discontinuity around 1951 disappeared? Very strange.
Robert,
Sorry – I did it too fast and I made a mistake in the coding! I actually calculated ((sum(T^4))^0.25 - 273)/N, and it should be (sum(T^4)/N)^0.25 - 273. The correct result can now be seen here.
The red curves are the T^4 averaging. The 1951 step up is still there. What is interesting is that the Southern Hemisphere results are almost identical; it is the NH which changes. From 1951 onwards there is no evidence of warming in the Southern Hemisphere.
Apologies for the mistake.
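To make the correction explicit, a minimal sketch of the two formulae (illustrative Python; temperatures assumed to be in kelvin):

```python
# Sketch of the two averaging formulae discussed above, with T in kelvin.
# The first (incorrect) version takes the fourth root of the sum before
# dividing by N; the corrected version averages T^4 first, then takes the
# fourth root (an "effective radiative temperature"), then converts to Celsius.

def wrong_average(temps_k):
    n = len(temps_k)
    return (sum(t ** 4 for t in temps_k) ** 0.25 - 273.0) / n

def t4_average(temps_k):
    n = len(temps_k)
    return (sum(t ** 4 for t in temps_k) / n) ** 0.25 - 273.0
```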
So the warmest that the Northern hemisphere has ever been, as right now, is 16 deg C, and the coldest that the Southern hemisphere has ever been, as in 1865, is also 16 degrees.
Now we know that the earth is furthest from the sun during the Southern winter, and closest during the Northern winter, so maybe that is why the Northern hemisphere has never been as cold as the Southern hemisphere.
Maybe something is wrong with this picture.
Well any global Temperature data based on a 5 x 5 degree grid cell is bound to give erroneous impressions. That sort of gridding makes the SF Bay area Temperature the same as the Mojave Desert.
A lot of this total silliness can be traced right back to Trenberth et al’s global energy budget. He has an average of 342 W/m^2 all over the planet arriving, and 390 W/m^2 emitted from the surface.
Well the fatal error is in that 342 W/m^2 arrival rate.
WATTS IS POWER, NOT ENERGY!!!
The current official value of TSI released recently by NASA is (roughly) 1362 W/m^2. It is NOT 342 W/m^2.
If you have 342 W/m^2 arrival power density, and 342 W/m^2 exit power density, then basically nothing happens; it’s roughly an equilibrium situation; and the earth IS NOT in thermal equilibrium.
The arrival power density on earth; averaged over the year for the radial orbit variation, is 1362 W/m^2, not 342, which is only about 1/4th as much.
That means that the point directly in line with the sun has a net Insolation over exit power density of maybe 3/4 of that 1362 value or about 1020 W/m^2.
Actually, it will be a bit less than that, because the sunlit portion of the earth will be substantially hotter than the average of 288 K from which Trenberth’s 390 W/m^2 surface power density emission derives, and some hot desert areas can actually emit as much as twice the power rate of the global average Temperature.
So the earth is absorbing far more solar energy than Trenberth gives credit for, because the incoming power density is four times his number, and a large portion of that high power density goes right into the deep oceans, which never reach the very high Radiant Emittance of the tropical deserts. As the earth rotates, each portion of the surface that comes under sunlight receives an incoming solar power density that is much higher than Trenberth’s numbers. That 1362 is of course reduced by atmospheric losses such as cloud isotropic scattering (of sunlight), blue sky scattering losses, and GHG atmospheric trapping losses, at least by H2O, O3, and CO2, all of which have absorption bands within the Solar spectrum, where at least 99.9 percent of the solar energy resides.
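To put rough numbers on that day-side variation, here is a small sketch (my own illustration, using the 1362 W/m^2 figure quoted above) of the top-of-atmosphere flux on a horizontal surface at a given solar zenith angle, before any atmospheric losses:

```python
import math

# Sketch: top-of-atmosphere insolation on a horizontal surface as a function
# of solar zenith angle, before atmospheric losses. TSI value as quoted above.

TSI = 1362.0  # W/m^2 at the mean Earth-Sun distance

def insolation(zenith_deg):
    """Instantaneous flux on a horizontal surface; zero on the night side."""
    return max(0.0, TSI * math.cos(math.radians(zenith_deg)))

for angle in (0, 30, 60, 85, 95):
    print(angle, round(insolation(angle)))   # 1362, 1180, 681, 119, 0
```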
Any cook knows that you do not get the same result when cooking if you supply four times the power density for one quarter of the time to your soufflé as what is stated in the recipe book.
And you don’t get the same result in meteorology or climatism either.
Need I repeat it? TSI is the POWER DENSITY of arriving solar energy; you cannot integrate an averaged POWER into a total ENERGY accumulation and expect that physical phenomena will respond exactly the same to those changed conditions.
Trenberth’s cartoon global energy budget is at the heart of the phony story that climatists spread, and Trenberth is one of those members of the Climate Science Rapid Response Team that has been dreamed up by the desperados to counteract the effects of people like Chris de Freitas, Roy Spencer, Fred Singer, Willie Soon, and Sallie Baliunas, all of whom are known to be in the deep pockets of BIG OIL. WELL, THAT’S WHAT THE PARTY LINE CLAIMS.
The Feb 2012 issue of Physics Today carries a piece of tripe by one Toni Feder about the “harassment” of climate scientists; no doubt (s)he is thinking of the arrest of James Hansen for his public antics.
No, the article doesn’t say a word about the hounding of Soon/Baliunas or Chris de Freitas, or the much-publicized threat to bash someone’s head in, or the equivalent, the next time he met one of the well-known skeptics.
So now we have the climate Physics police force to tell us what science to believe. Believable science is self-convincing; it’s the observed facts that you can believe, not the terracomputer simulations, and certainly not factually incorrect depictions like Trenberth et al’s phony “global energy budget”.
I should add to the above that I AM NOT knocking Dr Kevin Trenberth. I don’t know him; never met him; his name just happens to be on that silly chart of global energy budgets. I have no idea what his formal academic credentials are; but I suppose I could google that.
As a Kiwi, it is rather embarrassing for me to think that sloppy Physics is being purveyed by someone who presumably had the benefit of the same education system that was available to me; so my criticism is of the work, not the person, who I assume is a good Kiwi chap. Well, so are Vincent Gray and Chris de Freitas. Now I suppose Trenberth and Gavin Schmidt’s climate police will go after Professor Davies for noticing that THE CLOUDS ARE FALLING. Hey, it’s the clouds, NOT the sky.
R. Gates: “In short, claiming ‘natural variability’ as a cause is no explanation at all, as science is all about finding the reasons behind that variability.” Thanks. This seems to be a particularly hard point for some to get.