Mosher: “microsite bias matters more than UHI, especially in the first kilometer”

Urban Stations in GHCN V4.
The urban heat island effect further raises summer temperatures in cities. Credit: NASA
Guest post by Steven Mosher

Background

The recent post at WUWT covered a new analysis by Goddard & Tett (hereafter GT) showing how UHI has biased measurements in the UK. The paper concludes:

For an urban fraction of 1.0, the daily minimum 2‐m temperature was estimated to increase by 1.90 ± 0.88 K while the daily maximum temperature was not significantly affected by urbanisation. This result was then applied to the whole United Kingdom with a maximum T min urban heat island intensity (UHII) of about 1.7K in London and with many UK cities having T min UHIIs above one degree.

This paper finds through the method of observation minus reanalysis that urbanisation has significantly increased the daily minimum 2‐m temperature in the United Kingdom by up to 1.70 K.

The paper represents a trend in UHI studies toward using urban area or urban fraction to define areas as urban and to parameterize the effect, expressing UHI as a function of urban area. This is in contrast to early studies, for example Oke (1973), that tended to use population to parameterize UHI.

[Figure: Oke's relationship between city population and maximum UHI]

Since Oke there has been considerable progress in understanding the complex phenomenon of UHI, and the science has moved beyond the simple approach of treating population as a parameter that uniquely determines UHI. If everyone leaves a city, it will still have a UHI.

Recently, at WUWT, the following claim was made:

“The present situation is one of large, continuing lack of research attention. There is not even a detailed description of how large the UHI effect is, using a representative set of city examples, let alone its uncertainty.”

This is actually not the case. The following is only a tiny fraction of the types of studies that have been done:

There are global maps of UHI

Maps of individual states

Studies of over 400 large cities

Studies of the relationship between the shape and size of 5000 cities and UHI

A study of Hamburg

Urban cool and hot zones

And there are a growing number of papers (here, here, here, here) that detail urban cool parks, which may explain why UHI is so difficult to find in the global record. Sites located in cities are not necessarily warmer than those in rural settings.

One of the most important advances has come in the area of quantifying the definitions of urban and rural. Oke and Stewart have transformed the field with their concept of the LCZ or local climate zone. Anyone who took pictures of temperature stations for Anthony’s surface station program will enjoy watching the entire video below and especially the parts after 23 minutes where microsite bias is discussed.

And now, with the power of satellite imagery, researchers can quantifiably categorize various types of urban/rural areas. This can be done automatically or manually: http://www.wudapt.org/lcz/ Stewart was motivated to do this categorization in part because a large number of urban/rural studies never objectively defined the difference between urban and rural, and because they assumed that “urban” was a discrete category rather than a continuum.

GT Findings

GT found that the UHI effect in the UK was limited to biasing Tmin upwards, a result consistent with other findings. Wang (2017) looked at 750+ stations in China and also found a bias in Tmin of up to 1.7C at 100% urban cover, a figure that matches the result of GT.

Trends in urban fraction around meteorological stations were used to quantify the relationship between urban growth and local urban warming rate in temperature records in China. Urban warming rates were estimated by comparing observed temperature trends with those derived from ERA-Interim reanalysis data. With urban expansion surrounding observing stations, daily minimum temperatures were enhanced, and daily maximum temperatures were slightly reduced. On average, a change in urban fraction from 0% to 100% induces additional warming in daily minimum temperature of +1.7 ± 0.3°C; daily maximum temperature changes due to urbanization are −0.4 ± 0.2°C. Based on this, the regional area-weighted average trend of urban-related warming in daily minimum (mean) temperature in eastern China was estimated to be +0.042 ± 0.007 (+0.017 ± 0.003)°C decade⁻¹, representing about 9% (4%) of the overall warming trend and reducing the diurnal temperature range by 0.05°C decade⁻¹. No significant relationship was found between background temperature anomalies and the strength of urban warming.

To many readers the maximum bias figure of 1.7C in Tmin at 100% urbanity may seem low, especially when you consider the figure at the top from Oke, which shows a UHI of up to 8C. The difference lies in the methodology. Much of the early work on UHI focuses on the maximum UHI for any given day: researchers select conditions that show the largest values of UHI that can occur. Oke's chart, for example, represents the maximum value of UHI observed on a given day; he would select summer days with no clouds and no wind, and measure the maximum difference between a rural point of reference and a city point of reference. The studies that show high UHI values typically do not calculate the effect of UHI on monthly Tavg over the course of many years, as GT and Wang did. Since cloud-free, wind-free days do not occur 365 days a year for years on end, the overall bias of UHI is lower for monthly records, annual records, and climate records. In one study the number of ideal days in a year for seeing a difference between urban and rural was 7. A 40-year study of London nocturnal UHI found that the average UHI was ~1.8C, and only 10% of days experienced UHI over 4C. In short, average monthly UHI is less than the maximum daily UHI observed under optimum conditions for UHI formation.
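
To see why a long-term average comes out so much smaller than the daily maximum, a quick back-of-the-envelope calculation helps. The numbers below are assumptions chosen only to illustrate the arithmetic; they are not values from GT, Wang, or the London study:

```python
# Illustrative arithmetic only; all numbers below are assumptions, not measurements.
ideal_nights = 7          # assumed nights/year with ideal (clear, calm) conditions
uhi_ideal = 4.0           # assumed urban-rural difference (C) on those nights
uhi_other = 1.0           # assumed difference (C) on all other nights

other_nights = 365 - ideal_nights
annual_mean_uhi = (ideal_nights * uhi_ideal + other_nights * uhi_other) / 365
print(f"Annual mean nocturnal UHI: {annual_mean_uhi:.2f} C")   # ~1.06 C
```

A handful of big-UHI nights barely moves the annual mean, which is why monthly and climate-scale bias estimates come out well below the textbook UHI-max numbers.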

The current best estimate by the IPCC is that no more than 10% of the century trend for Tavg is due to UHI and LULC. If we take the century trend in land temperatures to be 1.7C per century, for example, then the 10% maximum bias would be 0.17C on Tavg. The IPCC does not make an independent estimate for Tmin or Tmax, only Tavg, because the major analysis products only use Tavg.

In summary, it is indisputable that UHI and LULC are real influences on raw temperature measurements. At question is the extent to which they remain in the global products (as residual biases in broader regionally representative change estimates). Based primarily on the range of urban minus rural adjusted data set comparisons and the degree of agreement of these products with a broad range of reanalysis products, it is unlikely that any uncorrected urban heat-island effects and LULC change effects have raised the estimated centennial globally averaged LSAT trends by more than 10% of the reported trend (high confidence, based on robust evidence and high agreement). This is an average value; in some regions with rapid development, UHI and LULC change impacts on regional trends may be substantially larger.

GT approach

Both GT and Wang look at the urban fraction over a 10 km buffer surrounding the station. This is probably at the radius limit of the LCZ. There is no “typical” range for LCZ analysis, but in general analysts consider zones 1 to 10 km in size. In LCZ analysis the fraction of impervious surface is one of the quantifiable features that determine the LCZ type. In general, urban fraction divides LCZs as follows:

A) Areas with less than 10% impervious surface are “unbuilt”

B) Areas with 10-20% impervious surface are sparsely built

C) Areas with 20%+ impervious surface are what we would typically call urban

There are some notable exceptions to this; in particular, some heavy-industry areas may have urban fractions of less than 10%. From field testing we know that different LCZs have different temperatures. See Table 2 here for a study of LCZs in Berlin over the course of a year.

Armed with this metric we can begin to classify temperature stations by the percentage of urban fraction in their local climate zone. In theory we don't have to make a bright-line distinction between rural and urban; rather, we have a metric for the relative urbanity of a site that goes from 0% impervious surface in the LCZ to 100%.
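
As a minimal sketch of the banding just described (the real LCZ scheme uses additional surface properties such as building height and sky view factor, so this is a simplification rather than the full classification):

```python
def built_band(urban_fraction):
    """Crude built/unbuilt banding from impervious (urban) fraction, 0.0-1.0,
    following the three bands listed above."""
    if urban_fraction < 0.10:
        return "unbuilt"
    if urban_fraction < 0.20:
        return "sparsely built"
    return "built (urban)"

print(built_band(0.07))   # a station with 7% urban cover in its zone -> "unbuilt"
```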

In Berkeley Earth's study of UHI we broke some ground by being the first study to use satellite data on urban surface to classify the urban and the non-urban. We used a MODIS dataset with a 500 m resolution. However, two things concerned me about that dataset: 1) the imagery was taken during northern hemisphere winter and could falsely classify snow-covered urban areas as rural; 2) the true resolution was more like 1 km, as a pixel wasn't defined as urban unless 2 adjacent 500 m pixels were urban. One square kilometer is not a small area. To accommodate this, and to allow for location errors, we looked at a 10 km radius around each site, and a site was classified as non-rural if it had even one urban pixel. Our results found no difference in trend between urban and non-urban. Still, the 1 km² resolution bothered me. We can now address that issue with higher-resolution data.

Available satellite imagery has expanded since the publication of that paper, and much more accurate data is now available. GT used 250 m data, for example, and “paywalled” data is available below 30-meter resolution. For my study of GHCN version 4 metadata I considered two different sources:

A) ESA 300 meter data

B) 30 meter data made available here http://www.globallandcover.com/GLC30Download/index.aspx.

Each dataset has pros and cons. The 30-meter data is quite voluminous and comes in tiles, complicating the process of determining urban fraction. The 300-meter data is easier to work with, but it doesn't work very well if you want to know what the surface is like within 100 meters of the station; it cannot work well for microsite analysis. Also, neither dataset is perfect. Every land classification system has errors: natural pixels (typically bare earth) that are classified as urban, and urban pixels that are misclassified as natural. It's helpful, therefore, to compare the 30-meter data with the 300-meter data and to cross-check both with other signs of urbanity, such as population and night lights.

GHCN v4 will be the next land dataset published by NOAA for use in global average temperature studies. It is currently in beta and going through a validation and verification process. NASA GISS will adopt adjusted GHCN v4 as its primary data source for global land temperatures and then apply its UHI correction, which in practice does not reduce the trends in any substantial way. The number of stations in GHCN v4 has increased over v3 to more than 27,000 total stations. The dataset will come in two variants: uncorrected by NOAA, and debiased by NOAA's PHA algorithm.

To create enhanced metadata for this new set of stations, the procedure is fairly straightforward. You take the latitude and longitude of the station and then locate it in the appropriate GIS dataset. For the 30-meter data, which exists in UTM tiles, you have to re-project and stitch 2 tiles together to handle cases where a station is located near a tile border, or 4 tiles when a station is located near a tile corner.

For every station we can create “buffers”, or collections of all the land classes within various radii. For this post I'll report on the 10 km radius, to be consistent with GT and Wang, who also look at 10 km buffers.
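
For readers who want a concrete picture of the buffer step, here is a hypothetical sketch. It assumes the land-cover raster has already been re-projected to an equal-area grid and reduced to a built/not-built mask; the actual workflow described above additionally handles UTM tile stitching and the native class codes of the ESA and GLC30 products:

```python
import numpy as np

def urban_fraction(built_mask, station_row, station_col, pixel_m, radius_m=10_000):
    """Fraction of built pixels within radius_m of a station.

    built_mask : 2-D boolean array (True = built/impervious), on an
                 equal-area grid with square pixels of size pixel_m.
    """
    rows, cols = np.indices(built_mask.shape)
    dist_m = np.hypot(rows - station_row, cols - station_col) * pixel_m
    return built_mask[dist_m <= radius_m].mean()

# Toy example: 30 m pixels, one ~1.2 km x 1.2 km built patch near the station
grid = np.zeros((1000, 1000), dtype=bool)
grid[480:520, 480:520] = True
print(urban_fraction(grid, 500, 500, pixel_m=30))   # a small fraction, well under 10%
```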

One important note: the purpose of this is not to assess the specific site micro-characteristics, i.e. surface properties within 0-500 meters that are within the viewshed of the sensor. Rather, I will look at the LCZ, the local climate zone out to 10 km, and answer the question: just how urban are the temperature stations used by climate scientists who study the global average? Do we actually draw our samples from heavily urban areas as defined by Oke's and Stewart's LCZ classification system?

The map from GT is instructive here

[Figure: GT map of urban fraction, with heavily urban areas in red and low urban fraction in blue]

Are the stations that will be used by NASA GISS in red zones or in blue zones? What fraction are in red? And what fraction are in blue areas? And what shade of blue?

Some other things to note. The land classification data is from 2015 for the 300-meter data and from 2010 for the 30-meter data. Underlying this analysis is the assumption that site areas are not “unbuilt” over time: I assume a station that shows 0% built area in 2010 did not have any built area before that time. One other subtlety that people miss is that stations that register as heavily built in 2015 may have been rural during their recording time. For example, you can have a station that reports temperatures from 1850 to 1885 and then stops reporting. The urban fraction data refers to the urban cover of that site in 2015 or 2010. If you simply classify this site as urban, that may not be accurate, since you are interested in the temperature data that was collected in the 1850 to 1885 period. If the station was rural during that period and you classify it as urban because of its urban cover today, then you can confound urban/rural studies.
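
A hypothetical sketch of how that temporal-mismatch check might look (the 25-year margin below is an arbitrary illustrative choice, not a threshold used in this analysis):

```python
IMAGERY_YEAR = 2010   # vintage of the 30 m land-cover data used here

def cover_may_postdate_record(last_year_reported, imagery_year=IMAGERY_YEAR,
                              margin_years=25):
    """Flag stations whose record ended well before the imagery was taken,
    so today's urban cover may not describe the site as it was when reporting."""
    return last_year_reported < imagery_year - margin_years

print(cover_may_postdate_record(1885))   # True: an 1850-1885 station
print(cover_may_postdate_record(2009))   # False: record overlaps the imagery era
```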

Using the same criteria as GT and Wang (2017), we can see that the vast majority of stations are located in LCZs that have less than 10% urban cover (blue line below).

[Figures: distribution of urban cover within 10 km of GHCN v4 stations, using the 300-meter and 30-meter land classification data]

Using the 30-meter data results in slightly fewer stations in the 0-10% band. This is to be expected, as the 300-meter data is too coarse to detect roads or airport runways, while the 30-meter data can in most cases. Using the regression approach of GT and Wang, we can also make a first-order estimate of the size of the Tmin bias in a global record constructed from stations with this magnitude of urban cover: ~0.13C. This would translate into a ~0.06C bias in Tavg, within the estimate made by the IPCC. Note this is a simplistic estimate that does not take the spatial distribution of the stations into account; it could be higher or lower, but not substantially.
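
The first-order arithmetic behind that estimate, sketched with an assumed network-mean urban fraction (the 7.5% figure below is chosen only to reproduce the ~0.13C number quoted above; it is not the actual GHCN v4 value):

```python
slope_tmin = 1.7            # C of Tmin bias at 100% urban cover (GT / Wang 2017)
mean_urban_fraction = 0.075 # assumed illustrative network-mean urban fraction

tmin_bias = slope_tmin * mean_urban_fraction
tavg_bias = tmin_bias / 2   # Tavg = (Tmin + Tmax) / 2, with Tmax assumed unaffected
print(f"Tmin bias ~{tmin_bias:.2f} C, Tavg bias ~{tavg_bias:.2f} C")
# -> Tmin bias ~0.13 C, Tavg bias ~0.06 C
```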

One thing to note is that we can check how robust the procedure of looking at 10 km buffers around the site is by applying the same procedure to CRN stations, which have been selected to minimize their urban exposure: over 95% of CRN stations have less than 10% urban cover within a 10 km radius of the site.

The big-picture takeaway is this. UHI studies like GT and Wang focus on UHI over long periods of time: years instead of days. When you just focus on maximum UHI during selected days at selected cities, you will get high values. However, when you look at dozens to hundreds to thousands of stations over months and years, the bias figures for UHI drop substantially. It's these figures that matter for UHI bias in the global land record. Further, when you look at all the stations in the inventories rather than the worst cases, you see that the vast majority of stations are located in areas of low urban cover: 0-10%.

This brings me to my last two points. While the fraction of urban cover within a 10 km radius does give you comparability with GT, it misses two things. These two things could be more important, and I think they deserve more attention. Those issues are: UHI in small towns, and microsite bias. The potential UHI issue in the global record is not a large-city issue. The charts above should tell you that: areas of large, dense urban cover do not dominate the inventories of stations. They just don't. The more plausible source of UHI in the global record would be small areas of urban cover. It's unfortunate that most people focus on the photos of large cities and the papers about large cities, when the problem may actually be smaller cities, at least as far as the global record is concerned. My suggestion is to aim at the right target with your analysis and critiques.

The second issue is microsite. Wang (2017) wrote:

Changes associated with urbanization may impose influences on surface-level temperature observation stations both at the mesoscale (0.1–10 km) and the microscale (0.001–0.1 km). For a specific observing station, small local environmental changes may overwhelm any background urban warming signal at the mesoscale. Due to the lack of a high-quality data set of urban fraction at the microscale, we can hardly quantify the microscale urban influence on the observed temperatures.

In other words, the metadata that matters most is the metadata of the first kilometer. A good site in an urban setting can be better than a bad site in a rural setting. My bet is this: if you expect to find bias in the record, you should be looking at that first kilometer. Microsite is more important than UHI.

384 Comments
KTM
May 4, 2019 12:56 am

What is the scientific basis for CO2 having minimal impact on daytime max temperatures?

In the daytime, there is more infrared radiation for CO2 to interact with. At night there is less infrared radiation for CO2 to interact with. It seems that CO2 should have a more pronounced effect on daytime temperatures.

Since there is clearly asymmetric warming occurring in the nighttime temps, and since we know that nighttime temps are heavily impacted by non-CO2 effects like UHI, that suggests the Warmists are fixated on noise instead of a true signal.

The same could be said for lower and higher latitudes. There was supposed to be an equatorial hotspot, which again makes sense if CO2 is influencing the already high amounts of infrared radiation near the equator. Instead, the warming is happening in the arctic winter, when there isn’t much sunlight and isn’t much infrared radiation for CO2 to interact with. Maybe there’s a simple physical explanation, but to me these sorts of fundamental conflicts between theory and measurements should be lighting up every scientist’s BS detector.

F1nn
Reply to  KTM
May 4, 2019 2:12 am

KTM

“Maybe there’s a simple physical explanation, but to me these sorts of fundamental conflicts between the largest BS scam ever and measurements should be lighting up every scientist’s BS detector.”

Sorry. I fixed your last sentence to reflect reality. And you are right, it should. But money talks, always.

Steven Mosher
Reply to  KTM
May 4, 2019 2:58 am

‘What is the scientific basis for CO2 having minimal impact on daytime max temperatures?”

You need to go back to square 1. Your question is ill-posed.

KTM
Reply to  Steven Mosher
May 4, 2019 11:58 am

Are you disagreeing with my statement of the data, with asymmetrical warming at night instead of day?

There are plenty of things that could cause asymmetric warming at night, but CO2 isn't one of them. If the theory doesn't match the data, the theory needs to change.

Like I said, I'm asking for an explanation; I want to understand. If I've missed something simple, please educate me. But if I haven't missed anything and there is no rational basis for CO2 to cause this asymmetry, the clear interpretation is that the CO2 theory is fundamentally wrong.

Frank
Reply to  KTM
May 4, 2019 1:19 pm

KTM asks: “What is the scientific basis for CO2 having minimal impact on daytime max temperatures?”

Rising CO2 isn’t expected to have a “minimal impact for daytime max temperature”. For the planet as a whole over a long time, rising GHGs slow down the rate of radiative cooling to space (the GHE) creating a radiative imbalance that eventually is negated by warming. The situation is far more complicated for short periods of time at a particular location 2 meters above the surface.

Observations show that the amount the temperature falls on a clear night depends on how much water vapor (a GHG) is in the air. The emission of thermal infrared by GHGs in the atmosphere depends on the local temperature and the number of GHGs present. The fraction of thermal infrared photons emitted downward from any altitude (say 1 km above the surface) that reach the surface depends on how many absorbing GHGs lie between that altitude and the surface. More GHGs means BOTH more emission of thermal infrared photons and more absorption of thermal infrared photons, meaning they travel a shorter distance. These two large effects almost cancel.* However, when the average photon arriving at the surface comes from a shorter distance, it was emitted from an altitude where it was likely warmer. So more DLR is delivered from the atmosphere to the surface when the air is more humid.

*See https://en.wikipedia.org/wiki/Schwarzschild%27s_equation_for_radiative_transfer

A decrease in humidity, therefore, decreases the amount of DLR arriving at the surface. During winter and in deserts, the diurnal range (difference between Tmax and Tmin) is larger than in summer and in more humid locations. Since CO2 is a GHG, you might expect it to influence the diurnal range too. However, the changes in humidity between deserts and more humid locations can be a factor of 2, and the difference in saturation vapor pressure (7%/K) between winter and summer would be about a factor of 2 if summer were 10 degC warmer. With rising CO2, we are also on our way to a 2-fold change. However, near the surface, there is much more water vapor (about 10,000 ppm) than CO2 (400 ppm). The radiative forcing from 7% more water vapor (1.5 W/m2) is almost half of the forcing from 100% more CO2. So the combined effects of rising CO2 (and the rising absolute humidity that accompanies it, water vapor feedback) are predicted to have a much smaller impact on diurnal range than the change from desert to a humid climate and from winter to summer. Rising anthropogenic GHGs are predicted to very modestly decrease the diurnal range and seasonal change, meaning there will be more warming at night and in the winter.
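
For reference, the standard simplified expression for CO2 forcing (Myhre et al., 1998) is consistent with those numbers:

```latex
\Delta F_{\mathrm{CO_2}} \approx 5.35\,\ln\!\left(\frac{C}{C_0}\right)\ \mathrm{W\,m^{-2}},
\qquad
\Delta F_{2\times\mathrm{CO_2}} \approx 5.35\,\ln 2 \approx 3.7\ \mathrm{W\,m^{-2}}
```

so the 1.5 W/m2 from 7% more water vapor is indeed a bit under half of the ~3.7 W/m2 from a doubling of CO2.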

Local heat capacity has a dramatic influence on diurnal range. In the ocean, diurnal range is typically only about 1 degC. In cities, vertical structures add heat capacity (and surface area to absorb SWR) and lower the diurnal range, the main mechanism of UHI. In most of the troposphere, the diurnal range is low because there is little absorption of SWR by clear air during the day to raise the temperature. BEST (Berkeley Earth) discusses the changes in diurnal range that their methodology has found, which doesn't have a simple explanation. The average diurnal range at land stations is about 11 degC, and the changes in diurnal range amount to a few tenths of a degC, while global warming (in T_average) is more than 1 degC.

http://static.berkeleyearth.org/papers/Results-Paper-Berkeley-Earth.pdf

Above, I mostly discussed the effect of GHGs on DLR, but one really needs to consider energy transfer from all sources: SWR, LWR (OLR and DLR), latent heat and simple heat. The surface absorbs most SWR and can get really hot in the absence of any wind (blacktop or a beach in sunshine). Just getting that heat out of the surface and up to a thermometer in a station 2 meters above the ground has its complications and there can be large differences without wind. And the ground has a higher emissivity than air, so it cools faster at night than the atmosphere immediately above, creating a thermal inversion in the early morning hours in many locations. No one should claim rising GHGs aren’t the reason for rising temperature simply because the predicted effect on diurnal temperature range hasn’t been observed. Since climate models often make calculations every 15 minutes of run time, some have trouble reproducing the observed diurnal cycle.

Alarmists like to promote simple explanations for complicated climate phenomena such as diurnal range, so the public thinks climate science is “settled science”. Diurnal range is discussed in Section 2.4.1.2 of AR5 WG1 and summarized by:

Confidence is medium in reported decreases in observed global diurnal temperature range (DTR), noted as a key uncertainty in the AR4. Several recent analyses of the raw data on which many previous analyses were based point to the potential for biases that differently affect maximum and minimum average temperatures. However, apparent changes in DTR are much smaller than reported changes in average temperatures and therefore it is virtually certain that maximum and minimum temperatures have increased since 1950. {2.4.1.2}

The ability of climate models to properly reproduce the observed diurnal cycle is discussed in Section 9.5.2.1 and shown in Figure 9.30. The Executive Summary doesn't even mention the problems with modeling the diurnal cycle, even though this is a major issue for many models.

https://www.ipcc.ch/site/assets/uploads/2018/02/WG1AR5_Chapter09_FINAL.pdf

KTM
Reply to  Frank
May 5, 2019 12:24 am

You spent a lot of time discussing the impact of greenhouse gases on nighttime temperatures. Are there more thermal infrared photons in the atmosphere during the day or during the night?

If there are more infrared photons in the atmosphere during the day, CO2 should have a more pronounced effect during the day than during the night. This should be especially true in summer and in hotter latitudes. The signal should be most apparent in hot and arid regions, although as you pointed out part of the amplification attributed to CO2 is due to hotter air being able to hold more humidity.

So I think there are two massive problems with the CO2 conjecture if we don't see the predominant signal in arid daytime max temperatures, and an even bigger problem for the amplification conjecture if we don't see it even more so in the humid areas.

Where am I going wrong?

Frank
Reply to  KTM
May 5, 2019 3:48 pm

KTM: Please consider reading the Wikipedia article on the Schwarzschild equation for radiative transfer. (I wrote it, because there was no good resource that addressed common misconceptions that I struggled with for years. )

https://en.wikipedia.org/wiki/Schwarzschild%27s_equation_for_radiative_transfer

Emission of thermal infrared depends on temperature and absorption (to a first approximation). Therefore, there will always be some temperature at which absorption and emission are in equilibrium with the local radiation intensity. That equilibrium is given by Planck's law, aka Planck's function B(lambda,T), which varies with wavelength and temperature. In other words, blackbody radiation is radiation in thermodynamic equilibrium with its environment of emitting/absorbing molecules. When integrated over all wavelengths, blackbody emission is W = εσT^4. GHGs don't behave like blackbodies; their emission is proportional to n·σ·B(lambda,T), where n is the density of GHG molecules, σ is their absorption cross-section (the sharp lines in the spectrum of a GHG mean σ changes rapidly with wavelength) and B(lambda,T) is Planck's function for emission of blackbody radiation.

The intensity of radiation traveling through any medium is being CHANGED by absorption and emission so as to approach equilibrium: blackbody intensity B(lambda,T). The RATE of change with distance traveled is proportional to the density of the GHGs and their absorption coefficient. At some wavelengths (in the “atmospheric window”), photons emitted by the surface escape directly to space unchanged because the absorption coefficients are effectively zero. For the strongest lines of CO2, 90% of the photons emitted by the surface are absorbed within the first meter, AND REPLACED BY ESSENTIALLY THE SAME NUMBER OF PHOTONS EMITTED BY CO2 IN THE ATMOSPHERE. By the time that radiation has reached 1 km above the surface, the temperature is roughly 6.5 K lower and the intensity is about 5% lower because B(lambda,T) is lower. Since temperature usually drops with altitude where most absorption and emission occurs, upward traveling radiation emitted where it is warmer and B(lambda,T) is larger experiences more absorption than emission.
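
For reference, the Schwarzschild equation discussed above can be written as:

```latex
\frac{dI_\lambda}{ds} = n\,\sigma_\lambda\,\bigl[\,B_\lambda(T) - I_\lambda\,\bigr]
```

where n is the number density of GHG molecules, σ_λ their absorption cross-section, and B_λ(T) the Planck function: the intensity relaxes toward the local blackbody value at a rate set by n·σ_λ, which is the behaviour described above.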

KTM asks: “Are there more thermal infrared photons in the atmosphere during the day or during the night?”

Absolutely.

KTM concludes: “If there are more infrared photons in the atmosphere during the day, CO2 should have a more pronounced EFFECT during the day than during the night. This should be especially true in summer and in hotter latitudes. The SIGNAL should be most apparent in hot and arid regions, although as you pointed out part of the amplification attributed to CO2 is due to hotter air being able to hold more humidity.” [My capitalization]

You have not specified what effect and signal you are talking about. Temperature change is the result of all processes moving energy to and from a particular location: incoming and outgoing radiation, both SWR and LWR, latent heat, and sensible heat (conduction). The signal I was discussing above was the diurnal temperature range/change (DTR). Humidity has a large, well-understood effect on the DTR, because it changes the altitude from which the average DLR photon reaching the surface is emitted. Clouds have a bigger effect on DLR.

KTM asks: “Where am I going wrong?”

You may be relying on intuition. There are a lot of non-intuitive aspects to radiation transfer in our atmosphere! Calculations using the Schwarzschild equation for radiative transfer can be done online using the MODTRAN program at:

http://climatemodels.uchicago.edu/modtran/

You can gradually improve your intuition by exploring simple situations. MODTRAN uses absorption coefficients measured in the lab and the temperature vs altitude profile shown on the right (which you can select using locality, which is tropical by default). You choose the density of each GHG, the locality and direction to look. The predictions of this program have been validated by many experiments run in the real atmosphere.

I suggest you start with 0 ppm for the density of all GHGs and “looking up” from the surface. The intensity of DLR is 0 when there are no GHGs in the atmosphere to emit thermal infrared photons. Now add 10 ppm of CO2. You should see a lot of energy arriving at 15 um/666 cm-1. At the peak, the amount of energy arriving at the surface from CO2 is about right for a blackbody at 300 K (tropical atmosphere): because of the strong absorption of CO2, radiation traveling downward reaches equilibrium with the local surface temperature, producing radiation of blackbody intensity. Now double the CO2 concentration until you reach 300 ppm. The peak emission remains at blackbody intensity, but the CO2 band gets fatter; the weaker lines are contributing more. With 300 ppm, DLR from CO2 alone is 77.4 W/m2; with 600 ppm, 85.8 W/m2.

Now make CO2 0 ppm and water vapor scale 0.01. Water vapor varies with altitude and locality. Now gradually grow water vapor to 1.0 (the normal amount of water vapor at that location at all altitudes). Water vapor in the tropical atmosphere is delivering 358.9 W/m2 of DLR to the surface, substantially more than CO2. Now add 300 ppm CO2 to normal water vapor. DLR grows to only 366.1 W/m2, a measly 7.2 W/m2, because water vapor is already emitting the maximum intensity possible at many wavelengths, blackbody intensity. Now double CO2 to 600 ppm; 368.0 W/m2. Now up the water vapor scale to an impossible 10. At all wavelengths, the atmosphere is shining down with blackbody intensity appropriate for the surface temperature – the most energy that can be emitted by any object according to Planck’s Law.

Now change to “looking up” to “looking down from 70 km” to see how much energy the planet is emitting to space. Looking down, you see the photons emitted by the surface (which emits like a blackbody given the surface temperature) at transparent wavelengths and the photons reaching space emitted by GHGs in the atmosphere. Looking up is simpler to understand.

You can also change the temperature by a few degrees, while keeping either absolute humidity or relative humidity constant. You can look down from 1 km to see how the blackbody radiation emitted by the surface is changed by absorption and emission by GHGs as it travels the first 1 km upward through the atmosphere. (Answer: little change, because the temperature hasn't changed much.)

Starting with the simplest situations and working towards more complicated ones, you may gradually understand the non-intuitive combined effects of absorption and emission by many GHGs in an atmosphere whose temperature and density varies with altitude. (There is no GHE if temperature doesn’t vary with altitude.)

The strangest thing you will find is that net radiative cooling (OLR-DLR) doesn’t change much as the surface warms a few degC if relative humidity remains constant.

Clyde Spencer
Reply to  KTM
May 6, 2019 9:36 am

KTM
Something that you are overlooking is that the interactions between IR and CO2 are not broadband. The wavelength of the IR emissions is dependent on the surface temperature and the interaction is at specific absorption wavelengths. Those coincide almost perfectly at night. However, during the day, the peak emission shifts away from the maximum absorption.

Also, even if the IR absorptions were equal in day and night, the daytime contribution is negligible compared to the direct heating from the sun.

Rod Evans
May 4, 2019 1:54 am

The question most people are debating, is not how accurate the thermometer stations are, or where the thermometers are sited, important though these details are.
The question is whether the world is on a continuous warming trend outside the normal variation of climate change as evidenced by history.
If the answer is yes, the next question is what is causing that inexorable rise. Is it mankind’s activities or is it climate drivers beyond mankind’s influence?
Going back to the very detailed work in this article by Steve Mosher: it all looks very worthy and sound to me. The thing is, if the climate debate hinges on such detail, do we actually have a real-world problem, or just a complexity of data requiring manipulation and adjustment to make it “credible” by those who gather the data?
I fully accept the old, “garbage in garbage out” maxim. Clearly data must be verified and validated as accurate before entering it into any system.
What concerns me is who controls the validation? The records show the computer models are all running hot.
Why is that?
Is it because the programs are badly constructed? If so, why has nothing been done for so long to refine the computer programs? If the CO2 forcing ratio is driving incorrect outputs, why not change it to harmonise more accurately with real observed data?
I must declare an interest at this point. As a hobby beekeeper I quite like the prospect of a degree or two warmer, especially here in the UK.

Steven Mosher
Reply to  Rod Evans
May 4, 2019 2:56 am

“The question most people are debating, is not how accurate the thermometer stations are, or where the thermometers are sited, important though these details are.
The question is whether the world is on a continuous warming trend outside the normal variation of climate change as evidenced by history.”

Nope, Not the question

the question is:

How do you explain the observed warming?
Not, “is the observed warming unprecedented”

The reason this is so is because “Normal” only has a “conventional” meaning.

Bengt Abelsson
Reply to  Steven Mosher
May 4, 2019 5:10 am

I beg to differ:
As Giaever said, the temperature over the last 150 years has varied between 288 and 288.8 K.
What is keeping it so stable?

Rod Evans
Reply to  Steven Mosher
May 4, 2019 8:48 am

Well Steve, I guess therein lies the big difference then, between those who advance the Man Made Global Warming story, or alarmists as they have come to be known, versus the climate realists who want to know if what is happening is anything unusual, i.e. should we be concerned?
As the generally accepted change in global temperature is around the 1 deg C mark over the past 170 years, and as there does not appear to be any data suggesting human inputs correlate with that temperature increase, we are left to ask: is the temperature change unusual over historic time scales?
I would also suggest +/- 3 standard deviations is regarded as being within “normal” variation of a system.
Has the Earth’s temp moved outside that range in the past 170 years?
I don't think so. Has it quietly increased over that time scale? Yes it has, as far as our best efforts to measure it can tell us.

Editor
May 4, 2019 1:55 am

Great stuff Mosh! I have been thinking this way for a long time, but lacked the skill and tenacity to do the right kind of analysis.

May 4, 2019 2:11 am

Steven Mosher May 4, 2019 at 1:30 am
…the effects of UHI are only seen below winds speeds of 7m/sec
—————————————-
Would there be a rising convection current produced by the thermal mass (at night) in a city? The rising warm current would have to suck cooler air in from the cooler non urban surroundings like a “sea breeze”. This effect would be quite localised.

Urban temps are a few degrees hotter than non-urban during the day also, so this breeze would perhaps be present day and night?

Steven Mosher
Reply to  ghalfrunt
May 4, 2019 3:12 am

“Would there be a rising convection current produced by the thermal mass (at night) in a city? The rising warm current would have to suck cooler air in from the cooler non urban surroundings like a “sea breeze”. This effect would be quite localised.”

I’ve read a few papers that suggest this.

particularly for coastal cities

The issue is pretty complex with lots of exceptions that lead to

“what about my city”

The main point here is the vast majority of sites are in areas with less than 10% urban cover

Next up, I will walk through the process of restricting it MORE…

May 4, 2019 2:36 am

Two main conclusions from UHI studies:

1. 10% of the warming measured is likely anthropogenic but not related to emissions and due to urbanization. 10% is very significant (<2% insignificant is a usual criterion). It means only 90% of the warming remains to be explained by natural and other anthropogenic factors. And nothing practical can be done about that 10%.

2. Although that 10% is pooled with the rest of the warming and distributed over the entire planet, it is actually taking place at small, very hot spots. If the planet is close to ~ +1 °C, those hotspots have already been at +1, +1.5, +2, +2.5, and probably +3 °C. They are not only very liveable, but most people rather live there.

Where is the climate crisis?

Steven Mosher
Reply to  Javier
May 4, 2019 3:01 am

“1. 10% of the warming measured is likely anthropogenic but not related to emissions and due to urbanization. 10% is very significant (<2% insignificant is a usual criterion). It means only 90% of the warming remains to be explained by natural and other anthropogenic factors. And nothing practical can be done about that 10%."

err NO.

The UPPER BOUND on the bias is 10% of the century trend in LAND

if land has warmed by 1.2C, then the upper bound on bias is .12C

land is 30% of the globe.

math is left to the student for the total bias in the global record.

“…°C those hotspots have already been at +1, +1.5, +2, +2.5, and probably +3 °C. They are not only very liveable, but most people rather live there.

Where is the climate crisis?"

In the future.
and its a challenge not a crisis

F1nn
Reply to  Steven Mosher
May 4, 2019 5:34 am

“In the future.”

Oh really? Based on what? Models? Sheesh…

We are now living in the coldest warm period of the Holocene. Where is the challenge? Is the challenge in a too-warm past, which we must get rid of? You (climate “scientists”) have already tortured the climate history of the past century. Are you going to torture the whole Holocene?

Good luck with that.

Reply to  Steven Mosher
May 4, 2019 10:47 am

The UPPER BOUND on the bias is 10% of the century trend in LAND

You are not talking to an ignoramus here. We all know that global warming is not global
– It happens mainly in the northern hemisphere
– It happens mainly on land
– It happens mainly in winter
– It happens mainly on T(min) at nights

Whoaa, exactly like UHI effect. What a coincidence.

and its a challenge not a crisis

So you say without evidence. The fact is that global warming is a 300-year phenomenon and there is no evidence of the challenge you talk about. Quite the contrary, the challenge coming from the LIA has been alleviated by global warming.

Anthony Banton
Reply to  Javier
May 4, 2019 1:30 pm

“You are not talking to ignoramus here. We all know that global warming is not global
– It happens mainly in the northern hemisphere
– It happens mainly on land
– It happens mainly on winter
– It happens mainly on T(min) at nights”

The Arctic is not land … and it’s where most warming is taking place…..

Antarctica is a unique pole of cold because it is surrounded by ocean and cut-off from warmer mid-latitude air by the ACC and a strong polar vortex.
Not to mention it has an average height of 8000ft.
(might as well throw in the O3 hole – as that is an absence of a GHG in the Strat)
So, no wonder it is happening “mainly in the NH”.
Yes, winter sees most surface inversions form, where the GHE is maximised.
So, of course that is where most GW is taking place. ON LAND. IN WINTER. IN THE NH.
Stating the places where it (noticeably) happens most does not negate AGW.
~93% of the imbalance twixt incoming SW and outgoing LW is being sunk into the oceans.
Are they not global? Do they not count (because it's hidden by virtue of the oceans' mass and specific heat, so the temperature change is divided by ~1000x)?
It does not mean it’s not global because it happens most noticeably in the places on Earth where it has most effect.
The “D” word is powerful in you.

Reply to  Anthony Banton
May 4, 2019 4:17 pm

The Arctic is not land … and it’s where most warming is taking place

It is not sea surface either during half of the year, which is curiously the half of the year when warming is taking place.
http://ocean.dmi.dk/arctic/plus80n/anoplus80N_summer_winter_engelsk.png

So I guess that Arctic warming is not what you think it is. When there isn’t sunshine there isn’t radiative warming, and there isn’t albedo effect. All that is left is heat transported from lower latitudes and no amount of CO2 in the atmosphere can prevent that heat from escaping to space.

Latitude
Reply to  Javier
May 4, 2019 1:31 pm

“Whoaa, exactly like UHI effect. What a coincidence.”…..isn’t it

Anthony Banton
Reply to  Latitude
May 4, 2019 2:16 pm

If you refer to my comment.
There’s no UHI in the Arctic …. or do you want to argue there is?

F1nn
Reply to  Javier
May 4, 2019 3:11 am

That is a very good question. 1°C does not a crisis (or even catastrophe) make.

MrZ
May 4, 2019 3:37 am

Steven,
Have you or somebody compared RAW values between GHCNM v1-v4?
I did this by calculating on unique lon/lat locations as read from the inventory files (locations with two or more stations were skipped). I randomly selected January 1980, where all sets have a good base of data.

GHCNMv1 had 3008 unique locations
GHCNMv2 had 2693 unique locations (here I excluded multi-set locations)
GHCNMv1 and v2 had 1100 common locations with OK flagged data.

– 13% (143) had more than 0.5C change in RAW data

GHCNMv3 had 5428 unique locations
GHCNMv4 had 13253 unique locations
GHCNMv3 AND v4 had 4343 common locations with OK flagged data

– 9% (390) had more than 0.5C change in RAW data. We also have 157 re-flagged (-9999) locations.

All versions had 842 unique locations in common

– 36% (306) had more than 0.5C change in RAW data

I also compared v3 QCA and v4 QCF. There are 3919 common unique locations with OK flagged data

– 26% (1000) had the adjustment re-adjusted more than 0.5C. We also have 418 re-flagged (-9999) locations.

The source data does not look very stable…

Steven Mosher
Reply to  MrZ
May 4, 2019 7:49 am

“Steven,
Have you or somebody compared RAW values between GHCNM v1-v4?
I did this by calculating on unique lon/lat locations as read from inventory files (locations with two or more stations were skipped). I randomly selected January of 1980 where all sets has a good base of data.”

Hmm, that would not be the best way of proceeding, since the versions can all have different source decks.

For example, there is a change in the source data for v4.

Let's see if I can explain.

Take country X. In the past they could have submitted 2 separate files to NOAA as unadjusted data:

1. daily ( which is always raw)
2. Monthly unadjusted. who knows?

In GHCN v4 the change has been to source everything to daily IF you can.

This is the basic approach we use at Berkeley Earth.

the primary source is daily raw.

you only use monthly “raw” if there is no extant daily source

Why? Well, even though monthly files say “raw” you can't be sure. For example, some countries
may think 7 missing daily data points is fine, and compute the average.
Others may think 10 missing days is fine unless there are 5 days in a row, etc., etc.
When you go to daily data you can set a consistent rule.

So: always go back to the most primary source you can find: daily.
Use monthly data only if you have to.

I laud your effort to try to track the differences between versions, but in the end it really does not
get you anywhere. And the reason is that the main source files, the primary sources, continue
to improve.

A station in v1 may have had the WWR as the source of its monthly data, not always raw.
In version 2 that same station may have been upgraded to a better source: the home country's NWS monthly file.
In version 3 the NWS may have added missing records it found in its archive.
In version 4, the NWS could have submitted recently rescued daily data.

So: 4 different versions, 4 different sources, and the last source is arguably the best source,
since it is constructed from daily data.

In short, you can't understand the differences without doing a full provenance study.
Otherwise what you will show is already known: when you find better sources between
version changes, you will get version-to-version differences.

MrZ
Reply to  Steven Mosher
May 4, 2019 10:01 am

Thanks Steven!
I honestly don't think the local MET offices have a catalog of data to choose from for each location some 40 years ago. I do accept your explanation, but you have to agree it is very surprising that the local MET offices withheld the daily info instead of sharing it. And now in v4 they can provide a better-documented location. If it was 5-year-old data, yes, but almost 40!?
As an example, between v3 and v4 some 754 locations were excluded. Could very well be because of the reasons you mention, i.e. only monthly: “but we now do have a close-by location with daily measurements”. Shocking!

Question is how high are the error bars from location/data selection vs UHI? And combined?

Paul Stevens
May 4, 2019 4:27 am

Steve Mosher, a very useful addition to the conversation. Thank you for posting here.

It seems clear that there is a warm bias to existing measurements that has not been generally accounted for, albeit a smallish one. So the warming predicted by the models is slightly more incorrect than we already know it is.

Any effort to reduce the exaggeration of the predicted average global temperature change is to be welcomed.

Steven Mosher
Reply to  Paul Stevens
May 4, 2019 8:14 am

You are welcome.

I look at it as follows

There was an LIA

It's warmer now than in 1850.

The record shows about 1-1.2C of warming.

Some small fraction of it is due to UHI.

there are better arguments for skeptics to concentrate on.

Instead, you have crazies who say average temperatures don't exist,
you have nutjobs saying it's a hoax, fraudulent records,
blah blah, blah.

On climate, conservatives are speaking with a thousand different voices and not focusing on the best
arguments.

1sky1
Reply to  Steven Mosher
May 4, 2019 3:37 pm

The record shows about 1-1.2C of warming. Some small fraction of it is due to UHI.

This oft-repeated trope is grossly inconsistent with what rigorously VETTED century-long records show for the DISCREPANCY of deviations of CONUS mean yearly temperatures:

http://s1188.photobucket.com/user/skygram/media/Publication1.jpg.html?o=0

Ca. 0.7C/century of UHI is scarcely a “small fraction” of the nearly trendless non-urban aggregate average.

Spalding Craft
Reply to  Steven Mosher
May 4, 2019 3:57 pm

OK I give up. What are the best arguments?

Scott W Bennett
May 4, 2019 5:10 am

Local v micro is probably true as far as it goes, but UHI clearly has an effect well beyond the local scale, particularly at coastal locations. The sea breeze where I grew up is a strong onshore flow created by the land-sea heating differential; it can extend to 60 km and beyond out to sea, depending on the angle of the coastline. It is known that the mountains of the Great Dividing Range limit this circulation inland by various amounts on the east coast of Australia, while in Western Australia there are no topographical barriers and it is felt further inland and starts further out. Beyond the effect of synoptic winds, that the coastal urban sprawl has an effect on the strength, duration and extent of this “local” circulation pattern is well documented.

In the almost 50 years since my father built his house in undeveloped bush, the surroundings have become an urban sprawl completely filling the 20km gap between it and the next town along the coast. How then, is it possible to ignore the effect of UHI at the larger scale of its surroundings, be they rural or sea surface temperatures?

To be very clear, it seems completely arbitrary to make the distinction between UHI and rural when clearly there is a demonstrated continuum in-between that might very well, turn out to be impossible to separate out!

Scott W Bennett
Reply to  Scott W Bennett
May 4, 2019 6:13 am

Admins, admin, my post went to moderation and now appears multiple times. Please delete the botched post above, if possible.

richard
May 4, 2019 5:49 am

Urban sites are severely compromised-

“At the same time meteorological services have difficulty in taking urban
observations that are not severely compromised. This is because most developed sites
make it impossible to conform to the standard guidelines for site selection and
instrument exposure given in the Guide to Meteorological Instruments and Methods of
Observation (WMO 1996) [hereinafter referred to as the Guide] due to obstruction of
airflow and radiation exchange by buildings and trees, unnatural surface cover and
waste heat and water vapour from human activities”

“Microscale – every surface and object has its own microclimate on it and in its
immediate vicinity. Surface and air temperatures may vary by several degrees in
very short distances, even millimetres, and airflow can be greatly perturbed by even
small objects”

“Mesoscale – a city influences weather and climate at the scale of the whole city,
typically tens of kilometres in extent. A single station is not able to represent this
scale”

https://www.wmo.int/pages/prog/www/IMOP/publications/IOM-81/IOM-81-UrbanMetObs.pdf

Steven Mosher
Reply to  richard
May 4, 2019 6:19 am

“Urban sites are severely compromised-”

Actually, we can test your hypothesis

and quantify the alarmist adverb

richard
Reply to  Steven Mosher
May 4, 2019 7:09 am

The WMO have spoken out on this subject many times and use that accurate word carefully.

HankHenry
May 4, 2019 6:23 am

Someone should study the temperature effects of irrigation. I would guess how quickly a surface dries has more to do with UHI than thermal mass. My impression is that on a sunny day a dry sidewalk equals hot and a damp lawn equals cool.

JMurphy
May 4, 2019 6:48 am

Very good work from Steven Mosher, without the usual hyperbole seen on this site – no wonder it’s hard to swallow for so-called and self-called skeptics!
By the way, Steven, you must be an expert at Whac-a-mole, judging by your patient and constant replies….

Gator
Reply to  JMurphy
May 4, 2019 7:27 am

Yes, it is totally awesome having an English major here who knows everything, even climate! He has an answer for everything, and anyone who questions him is permanently wrong.

Great stuff! LOL

(Then you have no dispute with his essay in detail, then? Mocking the guest writer is a poor way to use your time, and it wastes my time having to watch you) SUNMOD

Ossqss
May 4, 2019 8:10 am

Interesting stuff.

One wonders what impact infrastructure has on overall temp data (IHI?). All those highways and byways across the globe? Here in SWFL, you can frequently watch T-storms grow rapidly when they cross I-75. Physics in action 😉

May 4, 2019 8:26 am

As a follower of BBC weather forecasts for over 50 years, initially because I worked outdoors and later out of habit, I think that on windless days UHI in UK cities is 4-5°C. This is all year round, for both max and min temperatures. What effect the wind and cloud have I don't know. It's long been my belief that a blanket adjustment for UHI is almost as bad as none, and it's doubtful that there is enough data available to make the calculations. All these papers are interesting and increase knowledge, but in terms of getting to a solution it's just p*ss**g into the wind, as my dad used to say.

May 4, 2019 8:26 am

Thank you Steven for a monumental effort and an update on my education on this topic. I’m guilty of not reading the linked papers but read many of your responses to critiques and queries and am content that the work done was conscientious as I expected it would be from you (perhaps I’m wrong that there is an old Steven and a new Steven!).

As a jaded sceptic, I usually don’t let my guard down in the climate scrum, but accepting your findings has more clearly defined the real issues I am not happy about in the larger picture. Besides, despite criticisms of satellite temperatures by most of the mainstream, I know this data constrains departures in temperatures at the present end of the record (possibly the reason the mainstream doesn’t like this eye in the sky), but lets a free-for-all take place on the pre-1979 series.

What was done to flatten down the late-1930s to early-40s temperature highs, to make the warming more concentrated on a hockey stick blade at the end of the century, I say without qualification is a felonious act. I was only a child in the 30s, but the topic of the scorching continental drought, in which records for temperature highs and heatwaves have never been surpassed, was the topic of conversations for twenty years or more during gatherings of family and friends.

The swoon in temperatures during the subsequent 35 years of the “ice age cometh” fears I remember clearly. As a newspaper boy, my mother came with me on frigid winter evenings when I collected for the paper in the early 50s, and I had degrees in engineering and science (geology) by the middle of this period and was current on the worry. The effect of the ‘flattening’ was also to erase this bitter-cold third of a century.

The third thing is the cavalier thrusting down of the early part of the record on no good evidence. It can be no coincidence that all these interventions suited and fitted the alarmist theory. They couldn't have virtually all of the post-industrial-revolution warming having occurred by 1940! They couldn't have a third of a century of cooling during the era of accelerating CO2 emissions, with warming to 1998 just crawling back out of the steep cooling period.

Finally, the algorithms that are still constantly changing the entire series of temperatures are totally unjustified. BEST's fix on all apparent shifts is too facile; the cooling, which was real, would itself have fallen victim to this procedure. I feel all these changes have robbed us of ever making real discoveries in climate for another century.

Patrick MJD
May 4, 2019 8:33 am

Microsite and bias is all I took from this massive post. It means the record is biased warm.

Pamela Gray
May 4, 2019 8:39 am

Nope. Doesn’t meet the criteria to measure the exacting anomalies we are told to take to the bank. Way too wide error bands which by the way we hardly ever get to see. Scrap the pig. Then you won’t need lipstick.

Why is it that climate temperature research gets this big wide door through “good enough” on uncontrolled and confounding variables while product research such as new medicines, new varieties of grass seed, or new types of cattle feed goes through the eye of a needle before they get to say good enough?

JMurphy
Reply to  Pamela Gray
May 4, 2019 9:05 am

“…product research such as new medicines, new varieties of grass seed, or new types of cattle feed goes through the eye of a needle before they get to say good enough?”

‘One-Third Of New Drugs Had Safety Problems After FDA Approval’
https://www.npr.org/sections/health-shots/2017/05/09/527575055/one-third-of-new-drugs-had-safety-problems-after-fda-approval

That’s one big eye there!

Pamela Gray
Reply to  JMurphy
May 5, 2019 10:19 am

That issue has to do with replication by an independent researcher. The research design requirements for new drugs are still way more stringent than climate research on temperature data.

Edwin
May 4, 2019 9:25 am

Fascinating discussion! So how many angels can dance on the head of a pin?
I found this site when I found Anthony’s weather stations review. I had been asked to give my opinion to several policy makers on what really was going on with AGW. I had spent most of my professional life dealing with all sorts of data, data collection methods, accuracy, precision, etc and how they affected models we were attempting to use. (I am not a climatologist, though spent 30 years of my career discussing climate so my continued interest here.)

It appears to me that Mosher's paper is little more than a defense of climate temperature trends, trying at least to mitigate the argument about UHI effects. All on equipment that, when deployed, was never intended to be used to feed computer models or predict climate a decade, a century or a millennium into the future. Weather stations were created to better predict weather, not climate. Yet we are told we don't have to worry so much about the siting of each station, just about the instruments used over time and how they were maintained.

Some have CAGW starting with the Industrial Revolution, ca. 1850. Does anyone imagine the same thermometers have been used since 1850? Since 1930? Does anyone imagine that every time a station's thermometer was replaced due to damage, or updated to a new style, the thermometers were standardized? My guess is that the accuracy of thermometers is no better than plus or minus a degree prior to WWII and maybe half a degree since then. Yet it makes international news if a model or data system predicts an increase in the average temperature for the Earth of 0.1 degree C.

Mosher and others tend to pass over the fact that for much of the Earth we have no weather stations, e.g., the oceans, the Sahara, Antarctica, etc. I spent several synoptic cruises collecting "bucket" temperatures while at the same time running hydrographic arrays. Temperature data from both entered some researcher's database.

If one plots urbanization against the Earth's average temperature, I will bet it is a pretty dang good correlation. In 1959 there were 3 million people in Florida; today over 20 million (double that with tourists). Land use went from farming, forest and wetlands to theme parks, hotels, lots of pavement, etc. In Central Florida alone we went from huge amounts of green space to huge amounts of buildings, concrete and asphalt. Shanghai had a population of around 7 million in 1985; today it is 34 million. Shanghai went from having a 1930s skyline in 1985 to a Tokyo skyline today.

Reply to  Edwin
May 4, 2019 10:03 am

Edwin, I'd actually like to see such a correlation! I bet it would be instructive.

Clyde Spencer
Reply to  Gary Pearse
May 6, 2019 9:57 am

Gary
See Figure 4 at https://wattsupwiththat.com/2016/02/26/analysis-of-the-relationship-between-land-air-temperatures-and-atmospheric-carbon-dioxide-concentrations/

There is a sufficiently high correlation between world population and urbanization that I think it is safe to consider population to be a proxy for urbanization.

richard
Reply to  Edwin
May 4, 2019 11:02 am

“Mosher and others tend to pass over that for much of the Earth we have no weather stations, e.g., oceans, Sahara, Antarctica, etc”

Yep, it’s pretty bad, mostly estimated in Africa, one fifth of the world’s land mass-

WMO: "Because the data with respect to in-situ surface air temperature across Africa is sparse, a one-year regional assessment for Africa could not be based on any of the three standard global surface air temperature data sets from NOAA-NCDC, NASA-GISS or HadCRUT4. Instead, the combination of the Global Historical Climatology Network and the Climate Anomaly Monitoring System (CAMS GHCN) by NOAA's Earth System Research Laboratory was used to estimate surface air temperature patterns."

Michael
May 4, 2019 9:35 am

SM, I also find this post very interesting, and am just an interested reader trying not to get lost in the math. You encouraged a previous poster to ask questions, so here is mine. You gave the following reply to Latitude:

“if fall is 0
if winter is 0
if spring is 0
if summer is 3C

then you can't argue that the annual average is 3C"

In such a scenario, how would the adjustment be done? Do you average the 3C over 4 seasons, warming 3 and cooling 1 resulting in something that doesn’t exist and has never existed, and then use it to adjust the temperature record for that station, or are the summer temperatures adjusted down the whole 3C first before the temperature record is used for whatever purpose? Or is something else done?

Greg
May 4, 2019 12:34 pm

Mosh’:

answer?

Nothing to see.

This is why Anthony's work is more important than you guys understand.

There are three spitballs to throw at the wall:

1) UHI ( which is also land use)
2) Land use ( for example, natural to irrigated agricultural)
3) Microsite: the effects in the first 500 meters

1) Looked at UHI: you don't find much, <10% of the century trend.
2) Looked at land use: same thing, you don't find much.

That leaves your best argument, which is Anthony's argument

Well, all the "nothings to see" add up, as do all the tweaks, "corrections" and blatant fiddling. That is the name of the game: a tenth of a degree here, a tenth of a degree there; 10% here, 10% there. No chance is lost to put a little thumb on the scales. Nothing too controversial individually, and easily dismissed as "nothing to see", but there is a whole industry of data vendors with a thumb on the scales.

Probably the only adjustment which does not warm the present or cool the past is HadSST's 0.5°C post-WWII drop. However, what they also do is remove most of the variability from most of the record: the early two-thirds of the data period. It is yet another attempt to make the data show acceleration at about the right time to fit the agenda.

Climategate clearly dismissed the quaint idea we pretty much all had about objective apolitical scientists working for the simple pleasure of discovery and science.

May 4, 2019 2:44 pm

Steve Mosher says: "What I am showing is that at the MESO scale, at the LCZ scale, the vast majority of sites are in 'unbuilt' areas, less than 10% built.
[…..]
at the micro scale?
unstudied except for Anthony's work…."

This is where the statisticians come into it.
Thousands of different instruments
Thousands of different conditions.

Yet minuscule error bars?

Reply to  markx
May 4, 2019 3:46 pm

Errors propagate?

Who knew?

Reply to  Robert Kernodle
May 4, 2019 4:50 pm

Who knew? Engineers designing steel girders and splice plates to connect them. Ignore the error propagation and you can wind up with a splice plate too short to span the distance between two girders on the short end of the error band. Doesn’t matter what the “average” length of the girder is. You have to design to the minimum and maximum values, i.e. the error band!

Steven Mosher
Reply to  markx
May 5, 2019 3:29 am

“Yet minuscule error bars?”

the error bars are actually tested out of sample.

take 43000 stations.

hold out 38000

build your average from 5000

construct the error bars.

Now test the 38000

Do they fall within the error bars?

yup

so not theory. Errors are propagated as part of the process, then tested against the numbers.
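A minimal sketch of the kind of hold-out test described above, using made-up numbers rather than the BEST pipeline: build an estimate and its error bars from a small subset of synthetic station anomalies, then check what fraction of the held-out stations fall inside those bars. The station counts, noise level and two-sigma band below are illustrative assumptions only.

```python
# Illustrative sketch only -- not the BEST pipeline. It mimics the hold-out test
# described above: estimate a regional mean anomaly and its error bars from a
# small subset of stations, then check how often held-out stations fall inside.
import numpy as np

rng = np.random.default_rng(0)

n_total, n_build = 43_000, 5_000
true_regional_anomaly = 0.8          # hypothetical "true" value, deg C
station_noise_sd = 0.5               # hypothetical station-level scatter

anomalies = true_regional_anomaly + rng.normal(0.0, station_noise_sd, n_total)

build = anomalies[:n_build]          # stations used to build the estimate
held_out = anomalies[n_build:]       # the remaining ~38,000 stations

estimate = build.mean()
spread = build.std(ddof=1)           # expected scatter of a single station
lo, hi = estimate - 2 * spread, estimate + 2 * spread

coverage = np.mean((held_out > lo) & (held_out < hi))
print(f"estimate = {estimate:.3f} +/- {2 * spread:.3f} C")
print(f"fraction of held-out stations inside the bars: {coverage:.1%}")
# With well-calibrated error bars this fraction should sit near the nominal level.
```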

Reply to  Steven Mosher
May 5, 2019 5:34 am

Steven: “the erorr bars are actually tested out of sample.”

This is a confusing statement.

You can’t eliminate error bars by comparing one average with another.

You can test to see if the error bars are correct, which is what your process seems to be doing. The problem is that the error span is never mentioned in any study. If the delta of the average is 1.5degC and the error span is +/- 0.5degC it makes a big difference in analyzing the delta!

Frank
Reply to  Tim Gorman
May 6, 2019 12:50 am

Tim: You can divide the data you are using into two parts at random. Do the analysis on half of your data, including error propagation. Then use the other half of the data to test whether your error propagation is correct.

For example, you could divide 1000 pairs of independent and dependent variables (x,y pairs) randomly into two sets of 500. Perform a linear regression with 500 pairs, which affords slope and intercept along with confidence intervals for each. Now do a linear regression with the second 500 pairs. Are the new slope and intercept found inside the confidence interval calculated for the first regression?

Now randomly divide your data into two different sets of 500 pairs 100 times. Are about 95% of the slopes and intercepts calculated from the second half of your data found inside the 95% confidence intervals calculated with the first half of your data?

Ordinary least-squares regression (and the derived confidence intervals) is based on certain assumptions about the noise in your data: it is random, the same size for both small and large x, and there is no error in the independent variable. If your data don't have these properties, incorrect confidence intervals might be exposed this way.

Wikipedia has articles on "resampling" methods that use part of your data to establish empirical confidence intervals for your analysis. I believe what Steve is describing is called cross-validation in Wikipedia (aka rotation estimation or out-of-sample testing).
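A minimal sketch of the split-half check Frank describes, on simulated data: fit a regression to one random half of the (x, y) pairs, ask whether the slope fitted on the other half lands inside the first half's 95% confidence interval, and repeat over many random splits. The sample size, true slope and noise level below are assumptions for illustration only.

```python
# Split-half check of regression confidence intervals, on synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 1000
x = rng.uniform(0, 10, n)
y = 0.5 * x + 2.0 + rng.normal(0, 1.0, n)   # true slope 0.5, intercept 2.0

trials, hits = 100, 0
for _ in range(trials):
    idx = rng.permutation(n)
    a, b = idx[: n // 2], idx[n // 2:]

    fit_a = stats.linregress(x[a], y[a])
    # 95% confidence interval for the slope from the first half
    t = stats.t.ppf(0.975, df=len(a) - 2)
    lo, hi = fit_a.slope - t * fit_a.stderr, fit_a.slope + t * fit_a.stderr

    fit_b = stats.linregress(x[b], y[b])
    hits += lo <= fit_b.slope <= hi

print(f"second-half slope fell inside the first half's 95% CI in {hits}/{trials} splits")
```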

Reply to  Frank
May 6, 2019 5:23 am

Frank: I understand everything you said. That’s why I said “You can test to see if the error bars are correct, which is what your process seems to be doing.”

The big problem comes in when you assume the (x,y) data points are perfectly accurate themselves. A confidence interval based on assumed accurate data points is *NOT* truly an error bar. It is basically a standard deviation of *assumed* accurate data points. Doing a linear regression on temperatures such as T +/-0.5 by only using T simply abandons the actual errors associated with the data.

Reply to  Frank
May 6, 2019 7:36 am

The "y" values you are using have a measurement error associated with them. Prior to at least 1950 those errors were +/-0.5 degrees. For any one station reading you cannot tell where in that +/-0.5 range the true temperature lies; every value within +/-0.5 of the reading is equally plausible.

Trying to use anomalies doesn't change this, although the mathematicians would like you to believe it does. Since you are subtracting a constant value from the temperature reading, the original measurement error still applies; in fact, the percent error simply increases. For example, if I subtract 15 degrees from 20 +/-0.5 degrees, I get 5 +/-0.5 degrees. Why? Because the max value is 20.5 - 15 = 5.5 and the min is 19.5 - 15 = 4.5, so the real value is 5 +/-0.5 degrees.

The other problem is averaging numbers with measurement errors. Another example illustrates this: average 50 +/-0.5 and 53 +/-0.5. The average can lie anywhere between 51 and 52, so you have 51.5 +/-0.5. You simply can't say that averaging numbers with measurement errors reduces the error. Measurement errors propagate throughout.

Mathematicians will tell you that the error is reduced by multiple measurements. What they fail to tell you is that this only applies when you measure the same thing multiple times with the same device. Then you can assume the measurement errors are random (Gaussian) and will tend toward a central value. Temperatures from different stations are never the same thing nor measured with the same device.
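For what it is worth, the two propagation assumptions being argued over here can be put side by side numerically. The sketch below reproduces the worked 50/53 example (worst-case interval arithmetic keeps the average at +/-0.5) and then simulates independent, unbiased +/-0.5 errors, for which the error of the average shrinks roughly as 1/sqrt(N). Which assumption describes real station readings is exactly the point in dispute; the numbers and the uniform error distribution below are illustrative only.

```python
# Two views of error propagation when averaging readings with +/-0.5 error.
import numpy as np

rng = np.random.default_rng(2)
readings = np.array([50.0, 53.0])   # the worked example above
half_width = 0.5

# Worst case: every reading could be off by the full +/-0.5 in the same direction
worst_low = (readings - half_width).mean()    # 51.0
worst_high = (readings + half_width).mean()   # 52.0
print(f"worst case: the average lies in [{worst_low}, {worst_high}], i.e. 51.5 +/- 0.5")

# Independent random errors: simulate many networks of N readings
for n in (2, 100, 1000):
    errors = rng.uniform(-half_width, half_width, size=(5000, n))
    spread = errors.mean(axis=1).std()
    print(f"N = {n:5d}: typical error of the average ~ +/-{spread:.3f}")
# The simulated spread falls off roughly as 0.29/sqrt(N), far below +/-0.5,
# but only if the individual errors really are independent and unbiased.
```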

richard
Reply to  Steven Mosher
May 5, 2019 6:59 am

The WMO want to install 5,000 temperature stations in Africa alone to get adequate coverage.

Frank
Reply to  richard
May 6, 2019 1:33 am

Richard: The number of stations needed depends on how accurate you want to be. The US Climate Reference Network (USCRN) has only about 135 stations in the continental US, but it is intended to produce a record superior to that from many times as many ordinary stations with inferior equipment and poor siting. The paper defined the accuracy goals for the network and then determined that those goals could be met with 135 reasonably evenly spaced stations. That network has been running for more than a decade. There is little benefit to adding more stations because temperature anomalies are highly correlated over several hundred kilometers.

https://journals.ametsoc.org/doi/pdf/10.1175/1520-0442(2004)017%3C2961%3AAMTDSD%3E2.0.CO%3B2
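As a rough back-of-envelope check on that station count (an illustration, not a figure taken from the paper): if anomalies are taken to be representative over roughly 250 km, dividing the area of the contiguous US into one station per 250 km x 250 km cell gives a number of the same order as the 135 USCRN sites.

```python
# Back-of-envelope sketch, assuming anomalies are representative over ~250 km.
conus_area_km2 = 8.1e6     # approximate land area of the contiguous US
spacing_km = 250           # assumed radius of representativeness for anomalies

stations_needed = conus_area_km2 / spacing_km**2
print(f"~{stations_needed:.0f} stations at one per {spacing_km} km x {spacing_km} km cell")
# ~130 stations -- the same order as the 135 USCRN sites. Absolute temperatures
# vary over much shorter distances, which is why microsite quality still matters.
```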

richard
Reply to  Frank
May 6, 2019 7:29 am

Frank,

When the whole of Africa, one fifth of the world's land mass, is being estimated, I am guessing you want a lot of stations.

The Met Office illustrates with their microclimate fact sheet why you would need them.

https://www.metoffice.gov.uk/binaries/content/assets/mohippo/pdf/n/9/fact_sheet_no._14.pdf

Even state-of-the-art temperature-measuring equipment in a controlled environment has trouble. Pico Technology: "Consider what you are trying to measure the temperature of. An example that seems simple at first is measuring room temperature to 1°C accuracy. The problem here is that room temperature is not one temperature but many.

Figure 1 shows sensors at three different heights recording the temperatures in one of Pico Technology's storerooms. The sensor readings differ by at least 1°C so clearly, no matter how accurate the individual sensors, we will never be able to measure room temperature to 1°C accuracy."

michael hart
May 4, 2019 3:27 pm

So things very close to a thermometer affect it much more than things a long way away.
Certainly seems plausible, but I don’t feel much wiser.

Steven Mosher
Reply to  michael hart
May 5, 2019 3:30 am

I dunno, I said it so some here have to disagree!

John Endicott
Reply to  Steven Mosher
May 5, 2019 10:29 am

Wrong!

u.k.(us)
May 4, 2019 6:17 pm

When the heavyweights start throwing the furniture around, it is best to wait for them to get tired before attempting reconciliation.

May 5, 2019 2:02 am

Steve

Thank you for the interesting post. You seem to have the patience of Job on this one, considering the number of replies to posts you have made. One thing that does puzzle me about this whole problem of contaminated temperature data is that if the contamination or bias from a given site is clear, then why on earth is it important to adjust or correct the data rather than simply discarding the site from the ensemble?

My main point is that surely the grossly, obviously biased sites should be discarded from the record. Which leads to the next issue. I recall a post that I made on RealClimate a long time ago, and Gavin's view was that the global trends could be adequately described by only 100 "pristine" sites. I am no statistician but that number seems reasonable, if maybe a little light; I'm sure there is a statistical rule governing the minimum number. But the question still remains: why is it necessary to keep the biased sites at all? Adjustment after adjustment is still lipstick on a pig.
Regards

Steven Mosher
Reply to  Terry
May 5, 2019 3:26 am

“Steve

“Thank you for the interesting post. You seem to have patience of Job on this one considering the number of replies to posts you have made. One thing that does puzzle me about this whole problem of contaminated temperature data is that if the contamination or bias from a given site is clear, then why on earth is it important to adjust or correct the data rather than simply discarding the site from the ensemble.”

I actually do BOTH.
A) reducing stations increases your spatial uncertainty
B) deciding that some stations are "bad" can lead to false categorization
C) we have evidence from CRN studies that the adjustments work

“My main point is that surely the grossly obviously biased sites should be discarded from the record. Which leads to the next issue. I recall a post that I made on Real Climate a long time ago, and Gavin’s view was that the global trends could be adequately described by only 100 “pristine” sites. I am no statistician but that number seems reasonable if maybe a little light. Im sure that there is a statistical rule governing the minimum number. But the question still remains, why is it necessary to keep the biased sites at all? Adjustment after adjustment is still lipstick on a pig.
Regards”

see above. yes, with 100 or so perfect stations you can.

What if I told you the answer from those hundred matched the answer from the "bad" ones?

Hint: bias cancels
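A toy illustration of the "bias cancels" claim, not an analysis of any real data: give each synthetic station the same underlying trend plus its own constant offset (microsite bias) and weather noise, then compare the network-mean trend from a small "pristine" network against a large "biased" one. All parameters below are invented; constant offsets with zero mean across the network leave the trend essentially untouched, whereas offsets that grow over time (e.g. increasing urbanisation) would not cancel, which is the case the post actually tries to test.

```python
# Toy simulation: constant per-station biases vs. the recovered network trend.
import numpy as np

rng = np.random.default_rng(3)
years = np.arange(1900, 2020)
true_trend = 0.01                                  # deg C per year, made up

def network_mean_trend(n_stations, bias_sd):
    biases = rng.normal(0, bias_sd, n_stations)    # constant per-station offsets
    temps = (true_trend * (years - years[0])       # shared climate signal
             + biases[:, None]                     # station-specific bias
             + rng.normal(0, 0.3, (n_stations, len(years))))  # weather noise
    return np.polyfit(years, temps.mean(axis=0), 1)[0]

print(f"100 'pristine' stations : {network_mean_trend(100, 0.0):.4f} C/yr")
print(f"5000 'biased' stations  : {network_mean_trend(5000, 1.0):.4f} C/yr")
# Both recover ~0.01 C/yr when the biases are constant in time.
```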

Reply to  Steven Mosher
May 5, 2019 3:36 am

If the biases cancel then that is more likely chance, or possibly the use of a large enough sample. To be pure and robust, surely it is better to discard those with known or suspected bias. It would certainly go a long way to removing a lot of suspicion regarding the validity of multiple adjustments.

Steven Mosher
Reply to  Terry
May 5, 2019 4:07 am

"If the biases cancel then that is more likely chance, or possibly the use of a large enough sample. To be pure and robust, surely it is better to discard those with known or suspected bias. It would certainly go a long way to removing a lot of suspicion regarding the validity of multiple adjustments."

Except you can't be SURE of suspected bias. So you test that, to be more robust.

I get the idea of removing suspicion. However, when I have done these studies with only the "best stations", folks will even question that:

is the data really raw? we want the paper copies!
etc., etc., etc.

no matter how you cut the data, it's warming.

Scott W Bennett
Reply to  Steven Mosher
May 5, 2019 5:29 am

“no matter how you cut the data its warming.- Steven Mosher”

That’s the equivalent of saying, no matter how you cut the data, the sea level trend is rising! It has been rising since records began but it is so insignificant that it’s not even detectable above noise and error, without great mental contortions and/or massive ideologically driven leaps of faith!

And there are countless examples – worldwide – of flat or cooling trends in the raw data that only show warming after adjustment. If you don’t think that is a problem, you are not really looking at the data!

richard
Reply to  Steven Mosher
May 5, 2019 6:56 am

Strange, that. Hmmm, what did Phil Jones say about removing the blip? It sure was cut!

Scott W Bennett
May 5, 2019 5:47 am

Why are all my comments going into moderation?

Admin, Admins, Anthony?

You might warn me first!

Let me know and I won’t waste your time or mine!

That would seem like a sensible administrative approach!

It is so cumbersome to comment here, since the “upgrade”. Now, you can’t edit typos, you can’t post images and the test page puts my comments into moderation – without there being an offending word – when that would seem the appropriate place to find out!!

I keep ending up with multiple posts or duplicated paragraphs that I didn’t intend to create.

No one can sensibly enter a conversation here, if every comment is disappeared without explanation, only to appear after an unspecified period and at a whim!!

richard
May 5, 2019 7:15 am

WMO: "Homogenization of climate data series and spatial interpolation of climate data play a growing role in meteorology and climatology. The data series are usually affected by inhomogeneities due to changes in the measurement conditions (relocations, instrumentation); therefore a direct analysis of the raw data series can lead to wrong conclusions about climate change."

What climate change? Prairie grass, a drought-resistant plant (illustrating the climate), grew across the US until it was ploughed up, so no change there, though I believe precipitation has increased. Same old, same old: drought-resistant plants growing across Africa. Climate change would be the Sahara desert turning tropical again. Then again, the planet and the deserts are greening.