An 'inconvenient result' – July 2012 not a record breaker according to data from the new NOAA/NCDC U.S. Climate Reference Network

I decided to do something myself that NOAA has so far refused to do: compute a CONUS average temperature for the United States from the new ‘state of the art’ United States Climate Reference Network (USCRN). After spending millions of dollars to deploy this new network from 2002 to 2008, NOAA still reports the U.S. national average temperature from the old one. As readers may recall, I have demonstrated that the old COOP/USHCN network used to monitor U.S. climate is a mishmash of urban, semi-urban, rural, airport, and non-airport stations, some of which are sited precariously in observers’ backyards, in parking lots, near air conditioner vents, on airport tarmac, and in urban heat islands. This is backed up by the 2011 GAO report spurred by my work.

Here is today’s press release from NOAA, “State of the Climate” for July 2012 where they say:

The average temperature for the contiguous U.S. during July was 77.6°F, 3.3°F above the 20th century average, marking the hottest July and the hottest month on record for the nation. The previous warmest July for the nation was July 1936 when the average U.S. temperature was 77.4°F. The warm July temperatures contributed to a record-warm first seven months of the year and the warmest 12-month period the nation has experienced since recordkeeping began in 1895.

OK, that average temperature for the contiguous U.S. during July is easy to replicate and calculate using NOAA’s USCRN network of stations, shown below:

Map of the 114 climate stations in the USCRN; note the even distribution.
In case you aren’t familiar with this network and why it exists, let me cite NOAA/NCDC’s reasoning for its creation. From the USCRN overview page:

The U.S. Climate Reference Network (USCRN) consists of 114 stations developed, deployed, managed, and maintained by the National Oceanic and Atmospheric Administration (NOAA) in the continental United States for the express purpose of detecting the national signal of climate change. The vision of the USCRN program is to maintain a sustainable high-quality climate observation network that 50 years from now can with the highest degree of confidence answer the question: How has the climate of the nation changed over the past 50 years? These stations were designed with climate science in mind. Three independent measurements of temperature and precipitation are made at each station, insuring continuity of record and maintenance of well-calibrated and highly accurate observations. The stations are placed in pristine environments expected to be free of development for many decades. Stations are monitored and maintained to high standards, and are calibrated on an annual basis. In addition to temperature and precipitation, these stations also measure solar radiation, surface skin temperature, and surface winds, and are being expanded to include triplicate measurements of soil moisture and soil temperature at five depths, as well as atmospheric relative humidity. Experimental stations have been located in Alaska since 2002 and Hawaii since 2005, providing network experience in polar and tropical regions. Deployment of a complete 29 station USCRN network into Alaska began in 2009. This project is managed by NOAA’s National Climatic Data Center and operated in partnership with NOAA’s Atmospheric Turbulence and Diffusion Division.

So clearly, USCRN is an official effort, sanctioned, endorsed, and accepted by NOAA, and is of the highest quality possible. Here is what a typical USCRN station looks like:

USCRN Station at the Stroud Water Research Center, Avondale, PA

A few other points about the USCRN:

  • Temperature is measured with triple-redundant, air-aspirated sensors (Platinum Resistance Thermometers) and averaged across all three sensors. The air-aspirated shield exposure system is the best available.
  • Temperature is measured continuously and logged every 5 minutes, ensuring a true capture of Tmax/Tmin.
  • All stations were sited per the Leroy 1999 siting specifications and are Class 1 or Class 2 stations by that standard (see section 2.2.1 of the USCRN handbook PDF).
  • The data goes through quality control to ensure an errant sensor hasn’t biased the values, but is otherwise unchanged.
  • No stations are near cities, and none has any local bias of any kind that I have observed in my visits to them.
  • Unlike with the COOP/USHCN network, where they fought me tooth and nail, NOAA provided station photographs up front to prove the “pristine” nature of the siting environment.
  • All data is transmitted digitally via satellite uplink direct from the station.

So this means that:

  1. There are no observer or transcription errors to correct.
  2. There is no time of observation bias, nor need for correction of it.
  3. There is no broad scale missing data, requiring filling in data from potentially bad surrounding stations. (FILNET)
  4. There is no need for bias adjustments for equipment type, since all equipment is identical.
  5. There is no need for urbanization adjustments, since all stations are rural and well sited.
  6. There are no regular sensor errors, thanks to air aspiration and triple-redundant lab-grade sensors; any error detected in one sensor is identified and handled using the other two, ensuring quality data.
  7. Due to the near perfect geospatial distribution of stations in the USA, there isn’t a need for gridding to get a national average temperature.
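The triple-redundancy described in point 6 can be sketched in code. This is an illustrative reconstruction only, not NOAA’s actual QC logic; the median-based check and the 0.3°C tolerance are assumptions for the example:

```python
def aggregate_triplet(readings, tolerance_c=0.3):
    """Combine three redundant PRT readings into one value.

    If a sensor disagrees with the median of the three by more than
    `tolerance_c`, it is flagged and excluded; otherwise all sensors
    are averaged.  Illustrative only -- the tolerance and the
    median-based check are assumptions, not NOAA's documented QC.
    """
    assert len(readings) == 3
    median = sorted(readings)[1]
    good = [r for r in readings if abs(r - median) <= tolerance_c]
    flagged = [r for r in readings if abs(r - median) > tolerance_c]
    return sum(good) / len(good), flagged

# All three sensors agree: simple average of the triplet.
value, flagged = aggregate_triplet([21.1, 21.2, 21.1])
# One errant sensor: the other two identify and exclude it.
value2, flagged2 = aggregate_triplet([21.1, 27.9, 21.2])
```

Averaging only the sensors that agree with the median means a single failed PRT cannot drag the reported temperature, which is the point of fielding three.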

Knowing this, I wondered why NOAA has never offered a CONUS monthly temperature from this new network. So, I decided that I’d calculate one myself.

The procedure for a CONUS monthly average temperature from USCRN:

  1. Download each station data set from here: USCRN Quality Controlled Datasets.
  2. Exclude stations that are part of the USHCN-M (modernized USHCN) or USRCRN-Lite networks, which are not part of the 114-station USCRN master set.
  3. Exclude stations that are not part of the CONUS (HI and AK).
  4. Load the July data for all 114 USCRN stations into an Excel spreadsheet, available here: CRN_CONUS_stations_July2012_V1.2
  5. Note stations with missing daily data in their monthly totals. There were three in July 2012: Elgin, AZ (4 missing days); Avondale, PA (5 missing days); and McClellanville, SC (7 missing days). Set their data aside to be dealt with separately.
  6. Do sums and calculate CONUS area averages from the Tmax, Tmin, Tavg and Tmean data provided for each station.
  7. Do a separate calculation to see how much difference the stations with missing/partial data make for the entire CONUS.
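The arithmetic in steps 5–7 can be sketched as follows. This is a minimal illustration with made-up station values, not the real July 2012 data; the MISSING sentinel and the station names used are assumptions for the example, and the real per-station values must first be parsed out of NOAA’s monthly files:

```python
MISSING = -9999.0  # assumed sentinel for an incomplete monthly value

def conus_simple_average(stations):
    """Simple (unweighted, ungridded) CONUS average from per-station
    monthly values, as described in steps 5-7.

    `stations` maps station name -> monthly value in deg F, with
    MISSING marking stations whose month is incomplete.  Returns the
    average over complete stations and the names set aside.
    """
    complete = {k: v for k, v in stations.items() if v != MISSING}
    set_aside = sorted(k for k, v in stations.items() if v == MISSING)
    avg = sum(complete.values()) / len(complete)
    return avg, set_aside

# Hypothetical values for illustration only:
stations = {
    "Stovepipe Wells CA": 102.3,
    "Montrose CO": 70.1,
    "Durham NH": 71.5,
    "Elgin AZ": MISSING,   # one of the three partial stations
}
avg, set_aside = conus_simple_average(stations)
```

Running the function once over the complete stations and once with the partial stations filled in (step 7) yields the two sets of figures being compared.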

Here are the results:

USA Monthly Mean for July 2012: 75.72°F (111 stations)

USA Monthly Average for July 2012: 75.51°F (111 stations)

USA Monthly Mean for July 2012: 75.74°F (114 stations, 3 with partial missing data; difference 0.02°F)

USA Monthly Average for July 2012: 75.55°F (114 stations, 3 with partial missing data; difference 0.04°F)

============================

Comparison to NOAA’s announcement today:

Using the old network, NOAA says the USA Average Temperature for July 2012 is: 77.6°F

Using the NOAA USCRN data, the USA Average Temperature for July 2012 is: 75.5°F

The new USCRN comes in 2.1°F cooler than the old, problematic network.

This puts July 2012, according to the best official climate monitoring network in the USA, at 1.9°F below the 77.4°F July 1936 USA average temperature cited in today’s NOAA press release: not a record by any measure. Dr. Roy Spencer suggested earlier today that he didn’t think it was a record either, saying:

So, all things considered (including unresolved issues about urban heat island effects and other large corrections made to the USHCN data), I would say July was unusually warm. But the long-term integrity of the USHCN dataset depends upon so many uncertain factors, I would say it’s a stretch to call July 2012 a “record”.

This result also strongly suggests that a well-sited network of stations, as the USCRN was designed from its inception to be, is free of the errors, biases, adjustments, siting issues, equipment issues, and UHI effects that plague the older COOP/USHCN network; that mishmash of problems is exactly what the new USCRN was designed to solve.

It suggests Watts et al. 2012 is on the right track in pointing out the temperature measurement differences between stations with and without such problems. I don’t claim my method is a perfect comparison to the older COOP/USHCN network. But my numbers come close, within the bounds of the positive temperature bias errors noted in Leroy 1999, and the more “pristine” USCRN network is cooler in absolute monthly temperatures (as would be expected), so my comparison isn’t unreasonable.

NOAA never mentions this new pristine USCRN network in any press releases on climate records or trends, nor do they calculate and display a CONUS value for it. Now we know why. The new “pristine” data it produces is just way too cool for them.

Look for a regular monthly feature using the USCRN data at WUWT. Perhaps NOAA will then be motivated to produce their own monthly CONUS Tavg values from this new network. They’ve had four years to do so since it was completed.

UPDATE: Some people asked about the difference between the mean and average temperature values. In the monthly data files from USCRN, there are these two values:

T_MONTHLY_MEAN

T_MONTHLY_AVG

http://www.ncdc.noaa.gov/crn/qcdatasets.html

The mean is the monthly (max+min)/2, and the average is the average of all the daily averages.
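A small worked example (made-up numbers) shows how the two can differ: T_MONTHLY_MEAN uses only each day’s max/min midpoint, while T_MONTHLY_AVG uses the daily means computed from the full 5-minute record, which capture the shape of the diurnal curve:

```python
# Each day: (daily max, daily min, mean of the day's 5-minute readings).
# Values are made up for illustration; the real files come from
# http://www.ncdc.noaa.gov/crn/qcdatasets.html
days = [
    (95.0, 65.0, 77.0),   # diurnal curve skewed cool of the midpoint
    (90.0, 60.0, 73.0),
    (100.0, 70.0, 87.0),  # skewed warm of the midpoint
]

# T_MONTHLY_MEAN: the monthly (max + min) / 2
t_monthly_mean = sum((tmax + tmin) / 2 for tmax, tmin, _ in days) / len(days)

# T_MONTHLY_AVG: the average of all the daily averages
t_monthly_avg = sum(davg for _, _, davg in days) / len(days)
```

With these numbers the mean comes out 80.0°F and the average 79.0°F; whenever temperatures linger near the daily low longer than near the high, the average runs below the mean.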

UPDATE2: I’ve just sent this letter to NCDC – to ncdc.info@ncdc.noaa.gov

Hello,

I apologize for not providing a proper name in the salutation, but none was given on the contact section of the referring web page.

I am attempting to replicate the CONUS  temperature average of 77.6 degrees Fahrenheit for July 2012, listed in the August 8th 2012, State of the Climate Report here: http://www.ncdc.noaa.gov/sotc/

Pursuant to that, would you please provide the following:

1. The data source of the surface temperature record used.

2. The list of stations used from that surface temperature record, including any exclusions and reasons for exclusions.

3. The method used to determine the CONUS average temperature, such as simple area average, gridded average, altitude corrections, bias corrections, etc. Essentially what I’m requesting is the method that can be used to replicate the resultant 77.6F CONUS average value.

4. A flowchart of the procedures in step 3 if available.

5. Any other information you deem relevant to the replication process.

Thank you sincerely for your consideration.

Best Regards,

Anthony Watts

===================================================

Below is the response I got when writing to the email address provided in the SOTC release; some email addresses are redacted to prevent spamming.

===================================================

—–Original Message—–
From: mailer-daemon@xxxx.xxxx.xxx
Date: Thursday, August 09, 2012 3:22 PM
To: awatts@xxxxxxx.xxx
Subject: Undeliverable: request for methods used in SOTC press release
Your message did not reach some or all of the intended recipients.
   Sent: Thu, 9 Aug 2012 15:22:43 -0700
   Subject: request for methods used in SOTC press release
The following recipient(s) could not be reached:
ncdc.info@ncdc.noaa.gov
   Error Type: SMTP
   Error Description: No mail servers appear to exists for the recipients address.
   Additional information: Please check that you have not misspelled the recipients email address.
hMailServer

===============================

UPDATE3: 8/10/2012. This may put the issue to rest about straight averaging -vs- some corrected method. From http://www.ncdc.noaa.gov/temp-and-precip/us-climate-divisions.php

It seems they are using TCDD (simple average) still. I’ve sent an email to verify…hopefully they get it.


Traditional Climate Divisional Database

Traditionally, climate division values have been computed by averaging the monthly values for all of the Cooperative Observer Network (COOP) stations in each division to produce divisional monthly temperature and precipitation averages/totals. This is valid for values computed from 1931 to the present. For the 1895-1930 period, statewide values were computed directly from stations within each state. Divisional values for this early period were computed using a regression technique against the statewide values (Guttman and Quayle, 1996). These values make up the traditional climate division database (TCDD).


Gridded Divisional Database

The GHCN-D 5km gridded divisional dataset (GrDD) is based on a similar station inventory as the TCDD; however, new methodologies are used to compute temperature, precipitation, and drought for United States climate divisions. These new methodologies include the transition to a grid-based calculation, the inclusion of many more stations from the pre-1930s, and the use of NCDC’s modern array of quality control algorithms. These are expected to improve the data coverage and the quality of the dataset, while maintaining the current product stream.

The GrDD is designed to address the following general issues inherent in the TCDD:

  1. For the TCDD, each divisional value from 1931-present is simply the arithmetic average of the station data within it, a computational practice that results in a bias when a division is spatially undersampled in a month (e.g., because some stations did not report) or is climatologically inhomogeneous in general (e.g., due to large variations in topography).
  2. For the TCDD, all divisional values before 1931 stem from state averages published by the U.S. Department of Agriculture (USDA) rather than from actual station observations, producing an artificial discontinuity in both the mean and variance for 1895-1930 (Guttman and Quayle, 1996).
  3. In the TCDD, many divisions experienced a systematic change in average station location and elevation during the 20th Century, resulting in spurious historical trends in some regions (Keim et al., 2003; Keim et al., 2005; Allard et al., 2009).
  4. Finally, none of the TCDD’s station-based temperature records contain adjustments for historical changes in observation time, station location, or temperature instrumentation, inhomogeneities which further bias temporal trends (Peterson et al., 1998).
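Point 1, the undersampling bias, is easy to demonstrate with a toy example (all values made up): if a division’s warm lowland grid cell has four stations and its cool mountain cell only one, the simple station average is pulled well above the area-weighted (gridded) average:

```python
# Two equal-area grid cells in a hypothetical climate division.
# The warm cell is oversampled (4 stations); the cool cell has 1.
warm_cell = [78.0, 79.0, 77.0, 78.0]   # deg F, made-up values
cool_cell = [62.0]

# TCDD-style: simple arithmetic average over all stations.
all_stations = warm_cell + cool_cell
simple_avg = sum(all_stations) / len(all_stations)

# GrDD-style: average each cell first, then weight cells equally.
cell_means = [sum(warm_cell) / len(warm_cell),
              sum(cool_cell) / len(cool_cell)]
gridded_avg = sum(cell_means) / len(cell_means)
```

Here the simple average lands at 74.8°F against a gridded 70.0°F, a 4.8°F warm bias purely from station placement; real gridding (and the climatologically aided interpolation the GrDD uses) is more elaborate, but this is the core of the issue.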

The GrDD’s initial (and more straightforward) improvement is to the underlying network, which now includes additional station records and contemporary bias adjustments (i.e., those used in the U.S. Historical Climatology Network version 2; Menne et al., 2009).

The second (and far more extensive) improvement is to the computational methodology, which now addresses topographic and network variability via climatologically aided interpolation (Willmott and Robeson, 1995). The outcome of these improvements is a new divisional dataset that maintains the strengths of its predecessor while providing more robust estimates of areal averages and long-term trends.

The NCDC’s Climate Monitoring Branch plans to transition from the TCDD to the more modern GrDD by 2013. While this transition will not disrupt the current product stream, some variances in temperature and precipitation values may be observed throughout the data record. For example, in general, climate divisions with extensive topography above the average station elevation will be reflected as having a cooler climatology. A preliminary assessment of the major impacts of this transition can be found in Fenimore et al., 2011.

Bill Marsh
August 9, 2012 3:28 am

So I’m an American taxpayer, and I’m sitting here reading that NOAA spent millions of my tax dollars four years ago to create this new network, and they aren’t using it? Sounds like the IG of NOAA should be investigating this as an example of waste of government funds (IGs are tasked with investigating fraud, waste, and abuse in their respective agencies). The IGs of other agencies will investigate things like this with a far lower dollar value.

Graham
August 9, 2012 3:42 am

Another question pertinent when comparing temperatures to 1936 would be how accurate 1936 thermometers were, where they were sited, and how their temperatures were recorded.
To my knowledge, getting 0.01°C/°F accuracy from a 1930s thermometer was impossible; another reason to dismiss the meaningless babble from the priests of pseudo-science.

Peter in MD
August 9, 2012 3:52 am

It would be interesting to see how many “old” stations are near enough to any of these new stations and compare what they each report for the same time period.

John Doe
August 9, 2012 3:55 am

Wonderful, Anthony. Take full advantage of it. At 2.1°F lower than COOP/USHCN, record lows ought to balance record highs, making yet another point.

JJB MKI
August 9, 2012 4:11 am

Stokes
Last time I looked, the GISS data set for England was constructed from a selected homogenised set of over 70 stations in the early 20th century, spanning both rural and urban locations, narrowing down to about a dozen stations, all located at busy airports in the present day, with the information presented as anomalies. No obvious reason given for the data cull btw, as the culled stations did not stop reporting. By your own logic, it would be fallacious to use GISS to claim warming over this period.
J Burns

August 9, 2012 4:17 am

Anthony:
I am a little confused here with your terminology. You say
USA Monthly Mean
USA Monthly Average
Mean and average are synonymous. If you are using them to express a conceptual distinction, could you please define the terms?
— Sinan

JJB MKI
August 9, 2012 4:17 am

Where have we seen this before? Government agency drunk on hubris and funding spend a large amount of money on a new project designed to empirically prove AGW. New project delivers results that fail to prove AGW. Government agency pretend project never existed, rush back to safety of computer models until the data can be statistically massaged. ARGO / Envisat anyone?

Fred
August 9, 2012 4:42 am

Love the idea of a USCRN temp like Roy Spencer does for the sat record.
I am a PhD, know how hard this is, and still do not trust the ever increasing temp offered by Hansen. This ignores the clear bias of the key temp players at NOAA and NASA.
Also, please start using both poles in the ice reports…

starzmom
August 9, 2012 4:50 am

In reply to Nick–no one is talking about anomalies except you. NOAA said July 2012 was the hottest ever at 77.6 degrees, and Anthony showed that a better network came in at 75.something (depending on how you calculate). Where is the anomaly in that?

Editor
August 9, 2012 4:54 am

So, are you going to redo all that for the previous Julys? My guess is given the widespread heat, this July is warmer than the others, but worth doing anyway.

Dodgy Geezer
August 9, 2012 5:00 am


“..You can bet your bottom dollar they looked at it. And if it showed a record you can bet your bottom dollar they would have announced it as coming from an untainted, pristine source. ..”
Since the USCRN network is ‘new’, then ALL data on it will probably be a record. I’m surprised that NOAA haven’t tried that one….

August 9, 2012 5:03 am

Nick Stokes said (August 8, 2012 at 11:57 pm):
“…This is comparing the absolute temperatures of two different networks. You don’t know whether, for example, the USCRN network has relatively more high altitude stations. And while you say that USCRN has “near perfect geospatial distribution” (measured?), the validity of the comparison will depend on whether the old network is comparably distributed.
That’s why climate scientists generally prefer to deal with anomalies. Otherwise you can’t compare across networks…”
This is why people suggested that the new USCRN stations should be compared to the closest COOP/USHCN stations over the same period of time (say, for example, four years).
That way, any biases in the old stations can be identified, making it easier to merge the records.
Anthony, since you appear to have data on both, how about a comparison of the new USCRN stations to the closest COOP/USHCN stations (including their Leroy 2010 ratings)?

Editor
August 9, 2012 5:05 am

Nick Stokes says:
August 8, 2012 at 11:57 pm

… the validity of the comparison will depend on whether the old network is comparably distributed.
That’s why climate scientists generally prefer to deal with anomalies. Otherwise you can’t compare across networks.

I thought people liked to use anomalies because they’re easier to compare between different months. Suppose the average April temperature is 50°F. Even within the same old USHCN network where you’d say the spatial distribution is unchanged or the satellite record, people talk about anomalies because that allows comparison over time.
The CRN record is too short to have a really good mean to compare against, so computing an anomaly would be problematic. Perhaps if we took monthly averages from CRN and subtracted the anomaly of the “compliant” USHCN stations, we’d have a less noisy record until CRN can stand on its own.

Editor
August 9, 2012 5:13 am

Bob Koss says:
August 9, 2012 at 12:00 am

There don’t seem to be any closely paired stations west of eastern Montana. Do you average the temperatures from the individual sets of paired stations?

The intent of the paired station is to test the hypothesis that two stations close together (a few miles for the Durham NH stations) will report similar data under similar conditions. I assume things like thunderstorms and sea breezes will produce mismatched data at Durham from time to time.
It would be appropriate to discard one station or average the pair.
There are also some stations that are mal-sited as experiments to see how they track better ones. After all, I sure can’t find a Leroy #1 site at my mountain property.
One is sited at Mauna Loa where the CO2 measurements are made. There may not be any others.

Steve Keohane
August 9, 2012 5:22 am

This is excellent Anthony. Thanks for letting us know. Sounds like this is just what is needed for a climate reference base.

Nick Stokes
August 9, 2012 5:24 am

Update. In my calc (2.48 am) of the average altitude of CRN stations, I hadn’t excluded AK and HI, and the form in which they were returned (from this page) included some duplicate values. Fixing this brought the average altitude to 2263 ft, or 690 m. That makes the altitude difference 178 m and the consequent difference (at lapse rate 6 °C/km) between USHCN and CRN to be 1.9°F.
REPLY: Maybe if your assumptions are correct, but you are just making guesses, see my note above. One of the things I’ve been doing the past few months is looking at the daily temps from CRN stations -vs- some USHCN and GHCN stations nearby. Since NOAA doesn’t tell us how they calculate the CONUS Tavg for their press release, we don’t know if they apply a lapse rate adjustment or not or whether they use moist adiabatic lapse rate or dry adiabatic lapse rate.
Once they publish their method, we’ll know if your approach has any merit. – Anthony
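Whatever one makes of applying a lapse-rate adjustment here, the arithmetic in the comment above is easy to check (the 6°C/km figure is the commenter’s assumed environmental lapse rate, not a documented NOAA adjustment):

```python
FT_TO_M = 0.3048

crn_alt_ft = 2263.0            # stated corrected CRN average altitude
crn_alt_m = crn_alt_ft * FT_TO_M       # about 690 m, as stated
altitude_diff_m = 178.0        # stated USHCN-vs-CRN altitude difference
lapse_rate_c_per_km = 6.0      # assumed environmental lapse rate

delta_c = lapse_rate_c_per_km * altitude_diff_m / 1000.0
delta_f = delta_c * 9.0 / 5.0  # about 1.9 deg F, matching the comment
```

The figures are internally consistent; whether a lapse-rate correction belongs in the comparison at all is the open question addressed in the reply.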

Tom in Florida
August 9, 2012 5:29 am

Stokes does make a valid point. Comparing raw data from different sets is not correct when looking for changes. If one is concerned about how much of a change has happened then anomalies are used because it really doesn’t matter what the actual data is, it is the change that matters. So the purpose of using the old temperature sets when looking for changes over time can be valid. However, in this case, the ability to accurately compare the temperature changes over time from the old COOP/USHCN network is totally compromised by siting issues, adjustments to data and the reliability of the people reporting the readings. Of course if the anomaly from the corrupted set tends to favor your point of view, you are more likely to use it and hope no one notices the problems with it. (et tu Nick?).
I believe it is better to use the new system and start over. We will know quite soon if there is a warming problem.

Kev-in-Uk
August 9, 2012 5:32 am

Gawd, I wonder if Nick Stokes has any idea how stupid he sounds (writes!)? It’s really simple Nick, there has been created a nice spanking new instrument set and subsequent data retrieval system which is all state of the art. Get lost with your anomaly usage rubbish – that’s just a way of hiding bad data via some assumed ‘self cancelling’ arrangement of errors (which is a bogus assumption to start with!) – the trouble is, this new system has NO errors (to speak of) and thus in real terms is the new reference ‘line’ – or at least darn well should be!
Anthony is perfectly correct and justified in asking the question of how a supposed perfect system does NOT actually confirm a ‘derived’ record temperature – and clearly shows said ‘derived’ data is obviously flawed …..
Excellent work Anthony – though I’d suggest that replying to Stokes was rather futile – horses to water and all that…..

Coach Springer
August 9, 2012 5:32 am

Thanks very much.
I’m with cearhill. Inhofe or someone should start an inquiry just to draw attention to the whole measurement issue. When a government agency rushes out with numbers they know (or purposely ignore) are contradicted, or at least seriously questioned, by better measurements, and compares them to measurements that have been adjusted downward contrary to the measurement bias from urbanization *and* they don’t report the contradictions with equal prominence, they are engaging in misleading propaganda. Seriously, if a company tried this type of non-disclosure in financial statements, they’d be hauled up by the SEC and investors would be suing for misrepresentation and winning. (Of course, the chair of the SEC is a warmist herself and would be disinclined to take action if in charge of policing disclosure by government scientists.)

pochas
August 9, 2012 5:37 am

Anthony deserves lots of credit for having the courage to be a critic of the waste and prevarication that comes from Big Government in charge of anything. I hope he will persist, because Big Government will resist shutting down that old useless network and saving the expense needed to maintain it. Big Government wants only to get bigger.

August 9, 2012 5:40 am

I, too, wondered what the difference between mean and average was. While I have a limited understanding of statistics, using averages and anomalies seems problematic. Averages, as another comment noted, are notoriously “average”, showing virtually nothing about the input data itself. Anomalies, I thought, were values that did not fit the “average” and may or may not have any significance. A pattern of anomalies, at some point, becomes just a pattern. This obsession with global averages seems like smoke and mirrors. It’s hot in the US and snowing in Norway. So what? Since we have these giant supercomputers, maybe we need to do data on many, many cities and towns, then compare the ups and downs of the various locations to other locations and look for trends upward and downward. It makes more sense, though I am sure there is some statistician who can explain why this is not being used.

Editor
August 9, 2012 5:41 am

George says:
August 8, 2012 at 11:59 pm

Look for a regular monthly feature using the USCRN data at WUWT.
If you do that, how much you want to bet they either cut off access to the data or begin to “adjust” it for some reason?

From the start of http://www.ncdc.noaa.gov/oa/about/open-access-climate-data-policy.pdf :

NOAA/National Climatic Data Center Open Access to Physical Climate Data Policy
December 2009
The basic tenet of physical climate data management at NOAA is full and open data access. All raw physical climate data available from NOAA’s various climate observing systems as well as the output data from state-of-the-science climate models are openly available in as timely a manner as possible. The timeliness of such data is dependent upon its receipt, coupled with the associated quality control procedures necessary to ensure that the data are valid. In addition, the latest versions of all derived data sets are made available to the public. NOAA also provides access to all of its major climate-related model simulations.

The NCDC lists “Examples of Potential Benefits” at http://www.ncdc.noaa.gov/crn/programoverview.html and refers to “Commercial sector” and ten US gov’t entities but leaves off “Citizen science.” If they do restrict access to us, it would be fun to stage a Frankenstein-like storming of NCDC HQ with pitchforks, torches, and a video production company.
Perhaps someone would like to take on the task of getting NCDC to include “Citizen science” in their list. A good candidate would be someone active in the US Variable Star Observing Program, which may be the purest example (after the NWS Coop observer program, modulo the term “high”) of citizens providing high quality data to a government research program.

Alan D McIntire
August 9, 2012 5:47 am

I’m confused by “monthly mean” and “monthly average” . “MEAN” IS the arithmetic average.
Do you mean “monthly median” = 1/2 (Hi + Lo) for monthly average,?

donald penman
August 9, 2012 5:52 am

I look forward to your paper on US stations being published, and also to the USCRN data being featured every month. We all know that you and others have put many years of work into this project and deserve a more even-handed review from the media than you have so far had. We have to hope that the truth is given a chance to be heard, and not just what some people would like to believe is going wrong with the Earth’s climate (AGW).

Colin Porter
August 9, 2012 5:53 am

Would it not be a good idea to take Google Earth screen shots of all USCRN station locations now, plus Google Earth Street View where available? Then in years to come, as urban development envelops some of these stations, but NOAA/NCDC claim that, these are quality sites not requiring UHI compensation adjustments, Anthony Watts Junior will be able to prove otherwise.
REPLY: They already have this in GE on the USCRN web page, so yes, you can get the pics now, but there are no streets near these stations, so Street View is out.
I suspect we won’t see much change, the leases are on places like national parks and nature reserves. – Anthony