By: Geoff Sherrington.
Scientist, Australia.
This article tests the hypothesis:
Measured historical factors can be used to distinguish between urban and pristine stations.
Australian weather stations and data were tested. Australia’s low population density plus its many weather stations allow many “pristine” station candidates to be examined for studies of Urban Heat Island (UHI) effects.
From an initial list of 1,000+ stations, those with adequate data were narrowed down subjectively to 45 pristine station candidates. Some of their general properties are tabled below.
https://www.geoffstuff.com/45 pristine candidate comparison stations.xlsx
If several stations in a set are truly pristine, they should have similar temperature trends over time. If they do not have similar time series trends, then some factor beyond natural variation must be affecting them; they cannot all be pristine.
Here is a graph of the trends of the 45 candidate stations.

https://www.geoffstuff.com/trens 45 pristine candisates.jpg
This is for maximum temperatures, Tmax, years 1910 to 2020. In the following table, the linear least squares fit is calculated and expressed in the equivalent of ⁰C per century for maximum (Tmax), minimum (Tmin) and average (Tav) temperatures.
| PRISTINE | Tmax trend 1910 to 2020 (⁰C/century) | Tmin trend 1910 to 2020 (⁰C/century) | Tav trend 1910 to 2020 (⁰C/century) | RMS error Tmax (⁰C) | RMS error Tmin (⁰C) |
| --- | --- | --- | --- | --- | --- |
| BALLADONIA | 0.2 | 0.6 | 0.4 | 6.6 | 4.99 |
| BENCUBBIN | 1.1 | 1.2 | 1.1 | 7.63 | 5.62 |
| BIDYADANGA | 2.0 | 1.9 | 1.9 | 3.52 | 5.08 |
| BRUNETTE DNS | 0.9 | 1.3 | 1.1 | 5.29 | 5.98 |
| CAPE BORDA | 2.4 | 1.1 | 1.7 | 4.64 | 3.21 |
| CAPE BRUNY | 1.6 | 1.4 | 1.5 | 4.05 | 2.96 |
| CAPE CLEVELAND | 0.1 | 2.9 | 1.5 | 3.37 | 3.15 |
| CAPE LEEUWIN | 0.9 | 0.7 | 0.8 | 3.37 | 2.89 |
| CAPE MORETON | 0.8 | 1.1 | 1.0 | 3.28 | 3.67 |
| CAPE OTWAY | 0.9 | 0.7 | 0.8 | 4.66 | 3.09 |
| COCOS ISLAND | 0.4 | 2.0 | 1.2 | 1.09 | 1.17 |
| EDDYSTONE PT | 1.4 | 1.1 | 1.2 | 3.59 | 3.49 |
| ELLISTON | 2.9 | 1.2 | 2.1 | 5.45 | 4.29 |
| EMU CREEK | 3.2 | 2.3 | 2.8 | 6.42 | 5.59 |
| EUCLA | 1.8 | 0.2 | 1.0 | 5.78 | 4.61 |
| FLINDERS IS | 1.7 | 1.6 | 1.6 | 4.35 | 4.39 |
| GABO IS | 1.4 | 0.3 | 0.9 | 3.48 | 3.49 |
| HUME RESERVOIR | 3.3 | 2.3 | 2.8 | 7.57 | 5.67 |
| JERRYS PLAINS | 3.1 | 1.0 | 2.1 | 6.21 | 5.91 |
| KYANCUTTA | 0.9 | 1.0 | 1.0 | 7.41 | 5.38 |
| LADY ELLIOT IS | 2.1 | 0.9 | 1.5 | 3.24 | 3.17 |
| LARRIMAH | -0.3 | 1.5 | 0.6 | 3.69 | 5.33 |
| LEARMONTH | 3.1 | 0.8 | 2.0 | 5.72 | 5.04 |
| LOCKHART R | 1.5 | 2.4 | 1.9 | 2.24 | 2.79 |
| LOW ISLES | 1.0 | 0.8 | 0.9 | 2.98 | 2.17 |
| MAATSUYKER IS | -0.1 | 1.3 | 0.6 | 3.79 | 2.68 |
| MACQUARIE IS | 1.0 | 0.9 | 1.0 | 1.99 | 2.53 |
| MANDORAH | 2.1 | 1.8 | 2.0 | 4.35 | 5.49 |
| MANGALORE | 2.6 | 0.6 | 1.6 | 7.28 | 5.33 |
| MARDIE | 0.8 | 3.1 | 1.9 | 5.01 | 5.47 |
| MARRAWAH | 1.5 | 1.5 | 1.5 | 3.72 | 3.07 |
| MARREE | 3.1 | 2.8 | 3.0 | 7.87 | 4.31 |
| MONTAGUE IS | 1.4 | 2.5 | 2.0 | 3.51 | 3.31 |
| NEPTUNE IS | 1.7 | 1.7 | 1.7 | 3.34 | 2.54 |
| OENPELLI NT | 0.4 | 1.7 | 1.0 | 2.52 | 3.16 |
| OMEO VIC | 0.5 | 1.6 | 1.1 | 6.78 | 5.02 |
| PALMERVILLE | 0.6 | 0.3 | 0.4 | 2.82 | 4.04 |
| POINT HICKS | 3.1 | 1.4 | 2.3 | 4.95 | 3.59 |
| RABBIT FLAT | 2.8 | 0.8 | 1.8 | 5.92 | 7.31 |
| TABULAM | 3.3 | 1.7 | 2.5 | 5.14 | 4.14 |
| TARALGA | 3.1 | 1.2 | 2.1 | 6.92 | 5.36 |
| VICTORIA RIVER | 0.8 | 0.4 | 0.6 | 4.16 | 6.21 |
| WARMUN | 0.4 | 0.5 | 0.5 | 4.39 | 5.52 |
| WARRUWI | 1.4 | 1.3 | 1.3 | 2.03 | 2.27 |
| WILLIS ISLAND | 0.4 | 1.0 | 0.7 | 1.99 | 1.77 |
| Simple average | 1.5 | 1.3 | 1.4 | 4.54 | 4.14 |
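The trend and RMS-error columns above are straightforward to reproduce with an ordinary least-squares fit. A minimal sketch in Python; the station series here is synthetic, for illustration only, not Bureau data:

```python
import numpy as np

def trend_per_century(years, temps):
    """Linear least-squares slope of temps vs years, in degC per century."""
    slope, _intercept = np.polyfit(years, temps, 1)  # slope in degC per year
    return slope * 100.0

def rms_error(years, temps):
    """Root-mean-square of residuals about the fitted line, in degC."""
    slope, intercept = np.polyfit(years, temps, 1)
    residuals = np.asarray(temps) - (slope * np.asarray(years) + intercept)
    return float(np.sqrt(np.mean(residuals ** 2)))

# Synthetic annual Tmax series: 1.5 degC/century warming plus scatter.
rng = np.random.default_rng(0)
years = np.arange(1910, 2021)
temps = 25.0 + 0.015 * (years - 1910) + rng.normal(0.0, 0.5, years.size)
print(round(trend_per_century(years, temps), 1), "degC/century")
```

Each tabled trend is just this slope applied to a station's Tmax, Tmin or Tav series; the RMS error is the scatter of the annual values about that fitted line.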
The linear trends for Tmax range from a high of 3.3 ⁰C per century at Hume Reservoir to a low of -0.3 ⁰C per century at Larrimah. That spread is far wider than the near-uniformity expected of truly pristine sites subject to similar natural variation.
Therefore, the hypothesis that measured historical factors can be used to distinguish between urban and pristine stations is shown to fail, because those factors cannot even distinguish one plausibly pristine station from another.
https://www.geoffstuff.com/trend table pristine.xlsx
For wriggle matchers, here is the same data without the linear line of best fit, but with some lightly smoothed character retained.

The wriggles do not easily fall into a recognisable, simple, or systematic pattern. One might infer that there is “noise” from sources such as different start and end dates for the stations, plus station shifts at various dates, instrument changes, changes after quality testing, and so on. Known system changes, such as from ⁰F to ⁰C in November 1972 and from liquid-in-glass to Pt resistance thermometry mostly in 1991 to 2001, have been studied for diverse stations. Their step changes, if any, are likely to be below 0.5 ⁰C, but the “noise” in these wriggles is an order of magnitude greater, as shown by the RMS error figures tabled above in ⁰C units.
Willis Eschenbach has calculated a figure that can be compared with the wriggle graph above. It shows the expected earth surface temperatures derived from incoming radiation measured by the CERES satellite system.

The trends from the Eschenbach graph using CERES data are about 1.5 ⁰C per century equivalent.
To the extent that the comparison is valid – and I know of no reason to doubt it – these satellite-based temperature estimates for random grid cells on the earth show time trends similar to each other, with matching wriggles, over the last 20+ years. Something happens to the relation between the temperatures from satellite measurements and the land surface thermometer estimates we are discussing. Part of that “something” could be UHI, but my stations were chosen to have minimal UHI. How small is minimal? At Palmerville station,
“The same observer performed observations for 63 years from 1936 to 1999, and was the area’s sole remaining resident during the later part of that period.”
http://www.bom.gov.au/climate/data/acorn-sat/stations/#/23000
https://www.www.geoffstuff/palmerville.jpg
It is possible to create “adjusted temperatures” for these land surface stations. Most of the lighthouse stations in this candidate set have been separately adjusted by the ACORN-SAT procedures published by the Bureau of Meteorology. (A future article looks at the prospects for further adjustments.)
Most past studies of Urban Heat Island effects start with this reasonable, logical, simple equation:
Tuhi = Turban – Tpristine
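Expressed as code, that equation is nothing more than an element-wise difference of paired series; the numbers below are made up for illustration:

```python
def t_uhi(t_urban, t_pristine):
    """UHI signal: urban minus pristine, observation by observation."""
    if len(t_urban) != len(t_pristine):
        raise ValueError("the two series must be paired, equal-length records")
    return [u - p for u, p in zip(t_urban, t_pristine)]

# Hypothetical paired annual means in degC; the urban site runs warmer.
urban = [22.1, 22.4, 22.3]
pristine = [21.5, 21.6, 21.4]
print([round(d, 1) for d in t_uhi(urban, pristine)])  # [0.6, 0.8, 0.9]
```

The sketch also makes the article's point concrete: the subtraction is only meaningful if Tpristine can actually be measured.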
That framework works only if Turban and Tpristine can be measured. This article starts to show that there are impediments to the useful definition of Tpristine by measurement rather than by assignment. The next article in this series compares these 45 pristine candidate stations with 37 urban stations, to seek systematic differences between the two groups.
Through these articles, you are invited to consider if these temperature sets from Australia are much different to those from other countries, with the point in common that they are not fit for the purpose of influencing very expensive government policies.
(END)
You could say that once there’s a weather station the area is no longer pristine.
But there are areas where there are fewer humans now than in the early 20th century.
In Argyllshire there were 78,477 persons in 1925; there are now 59,600. The population of Ross and Cromarty declined in the same period from about 68,500 to 57,568. The population of Sutherland fell from 16,568 to 13,313; that of Harris from 5,114 to 3,050; North Uist had 2,990 in 1925, knocked down to 1,850 and South Uist’s numbers fell from 6,899 to 3,650 in the same period. In that period, too, the population of Lewis dropped from 23,287 to 16,300. In other words, where there were about 191,900 people in 1925 there are now only 155,331—a loss of 36,569 persons in 40 years. And that at a time when the population of Scotland grew from 4,880,000 to 5,200,000, and when the number of live births in the country exceeded the number of deaths. In the Western Isles alone the decline has been from 38,000-odd to 26,000 in that same time—a population drain of 30 per cent.
Ben,
Before you write about population size, which I did not mention, you have to satisfy yourself that previous studies connecting UHI and population are adequately accurate. Part of my argument is that they are not. Geoff S
Even when populations are shrinking, roads, parking lots, buildings, etc. don’t disappear.
Sometimes they do, sometimes they don’t – sometimes they even increase. We use far more concrete, asphalt, etc., now than even a few decades ago, let alone back in the 1920’s. Home sizes are also typically far larger. For example, in the USA, the average home square footage has drastically increased:
All while the number of people living in the typical home has plummeted. In addition, the percentage with paved driveways, sidewalks, etc., has skyrocketed… all while roads have gone from mostly dirt or gravel to almost entirely paved, and so on. Then add in all the parking lots, public sidewalks, etc., etc….
Do you have the same situation as here, where houses get larger, but the blocks get smaller…
… until the house takes up basically all the block with no room for grass, or anything but, maybe, contained garden beds ?
A recipe for growing urban heat.
In general, cities that are losing population don’t have much in the way of new construction.
Dear Ben,
So what? Your comments are all theory and no data. In any event, it used to be understood that warm air rises. It does not spread sideways and thereby infect the local weather station. As their location is fixed relative to columns of rising air that cause light planes to bob up and down, there is little likelihood that a population change over there will affect temperatures measured somewhere else (Halls Creek in Western Australia is an exception – https://www.bomwatch.com.au/bureau-of-meterology/part-6-halls-creek-western-australia/).
The UHI thing was a myth invented by strapping a T-probe on the roof of a car and driving across town to locations where there was never a weather station!
Yours sincerely,
Dr Bill Johnston
http://www.bomwatch.com.au
And then the wind blows……Sideways….. From the hot asphalt toward the weather station.
Or doesn’t that happen anymore?
Nah,
Wind is not horizontal. Cities, highways and bare wheat-fields act as chimneys.
Haven’t you flown across the country or a city to an airport and felt the bumps?
Those bumps are where UHI and other convective forces end up.
Kind Regards,
Bill
Some of the north-westerlies during drought years felt like they came straight out of a blast furnace 🙁
There was no “urban” NW of our farm, and it must have been a good sized island as well.
Dear old cocky,
I have experienced those winds as well.
However, if we are discussing UHI, heat dissipation is via convection, and anyone who has flown across a city, or has seen a bushfire or stubble fire, or a tropical cumulonimbus tower resulting from an ocean warm-pool, can see that spot-sources of heat (as opposed to general synoptic effects) result in turbulence and convection.
Atmosphere is not a homogeneous medium. Look across a ploughed paddock, a bitumen road, or an airport runway on a hot day. The heat shimmer effect evidences convection. Drive down the highway and watch the difference in outside temperature when the surface changes from white/grey reflective concrete to heat-absorbing bitumen – it is convection. Look across the roof of a black car in a parking lot, then a white car – convection acts like a prism and distorts the passage of light. Convection transfers heat vertically, not horizontally.
The horizontal component is still active, but it is convection (hot air rising) that dissipates UHI.
All the best,
Bill Johnston
http://www.bomwatch.com.au
The heat dissipation in still air would be largely vertical, but wind will add a horizontal component.
Fires are probably a special case because of the amount of heat involved. We’ve had a couple of little scrub fires this summer, which were more drama than we would prefer. A scrub or grass fire with a stiff wind behind it doesn’t just dissipate heat vertically. Would that it did.
The CSIRO did some really good work on grass fires and bushfires donkey’s ages ago, as did somebody in WA. Unfortunately, I’ve forgotten many of the details. I think the work was discussed here around the time of the 2019 east coast bushfires.
wind is not horizontal ? so how do windmills work ? (don’t die on this hill Dr.)
Uh, how do sailboats on the open ocean work if the wind doesn’t have a horizontal component?
Dear Dark Lord and Tim,
I did not say wind does not have a horizontal component. What I was referring to was the convective component of UHI, and I’m sure you get that.
We also cannot generalise across weather station sites.
Many weather stations are affected by the combination of UHI and prevailing winds, but UHI is still dominantly convective. Bega NSW is a good example: they moved the Stevenson screen from east of the town (which is the direction of the cooling on-shore easterlies in summer) to west of the town, where the prevailing wind blows right across the urban centre, bringing with it a convective component of UHI. (They also replaced a 230-litre screen with a more sensitive 60-litre one.) However, because there was no overlap between sites and the screen size changed, the UHI component cannot be apportioned.
Temperatures measured in Stevenson screens are affected mainly by local convection, not UHI. Sites at Mildura airport, Launceston AP, Sydney airport and Tennant Creek (to name a few) were affected by installation of wind profiler arrays, which require about 0.4 ha of a gravel (or concrete) base. Halls Creek (WA) is an example of a UHI effect; however, they moved the screen to a stony, gravel-pit-like site, which blended the effect (see: https://www.bomwatch.com.au/bureau-of-meterology/part-6-halls-creek-western-australia/).
The site at Marble Bar (the hottest place in Australia) was stripped of what passed for topsoil in 2021. The screen now sits in the equivalent of a red-gravel depression (see Figure 14 in https://www.bomwatch.com.au/wp-content/uploads/2022/12/Marble-Bar-back-story-with-line-Nos.pdf). It is local convection from the red rocks that has caused the recent spate of ‘record’ temperatures, not UHI and not the climate.
Having some understanding of these factors on a site-by-site basis is important in understanding ‘trend’.
All the best,
Bill Johnston
Convection can be natural or forced. Natural is primarily from buoyancy and is upward. Forced would be from wind. Natural convection will cause heat to move up into the troposphere. Forced convection moves heat primarily in the horizontal plane. Both will affect a measuring station. Separating them out is impossible. And they can be either natural or man-made.
Heat from an air conditioner is usually ejected upward. But wind can carry it horizontally. The same applies for almost all man-made “things”. A window air conditioner is an exception.
And, yes, microclimate plays a huge role in what is being measured.
A long-winded way of agreeing that hot air rises regardless of source or ‘classification’. UHI does not spread out and envelop the landscape, which is the picture often portrayed.
All the best,
Bill
I take it your parents have never let you outside?
And I suppose the last time you went in a hot-air balloon you pedalled it down the street like a dinky-car.
Of course there is a horizontal (synoptic) component, but in the context of UHI, the convective component dominates.
b
In your world, there is no UHI and cities are the same temperature as the surrounding countryside?
Good on you MarkW. Response more typical of an angry child than a well-informed adult.
b
“The UHI thing was a myth invented by strapping a T-probe on the roof of a car and driving across town…”
So, doing a practical experiment that generates actual numeric results showing higher temperatures in the middle of a city is a myth?
Forrest M. Mims III was the earliest experimenter that I’m aware of, and the most recent was a gentleman named Anthony Watts. You may have heard of them…..
Dear Tombstone Gabby,
I mean no disrespect to anyone including Anthony Watts, but such experiments were also done in Sydney and Melbourne and other places in Australia. These days, you can do them by looking at the outside T-gauge in your car and getting someone to write the numbers down. Heck, you could even do an experiment …
The point of the post is about the effect of UHI on weather station datasets – which are fixed locations in the landscape. While you can run a transect to measure UHI at ground level on a particular day, that does not mean that on that day UHI is affecting the weather station at the local airport or the park in downtown Brisbane.
The other point I have made repeatedly is that hot air rises and UHI is dissipated by convection not laterally. Although wind may provide a horizontal component to the process, UHI is still dissipated by convection.
Why I said “The UHI thing was a myth invented by strapping a T-probe on the roof of a car and driving across town” is that applying some blanket adjustment to weather station data based on population is wrong in principle. Due to their exposure, most weather station datasets that I have examined, including Sydney Observatory, do not exhibit a UHI signal.
In Sydney’s case the weather station at Observatory Hill senses the airstream being drawn into the UHI chimney, not air being exhausted upwards by convection. Fly across the city even in a big jet, and you can feel the turbulent effects of UHI.
Melbourne is different, and for different reasons, and comparing historic aerial photographs, Melways street directories and other information, provides insights into how the differences arise.
All the best,
Dr Bill Johnston
http://www.bomwatch.com.au
G’Day Bill,
First: UHI is a myth.
Second: Fly over Sydney and experience the convection from UHI.
Third: “…including Sydney Observatory do not exhibit a UHI signal.” To see the urban heat effect city readings have to be compared to readings from distant ‘rural’ locations. Not easy in Sydney.
If an ‘allowance/correction’ is to be made, and a “blanket adjustment – based on population” is unacceptable, what do you propose?
Nice article Geoff.
I would be interested in a regression up to 1980 and 1980 onward to see if MMTS might have caused a difference
Jim ==> I agree — the difference will be small-ish but seen in both the Tmax and the Tavg record.
See mine here.
But when dealing with the thermometer record, over which so much hay is being made, a few tenths of a degree is NOT small.
Since that is the most the activists have to work with, that is what they have from which to make a case. However, the question of whether or not that small difference makes any difference in the real world is another matter. I suspect that it does not make any difference worth noticing. As has been pointed out many times, larger differences are seen in an hour’s time on any given day in many places.
I disagree Kip,
The instrument uncertainty of a single observation in Australia is +/- 0.3 degC (0.25 degC rounded up). The rounded uncertainty of comparing two values (the sum) is therefore +/- 0.5 degC.
I am unconvinced that ‘a few tenths of a degree’ over time is measurable.
Yours sincerely,
Bill Johnston
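The arithmetic behind Bill's figures can be sketched as follows; whether the two +/- 0.3 degC uncertainties are added linearly or combined in quadrature (the usual rule for independent errors) is an assumption of the sketch, not a claim about the Bureau's practice:

```python
import math

def difference_uncertainty(u1, u2, independent=True):
    """Uncertainty of (a - b), given the uncertainties of a and b."""
    if independent:
        return math.sqrt(u1 ** 2 + u2 ** 2)  # root-sum-square (quadrature)
    return u1 + u2                           # worst-case linear sum

u = 0.3  # nominal uncertainty of one observation, degC
print(round(difference_uncertainty(u, u), 2))                     # 0.42
print(round(difference_uncertainty(u, u, independent=False), 2))  # 0.6
```

Linear addition gives 0.6 and quadrature gives 0.42, bracketing the +/- 0.5 degC quoted above; either way the result is larger than "a few tenths of a degree".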
Thanks, Tim.
I have studied the MMTS transition separately, as have several colleagues here. It is hard to discern in routine daily temperature time series, mainly because there is next to no public data comparing LIG with MMTS. One analysis by Chris Gillham at waclimate.com detects statistical changes in the T distribution, with MMTS showing more hot days in the higher T percentile sector. See “Have Automatic Weather Stations Corrupted Australia’s Temperature Record?”
But this article is not about that matter. Geoff S
Geoff ==> Tmax trends may be the wrong metric to use. ASOS stations, using rapid response thermistor-based systems, can capture spurious but extremely short Tmax readings.
Geoff,
“If several stations in a set are truly pristine, they should have similar temperature trends over time.”
Why? Who said so?
Whose hypothesis are you testing?
Here is a map of trends over the last century. There is no obvious connection to pristineness. And there is variation over the ocean too. It’s just natural – variation is not due to site issues.
Nick,
What causes the locational variation that is claimed by the map that you show?
Are you inferring indirectly that UHI studies should be done only where the trend is lowest?
Geoff S
“What causes the locational variation that is claimed by the map that you show?”
I don’t know. But it isn’t UHI.
“””””But it isn’t UHI”””””
How do you know that?
In the Atlantic?
Show us where the measurement came from in the Atlantic for 1920. !
And have you ever heard of ocean cycles, like the AMO?
Ships
Located where. ?
“Ships” is a totally meaningless answer… as you are no doubt well aware.
Thanks for proving that you don’t have the vaguest clue where the data was actually located.
You have proven my point.
Nick,
“But it isn’t UHI”.
Is there not a danger in drawing conclusions from a set of numbers whose cause of variation you do not know?
Geoff S
Geoff,
When you were looking for gold or uranium or whatever, I’m sure you found variations with unknown cause. But you still mined.
I’m sure Nick has nothing sensible to say.
But he still says it.
Those are the ones to look at first 🙂
Straw man. We used sophisticated statistics to outline which blocks to process, which blocks to go to the waste dump and which blocks to go to the low-grade stockpile. Many blocks, each a few m x a few m x a few m. We helped advance the then-new method of geostatistics because of problems in the then-classical methods for mine design.
Different problems, different solutions. Geoff S
Nick is the master of logical fallacy.
Nothing logical about his fallacy…
… he doesn’t have that capability any more (even if he ever did)
Just plain WRONG or DISINGENUOUS.
Absence of evidence is not evidence of absence.
Nick has a propensity for using “made-up” non-data…
Always look at whether what he presents is actually “real”..
“variation is not due to site issues.”
Huh? Microclimate differences ARE a site issue. And even ocean measurements from Argo floats are subject to varying microclimates in the ocean.
If you want to classify microclimate as a “natural” impact then you have to do so for *ALL* measurement stations, be they land or water. That would mean that part of the so-called anomalies would have to be attributed to NATURE, making it pretty damn hard to separate out any man-caused changes in the anomalies from natural changes in the anomalies.
Unless, of course, you want to claim that the microclimate at a station never changes because of natural causes. Please, PLEASE, make that claim!
Agree Tim.
I reckon each place where there happens to be a weather recording station has its own unique climatic characteristics.
Drive ~60 miles poleward in either hemisphere and you’re in a different temperature zone, experienced all year round.
So averaging temperature recordings from instruments in vastly different local climate situations all around the world amounts to arrant nonsense.
The real problem is averaging disparate temperatures and ignoring the variance that is created by doing so with different temperature zones as you say.
Climate science ignores all variance because it would ruin the propaganda of using milliKelvin as a ruse for global warming.
“the point in common that they are not fit for the purpose of influencing very expensive government policies.”
”The real problem” is not measurement uncertainties, it is very expensive and ineffective government policies.
Can you give an example of all this variance Jim?
Cheers,
Bill
How about some anecdotal evidence? Temperatures here in the central US have varied from -10F to 68F so far, a range of 78F. When in summer has anyone seen the temperature vary from, say, 105F to 30F?
Since range has a direct impact on variance it’s pretty obvious that the winter variance here has been much larger than any summer variance that will be seen.
It’s doubtful that the central US is an outlier for this.
The *real* issue is that using only the median daily value for tracking temperature HIDES this. The actual seasonal variance gets lost since it is driven by the absolute temps and not by the median temp.
Yes.
Note that I change temps to Kelvin.
Look at the variance decrease as summer comes and increases as winter starts.
I don’t know why people in climate science always want to ignore variance and its impacts. Variance is a direct metric for uncertainty.
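Tim's winter-vs-summer point can be illustrated with made-up daily highs; the readings below are hypothetical, converted to kelvin as he suggests:

```python
import statistics

def f_to_kelvin(temp_f):
    """Convert degrees Fahrenheit to kelvin."""
    return (temp_f - 32.0) * 5.0 / 9.0 + 273.15

# Hypothetical daily highs (degF): a volatile winter spell vs a stable summer one.
winter_f = [-10, 5, 30, 55, 68, 20, -2, 40]
summer_f = [95, 98, 101, 97, 99, 102, 96, 100]

winter_var = statistics.variance([f_to_kelvin(t) for t in winter_f])
summer_var = statistics.variance([f_to_kelvin(t) for t in summer_f])
print(winter_var > summer_var)  # True: winter readings spread far more widely
```

Collapsing both seasons into a single anomaly series discards exactly this difference in spread.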
Are you two related?
Yes, we are related.
“The real problem is averaging disparate temperatures” – and then averaging them together. That’s a Bozo no-no. The temperature measured at one point is an intensive property of that point, and should NOT be averaged with temp measurements from other locations.
Averaging or comparing temperature and trends from a place with a range of say 60C, with those with a range of say 5C..
… is just darn awful non-science. !!
Climate scientists have apparently never heard the word “intensive” before. I’m not surprised based on much of their literature.
Natural microclimate changes .. vs human microclimate changes..
… a very blurry line at times. !
Not “at times”, at ALL times!
It doesn’t matter what substance is below a measuring station, it will change the station readings as Mother Nature changes it. Ice on concrete in the winter, rainwater on the concrete in summer, brown grass vs green grass, tall grass vs short grass, and on and on and …..
How do you separate those out from man-made impacts? Anomalies won’t do it since weather is *never* the same from year to year, it can easily cause milli-kelvin differences!
No Nick, that is not a map of trends over the last century.
MOST areas on the map do not have thermometers or temperature records that long.
It is a FABRICATION. !
“There is no obvious connection to pristineness.”
That is a manifest DUMB statement,
You have zero idea where anything “pristine” might be or might not be.
Come on Nick, show us where all these “ocean” measurements were made in 1920. !!
Nick, normal people compare apples with apples … but you turn up with an unripe orange !!
“Here is a map of trends over the last century“
It shows 0.0225 C/decade which sounds pretty accurate & convincing …
Except it’s a GIGO made-up piece of junk, intended to look sciency & factual … it’s NOT.
So in 1920, where were & how many accurate long-term stations in the – Amazon, Africa, Siberia, Greenland, China, Arctic … to compare with 2020 stations ??
If you can derive accurate 100yr global temperature trends from almost no data; then you should get a job advertising hemorrhoid cream or junk food (which probably has the same ingredients )
How were temperatures being measured 100 years ago? What type of probes? What types of methods being used.
As with most such claims, the idea that the whole earth could be measured to within 0.01C is so absurd that only a climate “scientist” could state it with a straight face.
Nick,
The hypothesis is mine.
Your global map seems to be based on numbers that you accept.
My article deals with numbers that I suggest must be questioned.
Are we seeing a difference between a mathematician and a scientist?
Geoff S
Yes, To a mathematician measurement uncertainty either doesn’t exist or is random, Gaussian, and it all cancels out.
To a scientist, measurement uncertainty is just a fact of life, it never cancels out and it always exists.
What are the implications for homogenisation if the regional coherence is poor?
I have no evidence, but a very easy way is to change a station’s temperature and put it into the database. The next time through, the changed temp is used to change another station. You get a diffusion effect. I don’t know if any steps in the process mark a change so it is not used in future homogenizations. If this isn’t done, one should really start over each time with raw data. People have shown that stations do have their data changed multiple times, which shouldn’t happen. It is an indicator that data is treated as an entirely flexible variable instead of a measurement.
I *really* don’t understand this focus on long-records when it comes to the GLOBAL temperature.
The temperature data base is a sample. That’s all it is. Whether that sample has 9,000 entries or 10,000 entries isn’t going to make much difference in the average.
Even for a single station and calculating an anomaly, just start a new record but don’t calculate an anomaly for it until it reaches a breakpoint, two years, five years, pick one!
The monthly anomaly for the measuring station in Holton, KS missing from the global anomaly data for any number of months simply will not affect the global average. If you put a new station there today then don’t include it in the data until Feb 1, 2025! It simply will not be missed!
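The anomaly arithmetic being discussed is simple enough to state in a few lines; the base period and values below are hypothetical, not actual Holton data:

```python
def monthly_anomaly(value, base_values):
    """Anomaly: a reading minus the mean of the same month over a base period."""
    if not base_values:
        raise ValueError("need at least one base-period value")
    baseline = sum(base_values) / len(base_values)
    return value - baseline

# Hypothetical January means (degC) over a short base period.
base = [12.0, 12.4, 11.8, 12.2, 12.1]
print(round(monthly_anomaly(12.9, base), 2))  # 0.8
```

Until a new station has accumulated enough base-period values to define its own baseline, this calculation is simply undefined for it, which is the waiting-period point made above.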
Could you show the equivalent map for the other half of the world, please?
That would be more relevant to Geoff’s article, and the Pacific is somewhat larger than the Atlantic.
Any Pacific map would be even more “just-made-up” than the first fabrication.
Very few if any reliable measurements in the Pacific before ARGO
And any land colouring-in would be based on urban, highly adjusted non-data.
That is just how Nick does things.
At least they didn’t give him bright red crayons. !
Data quality aside, the equivalent map centred on the south Pacific would provide a more direct comparison to Geoff’s article on Australian temperature records.
The WUWT Surface Stations project in the US conclusively showed years ago that rural does not equal pristine. You show that here for Australia using a different method.
I know for sure of only one truly pristine long record well maintained rural Aus station, Rutherglen Ag Research—might get that record for comparison purposes.
The famous Rutherglen station ?
With its associated land use change over the years in the Murray River area. Didn’t the bureau’s homogenization wipe that modern cooling data into a warming trend?
Rutherglen was not “truly pristine”. The early part of the record (from 1912 to about 1925) was combined with data from the post office 7 km or so away. The cooling trend was spurious, due to a station move, not irrigation, and taking rainfall and site changes into account, there was no trend due to CO2, coal mining, electricity generation or anything else (https://www.bomwatch.com.au/bureau-of-meteorology/rutherglen-victoria/).
Cheers,
Bill Johnston
We keep seeing these questions raised about UHI effect here and there; Australia, in this case. Seldom mentioned is the fact that there is also a statistically significant warming trend in the air column above Australia, as derived by satellite instruments over their period of measurement (+0.18C per decade – faster than the global rate over the same period).
The satellites report the average temperature of the lower troposphere from the near surface to several km into the air. UHI is not a factor; or at least what little there is would disappear into the noise. The same applies to satellite measurements over the oceans. No UHI effect is present at all there, but the observed warming in the lower troposphere above the oceans is still statistically significant.
So even if all the surface stations are contaminated by UHI, you still have to explain away the very similar and statistically significant global warming trend observed in the lower troposphere.
TFN,
Straw man. Nothing to do with what I wrote. Please read again and again until you comprehend.
Geoff S
“ or at least what little there is would disappear into the noise.”
Why would it disappear into the noise? What noise? Natural variation? How do you separate out natural variation from man-made variation usning a satellite record?
You’ve made a claim here with no justification that I can see. Are we just supposed to take your word for it?
“The same applies to satellite measurements over the oceans. No UHI effect is present at all there”
Huh? UHI doesn’t *have* to come from a man-made air conditioner. If long term weather patterns create a change in cloud cover over a part of the ocean it is very possible to see a change equivalent to UHI. Since the satellites do a poor job of evaluating cloud cover, over both the short term and the long term, how do you evaluate the impact?
He ought to clarify the real-world interpretation of ‘statistically significant global warming.’
0.18ºC/decade . . . roflmao
No-one would ever notice it, or measure it on their own thermometer.
Exactly. The climate in my location has stayed the same. People who say otherwise only do so because of some super anomalous cold month or year they experienced back when they were around 8 or 9. No one is going to pay attention to their everyday typical weather.
The only warming in UAH is from El Nino events.
No human effect discernible. (even fungal knows that)
So yes.. no UHI effect over the oceans and only a very slight effect over land, maybe… (UAH land has a somewhat higher trend than UAH Oceans)
Heat does rise, but urban areas are not a particularly large proportion of the land area, so you would expect a somewhat diluted effect.. 😉
Throwaway figure for the UK . . . 3 to 5% of land area is built on, so the overall urban effect will be small anyway and, being localised, even less
I suspect other regions would have considerably smaller percentage of “urban” land…
Places like Australia, Russia, etc…. percentage would be vanishingly small.
Yet urban data makes up nearly all of the so-called surface data…
… (especially after homogenisation)
Arctic towns experience their own form of UHI, even in winter. Buildings and heating systems release heat, making the urban environment noticeably warmer than surrounding colder areas.
And the dryness of the air, leads to a large temperature change.
No-one has any idea if or when the winds blow that energy towards the local thermometer site or not.
“making the urban environment noticeably warmer than surrounding colder areas.”
In the Arctic.. this is a very good thing ! 🙂
When I took thermodynamics these many, many moons ago, our instructor had worked on the design of pumping stations on the Alaskan oil pipeline. One of our problems was working out all the heat sources, from oil through the pipe, to pump heat, to motor heat, etc., and what length and size of wood supports would be needed to isolate the station from the ground so the permafrost wouldn't thaw and let the station sink into the ground. I remember many, many assumptions being needed for all the efficiencies and material factors. It was a hard project but also a learning experience.
Of course, the professor knew all the blind alleys and poor choices from having done the actual work.
UAH satellite data shows NO WARMING AT ALL for most of the time.
So yes, UAH doesn’t show much UHI effect.
The El Ninos give it a warming trend if you use a basic linear calculation that includes the effect of those step-change El Ninos.
You KNOW this, or are incapable of letting this fact into your tiny brain-washed skull….
.. so why keep using the slight warming in UAH as a baseless argument ??
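For what it's worth, the "step versus trend" distinction being argued here is easy to illustrate numerically. A minimal sketch using entirely synthetic data (not real UAH values) shows how a single step change produces a positive least-squares slope even when both segments are flat:

```python
import numpy as np

# Synthetic series (illustrative only, NOT real UAH data): two flat
# segments separated by a single 0.3 C step change at 1998.
years = np.arange(1980, 2024)
temps = np.where(years < 1998, 0.0, 0.3)

# A naive least-squares fit over the whole record reports a positive "trend"...
slope_per_decade = np.polyfit(years, temps, 1)[0] * 10

# ...even though the slope within each flat segment is zero.
pre = np.polyfit(years[years < 1998], temps[years < 1998], 1)[0]
post = np.polyfit(years[years >= 1998], temps[years >= 1998], 1)[0]
print(round(slope_per_decade, 3), round(pre, 6), round(post, 6))
```

Whether a step-plus-flat model or a steady trend better describes the real series is of course the substantive question; this only shows that the two are easy to conflate with a single linear fit.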
“UAH satellite data shows NO WARMING AT ALL for most of the time.“
Here is the plot:
Yes Nick
If you use your eyes, and engage your dementia-addled brain, you will see that..
.. there is basically NO WARMING from 1980-1997
Then the 1998 El Nino step
No warming from 2001-2015
Then the 2015/16 El Nino step
and COOLING from the 2016 El Nino until now.
Again: no warming 2001-2015, then none from 2016 to the beginning of 2023.
THE ONLY WARMING IS AT EL NINO EVENTS.
…red thumb, incapable of countering… so sad !.. so pathetic. !!
That means you KNOW and ADMIT that I am correct.
Thanks ! 🙂
Tell us fungal.. why was there no warming in UAH from 1980-1998?
And COOLING since 2016.
The main warming trend comes from a step change around 1998
Australia UAH since 1998 consists of short term jumps, with COOLING in between.
Your garbled nonsense and monkey-with-a-ruler gibberish is utterly destroyed by the actual data.
Do you really think satellite measurements have a way to ignore UHI?
I expect from your statement that you don’t believe air handlers on 50 – 100 story buildings add any heat to the troposphere nor can it be transported by winds.
And has probably never heard the term thermal updraft.
You know, those things that glider pilots hunt for by looking at the surface below.
If you want lift…. glide over a town or a ploughed field.!!
(only going from a recent chat with a glider/balloon pilot).
I used to fly on a small Cessna passenger plane from Topeka, KS to the Kansas City airport (no longer operating). I'll never forget the pilot trying to land at the Topeka airport with a runway suitable for B-52s. So much heat was coming off the surface, with so much lift, that he was having trouble getting the plane on the ground!
What size is this “column of air”? 5º grid, 2.5º grid? A 2.5º grid over Australia is about 67,000 sq km. Good enough for Government work?
This is the way scammers use data to fool ignorant souls.
Like most places on the planet, the warmest period in Australia is mid summer, with January and February competing for the highest annual average temperature. Both January and February have cooled from 1980 to 2023. So 42 years of Global Warming™ and Australia was cooler in the summer of 2023 compared with the summer of 1980, according to GHCN.
Most warming in Australia is occurring in August. The middle of winter.
It is to be expected that Australia’s temperature extremes will moderate as the range in peak solar intensity decreases. What is not to like about that? And the CO2 is greening the country making it all the more liveable.
According to both RSS and GHCN, Canada, Siberia, Russia and Europe were up to 9C warmer in January 2023 than January 1980. I guess that would have the locals complaining about lower energy costs. RSS has 80% of the Australian continent cooler in January 2023 than January 1980. My wife was not complaining about that but I do not mind a bit of heat to warm old bones.
UHI definitely exists despite what BEST claimed. Showed that easily several different ways in essay ‘When Data Isn’t’ in ebook Blowing Smoke. Problem is, you cannot say how much at a given station because it depends on many factors like prevailing wind direction. Was one of several basic reasons for concluding surface temperature records are simply not fit for climate purpose.
Ditto for climate models—for different reasons.
The ‘climate science’ is most definitely unsettled. The easy proof is 40 years of failed basic predictions of what was supposed to have happened by now. Sea level rise didn’t accelerate, Arctic summer sea ice didn’t disappear…
Rud,
My argument is not about whether UHI exists or not.
My argument is whether historical data can be used to measure it, accurately or at all.
Many past papers show credible evidence of UHI existing. Later, I will try to identify mistakes that future UHI authors might avoid.
The topic is not simple. Takes a few steps at a time. Patience, please. Geoff S
Good response Geoff… scientific discovery is a long process,.. but has to start somewhere. 🙂
“ many factors”
None of which do the climate models consider. It’s why infilling and homogenization does nothing but spread UHI around as well as measurement uncertainty.
We just have to paint all the dark rooves and solar panels white and the global warmening is fixed-
Climate change: Ditching dark roofs in Sydney will reduce temperatures, UNSW study finds (smh.com.au)
Why do people have to make these things so convoluted and difficult?
There is no doubt that the building practices of Western Sydney have created a MASSIVE UHI ghetto !!..
And at the base of the Blue Mountains escarpment, where air can stagnate ….
… not a good place to do that !!
White roofs. a tiny help, possibly !
More trees.. etc.. finding room is an issue in places.
Glad I don’t live there !
Geoff,
Have you considered station temperature reading differences that could arise from natural variations in prevailing wind directions and the variable heat (enthalpy) contained therein. I’m specifically referring to the bulk temperature of the moving, near-surface air masses coming from different directions and over different fetches . . . NOT the potential for air moving around the temperature sensor to create bias (any good enclosure/sensor shield would prevent this).
Please clarify if you made any corrections for wind variability (especially weather fronts moving by the “pristine” weather stations in your data set).
Variations in moving air mass heat content (temperature and humidity) would excite temperature “wiggles” of characteristic wavelengths from hours up to a year or more at any given station. Visually, one can almost discern a characteristic frequency of 12 months (an annual variability) in your graph of the wriggle fits for the 45 “pristine” station candidates.
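As an aside, whether a ~12-month wiggle dominates a monthly series can be checked with a simple spectral estimate. A minimal sketch with synthetic data (not the actual station records):

```python
import numpy as np

# Synthetic monthly series (illustrative, not real station data):
# an annual cycle of amplitude 2 C plus random noise.
rng = np.random.default_rng(0)
t = np.arange(30 * 12)  # 30 years of months
series = 2.0 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.5, t.size)

# Spectrum of the mean-removed series; the dominant peak should sit
# at a period of about 12 months.
spectrum = np.abs(np.fft.rfft(series - series.mean()))
freqs = np.fft.rfftfreq(t.size, d=1.0)  # cycles per month
peak_period_months = 1.0 / freqs[np.argmax(spectrum)]
print(round(peak_period_months, 1))
```

Applied to real station data, the same approach would show whether the annual component stands clear of the rest of the spectrum.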
TYS,
Yes, I have been trying to get a handle on this. I have ready for a later article an analysis of 8 “pristine” stations around Bass Strait between Tasmania and the mainland. They have wriggles in comparative unison. The aim is to show what causes the wriggles and why they differ from nearby Melbourne, which should be among the best-kept Australian stations. Geoff S
“NOT the potential for air moving around the temperature sensor to create bias (any good enclosure/sensor shield would prevent this).”
Not so. Microclimate includes the area in, under, and around the enclosure/sensor. Hubbard and Lin found in 2002 that microclimate has a big impact on temperature readings regardless of enclosure/sensor shield. That’s why they concluded that regional adjustments to temperatures were invalid, adjustments must be made on a station-by-station basis. The corollary to that is that it keeps you from adjusting past temperatures because you won’t know the microclimate variations at past times.
Tim, I referred specifically to enclosure/sensor shield and limited it to that. I have no doubt that the microclimate associated with a given station might affect its temperature reading of a more extensive air mass, although station siting criteria such as are imposed on USCRN stations are supposed to eliminate, or at least greatly minimize, microclimate effects.
I cannot comment further on how robust Australian stations are against microclimate effects.
In this regard, although really NOT considered to be a “microclimate” parameter, the topography (i.e., hills and valleys) adjacent to—say within a 1000 m radius of—a given monitoring station would certainly have some effect on prevailing wind direction and its effective fetch over ground.
Temperature is a multi-factor quantity.
temp = f(humidity, elevation, geography, terrain, pressure, wind, latitude, and others I can’t think of right now)
All of these can be considered “microclimate”.
Ocean impact on measuring stations can go far inland, for instance. The impact even depends on whether the prevailing winds are on-shore or off-shore.
Averaging temperatures doesn't consider these other factors. Infilling and homogenizing spreads the impact of microclimates all over creation!
These are from a study, https://doi.org/10.1002/met.1972
These indicate to me that mixing temps from clear sky locations with cloudy locations could introduce a warm bias. Similar to what is being found with ocean temps and fewer clouds.
Remember, it only takes a bias of about 0.01°C annually to see the “warming” that is being touted.
Well, in fairness, that is a paper (“study” results) that is specific to Chinese monitoring stations and, more specifically, to issues with temperature sensor “shields” or “screens” as implemented just in the Chinese meteorological stations. Reference is made to a (Finnish manufacturer Vaisala) “DTR503A shield” and to a (Vaisala) “DTR13 radiation shield”, but nothing is given in the paper to say how significantly these differ from USCRN-approved temperature sensor shields.
Despite the outrageously large error of 2.84 °C cited in the paper (referring to Hubart 2011) for a “new shield” (so what?), the key takeaway is actually provided by this excerpt from the paper’s Abstract:
“The root mean square error (RMSE) and the mean absolute error (MAE) between the temperature errors obtained experimentally using a sensor inside the DTR503A shield and the corresponding temperature errors determined by using the proposed correction method are 0.043 and 0.038°C, respectively. The RMSE and MAE for the DTR13 radiation shield are 0.049 and 0.044°C, respectively. This method may reduce the error of the temperature data to 0.05°C.”
(my bold emphasis added)
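For readers unfamiliar with the statistics being quoted, RMSE and MAE are computed as follows; the arrays here are hypothetical illustrations, not the paper's data:

```python
import numpy as np

# Hypothetical temperature-error arrays (NOT the paper's data),
# used only to show how the two quoted statistics are defined.
errors_measured = np.array([0.10, -0.05, 0.08, -0.02])
errors_modeled = np.array([0.07, -0.03, 0.05, -0.04])

diff = errors_measured - errors_modeled
rmse = float(np.sqrt(np.mean(diff ** 2)))  # root mean square error
mae = float(np.mean(np.abs(diff)))         # mean absolute error
print(round(rmse, 3), round(mae, 3))
```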
So, in reality the existing Finnish-produced Chinese shields, as deployed in their meteorological stations, seem to be inducing experimentally-established temperature measurement errors of only ±0.04 to ±0.05 °C . . . hardly worth mentioning in context.
And a careful reading of the last sentence of the above-quoted abstract excerpt, given the numerical values provided in the preceding sentences, seems to reveal a typographical error: The last sentence should have been “This method may reduce the error of the temperature data by 0.005°C.”
Only by using the correction method proposed in the paper.
Again only by using a correction method being proposed. Can you verify the correction method is being used anywhere on the globe?
Remember different screens can have vastly different errors in readings without correction. That does make the 2.84 error pertinent if proper corrections have not been applied.
Can you verify that corrections are being correctly applied?
I would also like to point out that the DTR13 errors are 0.049 and 0.044°C. Your bolded section actually shows the uncertainty increasing to 0.05°C.
The point I made was that varying screens can contribute substantially to the measurement uncertainty on a global basis.
Lastly, let me point out:
that the important part here IS NOT that it was a new shield, but that it was tested under CLEAR SKIES. Here is what I said:
1) It is YOU, not me, that surfaced the Chinese paper on a CFD-based correction algorithm to be applied to shields surrounding temperature sensors on Chinese meteorological stations. I have no responsibility to verify anything in that regard.
2) I was previously limiting my comments to the USCRN stations and the Australian meteorological stations as referenced in the above article. I don’t care squat what’s going on over in China and whether or not they are applying the model-based corrections to their climate data. However, I greatly doubt that either the US or Australia have applied the analytical corrections suggested by the Chinese, especially given that such are focused on Vaisala-provided sensor screens and the conclusion that such might make all of a 0.005 °C difference.
3) Sure, incompetence in engineering design can have adverse consequences. Are you seriously implying that the temperature sensor shields used in USCRN or Australian meteorological stations have such poor design that they can lead to measurement errors approaching 2.8 °C under clear skies?
4) I disagree completely . . . reference my reply #3.
ROTFL.
I didn’t “say” anything. I quoted from a peer reviewed paper. If you have a problem with their data or conclusions, I suggest you write the publisher to have it withdrawn.
Ummm . . . your words, not mine.
Well, that’s a bit troubling. Why subjectively? What specific criteria did you use?
Nick already pointed this out, but this is an arbitrary condition you’re imposing. There is no reason why pristine sites should show less variability in trends. Different places around the world, or even across small regions, respond uniquely. There are even places on earth with cooling trends.
If everywhere is warming due to CO2, what would it matter?
Warming is good!
The Earth is still in an ice age named the Quaternary Glaciation with 20 percent of the land frozen either as permafrost or underneath glaciers.
https://en.wikipedia.org/wiki/Quaternary_glaciation
Around 4.6 million people die each year from cold-related causes compared to 500,000 from heat-related causes.
https://www.thelancet.com/journals/lanplh/article/PIIS2542-5196(21)00081-4/fulltext
CO2 is causing warming, but there are other things influencing regional climates than just CO2. This is particularly acute when you are looking at single stations, which are also significantly influenced by quasi-random weather variability (the smaller the region you’re looking at, the more variable will be the random fluctuations).
The issue you are raising is endemic in amateur climate science. The “random” fluctuations are what determine the variance in a mean temperature calculation. It is NOT noise, because the fluctuations occur year after year. The variance generates doubt (uncertainty) in the value of any mean. That uncertainty SHOULD BE propagated through all following calculations but never is.
The result is what Dr. Pat Frank predicted, the uncertainty interval surpasses any prediction being made which means they are only guesses.
Considering that correlation does not necessarily equate to causation, can you present any objective, science-based data to support that assertion? Any at all???
P.S. “Objective, science-based data” excludes the output from models.
CO2 is a greenhouse gas, it absorbs outgoing planetary longwave radiation. Adding CO2 to the atmosphere therefore causes warming.
Oh malarky!
If you turn up the heat on a pot of water does the water ever get to 300F?
If the CO2 in the atmosphere absorbs more outgoing longwave IR and its temperature goes up then it will radiate MORE – by the 4th power of the temp. It will radiate at a higher rate than what it received! It may also thermalize to other gases but that heat will also be transported up and out!
Now, tell us that the rate of radiation isn’t based on T^4!
The part of the atmosphere that radiates to space will always tend toward an effective temperature of 255K, that’s dictated by simple laws of physics. Adding CO2 moves that effective radiating altitude up just a bit, so the part of the atmosphere that is radiating is now at a higher, colder layer. That layer now has to warm up to 255K.
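The ~255 K figure quoted here is a standard back-of-envelope result from Stefan-Boltzmann balance (absorbed solar equals emitted longwave), assuming round-number values for the solar constant and Bond albedo:

```python
# Stefan-Boltzmann balance: absorbed solar = emitted longwave.
# Assumed round numbers: solar constant 1361 W/m^2, Bond albedo 0.30.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

absorbed = 1361.0 * (1 - 0.30) / 4  # averaged over the sphere, W/m^2
t_eff = (absorbed / SIGMA) ** 0.25  # effective radiating temperature, K
print(round(t_eff, 1))  # close to 255 K
```

The result changes only modestly with plausible albedo choices, which is why 255 K is quoted so uniformly.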
You are just regurgitating agw fallacies, because you know nothing else.
There is absolutely zero evidence for your idiotic little CONjecture.
If the radiation height increases, the area radiating increases by the square so more radiation escapes.
Seems you don’t know the simple laws of physics.
Some is always more than none. So increasing absorption by some will increase the absorption in the higher layer of the atmosphere even if that layer is less opaque by virtue of its area than the layer below.
You have no evidence of ANY.
NONE is LESS than some.
Tell how far the radiating height would move for CO2 going from 300 to 400 ppm.
“Adding CO2 moves that effective radiating altitude up just a bit, so the part of the atmosphere that is radiating is now at a higher, colder layer. That layer now has to warm up to 255K.”
So what? You didn’t change the rate of radiation!
You did, of course, since cold things radiate less intensely than warm things. You’ve reduced the flux of outgoing radiation, and now earth is receiving more energy than it is emitting. Thus, global warming results.
That’s the best you got?
Ever heard of the physical phenomena called reaching an “asymptotic limit”?
Water can dissolve salt, up to an asymptotic solubility limit. Beyond that, water is incapable of dissolving any more salt.
In an analogous manner, CO2 is a “greenhouse gas” only up to a limited concentration . . . something that many people, including many so-called “climate scientists”, fail to recognize.
Looking at the observed temperature vs. atmospheric CO2 concentration dependence revealed by Antarctic ice cores (at less than 350 ppm), compared with the observed paleoclimatology proxy-based independence of global temperature from atmospheric CO2 concentration (the latter ranging up to 7000 ppm), per the attached graph, indicates that CO2’s ability to act as a “greenhouse gas” becomes asymptotically limited (i.e., “saturated”) at about 400 ppm, the level we just exceeded.
Wijngaarden and Happer, most notably, have published papers asserting that CO2 has effectively reached its asymptotic limits in inducing LWIR warming of Earth’s atmosphere.
IOW, the best scientific data says that adding CO2 to the current atmosphere will not cause any significant additional warming.
W&H do not argue this, they argue that each doubling of CO2 produces a warming of about 2.2 degrees C, not considering feedbacks. You probably should read their paper a bit more carefully.
Your graphic shows the results of a carbon model superimposed over a rough schematic of paleo-temperature that is not based on any kind of proxy data, just the interpretation of rock layers by a single individual.
“W&H do not argue this, . . .”
Really???
Verbatim extracts from abstract of Infrared Forcing by Greenhouse Gases, W. A. van Wijngaarden and W. Happer, 18 June 2019:
“For current atmospheric concentrations, the per-molecule forcings of the abundant greenhouse gases H2O and CO2 are suppressed by four orders of magnitude from optically-thin values because of saturation of the strong absorption bands and interference from other greenhouse gases . . . Doubling the current concentrations of CO2, N2O or CH4 only increases the forcings by a few per cent.”
(my bold emphasis added)
Read it yourself . . . copy available at https://co2coalition.org/wp-content/uploads/2022/03/Infrared-Forcing-by-Greenhouse-Gases-2019-Revised-3-7-2022.pdf
Now, you were saying something about reading papers a bit more carefully . . .
Given the above, it would be useless for me to attempt to correct your last paragraph.
They don’t provide estimates of climate sensitivity in the 2019 preprint you cite. They do provide them in their 2020 paper:
https://arxiv.org/pdf/2006.03098.pdf
In particular, see table 5:
Anyone who thinks W&H are their allies in perpetuating the egregious myth that CO2 cannot cause further warming is in for a sore disappointment.
I don’t think you’re in a position to evade based on the above.
Ahhh . . . this is getting to be old, but here goes nonetheless because it is so much fun:
If you had read the URL link for the W&H paper that I cited a bit more carefully you might have noticed that it indicates at the end “Revised 3-7-2022”. That date is approximately two years after the W&H2020 paper that you referenced above.
Moreover, either you did not read the abstract of the W&H2020 paper (your citation: https://arxiv.org/pdf/2006.03098.pdf ) or you intentionally chose not to mention its pertinent conclusions, obfuscating the discussion by presenting the cut-and-paste of Table 5. (BTW, all that table shows is that W&H were able to roughly reproduce Manabe et al.’s sensitivity for their stated assumptions, and they gave a new sensitivity for an assumed “pseudoadiabatic” lapse rate . . . it says nothing that discounts the W&H2020 abstract’s conclusion that CO2 is effectively “saturated” in its ability to provide additional atmospheric warming via increased concentration).
Here are the verbatim extracts from the abstract of W&H2020 that you did not mention:
“Over 1/3 million lines having strengths as low as 10^−27 cm of the HITRAN database were used to evaluate the dependence of the forcing on the gas concentrations . . . For current atmospheric concentrations, the per-molecule forcings of the abundant greenhouse gases H2O and CO2 are suppressed by four orders of magnitude. The forcings of the less abundant greenhouse gases, O3, N2O and CH4, are also suppressed, but much less so. For current concentrations, the per-molecule forcings are two to three orders of magnitude greater for O3, N2O and CH4, than those of H2O or CO2. Doubling the current concentrations of CO2, N2O or CH4 increases the forcings by a few per cent.”
(my bold emphasis added)
Sound familiar?
Also, this:
Well, based on the above-cited extract of the abstract for W&H2020, it certainly appears that you are indirectly accusing the HITRAN code and database of contributing to an “egregious myth”. I can only gently suggest that you pass your conclusion to the Center for Astrophysics, Harvard & Smithsonian, Cambridge MA, USA, that currently provides the software code and maintains the associated spectrographic database.
Oh, BTW, HITRAN is the worldwide standard for calculating or simulating atmospheric molecular transmission and radiance from the microwave through ultraviolet region of the spectrum.
Again, the climate sensitivity tells us how much warming will be produced by doubling the CO2 concentration. W&H estimate it in their 2020 paper, they do not estimate it in their 2019 paper or its revisions. Their estimate for sensitivity is about 2 degrees C. You correctly point out that this is almost identical to that derived by Manabe decades earlier, and this is not surprising, because W&H are doing nothing more than repeating the line by line radiative transfer calculations that Manabe performed in the 70s, they’re just doing it with an updated spectral lines database. Unsurprisingly, they find exactly the same result.
You can dither on about per-molecule forcing attenuation all you want, but I doubt you’re even able to define those terms, much less explain how they contribute to the sensitivity. The bottom line is that W&H determine that doubling CO2 produces 2+ degrees of warming, without considering feedbacks, and this is perfectly in line with the IPCC estimates.
W&H’s abstract says, “The change in surface temperature due to CO2 doubling is estimated taking into account radiative-convective equilibrium of the atmosphere as well as water feedback for the cases of fixed absolute and relative humidities as well as the effect of using a pseudoadiabatic lapse rate to model the troposphere temperature.” This is what is shown in table 5. This is the thing we care about when talking about how much warming increasing CO2 will produce.
Simply and completely false.
The reference (39) given in the first column of your cut-and-paste Table 5 is the paper The Effects of Doubling the CO2 Concentration on the Climate of a General Circulation Model, Manabe, S. and Wetherald, R.T., Journal of the Atmospheric Sciences, Vol. 32, No. 1, January 1975. An electronic copy of this paper is available at https://journals.ametsoc.org/view/journals/atsc/32/1/1520-0469_1975_032_0003_teodtc_2_0_co_2.xml?tab_body=pdf .
(The Table 5 note to reference (35) is to an earlier Manabe & Wetherald of 1967).
In that later paper, M&W1975, the authors clearly state (see abstract) they made their estimate for CO2 climate sensitivity “by the use of a simplified three-dimensional general circulation model”. Neither the abstract nor the body text of the paper make any reference to calculating radiative transfer using the absorption lines of CO2. Likewise, there is no mention anywhere of using the HITRAN code. One doesn’t even find a mention of spectral lines anywhere in that paper.
What M&W1975 does when referring to “radiation transfer” calculations in their model is to say they are basically using the same computational methods as were documented in M&W1967. And what do we find in M&W1967 in this regard? . . . the simplified, parametric approach to calculating radiative transfer based on assumed bulk temperature profiles versus height through the troposphere (see attached graph and title), with subsequent references to emissivities, absorptivities, optical thicknesses, T^4 and the Stefan-Boltzmann constant. In other words, M&W1967 as well as M&W1975 calculate radiative transfer through the atmosphere just using the Stefan-Boltzmann equation, without any regard to CO2 spectral absorption lines.
You can continue to just make things up as you wish, but please don’t bother me further with such postings as you’ll get no further response from me to such childish behavior.
I think that you’re dithering. W&H are repeating the radiative calculations first performed by Manabe in the 60s and 70s as part of the development of the first GCMs, and they get almost exactly the same result. The fact that W&H get the same result as everybody else, a result that has barely changed in 50+ years, is the meat of the argument. It’s the part you’re desperately trying to pivot the discussion away from.
“They argue that…”
not proof. conjecture.
and they base it purely on radiation, which is only a small part of energy movement in the atmosphere.
You have NOTHING.. except attempted and failed misinformation.
I care very little for your desire to disregard W&H’s paper, as I do not think it is an important contribution to the literature. If you want to dismiss them, be my guest.
I care little for your disregarding of all the major modes of energy transfer in the atmosphere.
Just proves you are an ignorant fool. !
Let me point out that your water-and-salt example is not an appropriate analogy as you stated it.
CO2 saturation refers to the fact that the current level of ~1500 μm radiation from the earth’s surface is already pretty much absorbed by the current concentration of CO2. Increasing the CO2 concentration will have less and less effect.
Would that be analogous to approaching an asymptotic limit?
I don’t know anybody all that concerned about 1500 μm radiation from Earth’s surface (that’s into the microwave part of the EM spectrum, far beyond the 7 to 15 micron region of LWIR surface radiation and CO2’s major absorption band).
And what would happen if we, say, doubled the radiated power at 15 μm leaving CO2 at its current level instead of, say, doubling the atmospheric CO2 concentration while leaving the power at 15 μm alone. Would there be a difference between the two conditions?
The amount of radiation being emitted is the issue. If all the radiation emitted is already being absorbed, which the radiation curves would lead you to believe, then adding more CO2 will not increase the amount the earth radiates, therefore less and less warming. That assumes CO2 is a major factor in any warming.
Of course, radiation near the ground is not even close to being fully absorbed:
But even if that were the case, radiation in the upper atmosphere will never reach saturation because, as stated above, all you do by adding more CO2 is move the effective emitting altitude higher, which can continue indefinitely.
It is true that forcing shows logarithmic growth, but we are not anywhere close to having reached any practical upper limit.
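The "logarithmic growth" of forcing mentioned here is usually quantified with the simplified expression of Myhre et al. (1998), ΔF = 5.35 ln(C/C0) W/m²; a quick sketch:

```python
import math

# Simplified CO2 radiative forcing expression (Myhre et al. 1998):
# dF = 5.35 * ln(C / C0) W/m^2 -- logarithmic, but not flat.
def co2_forcing(c_ppm, c0_ppm):
    return 5.35 * math.log(c_ppm / c0_ppm)

doubling = co2_forcing(560, 280)   # ~3.7 W/m^2 per doubling
increment = co2_forcing(520, 420)  # a further 100 ppm from ~today adds less
print(round(doubling, 2), round(increment, 2))
```

Each doubling adds the same forcing; each fixed ppm increment adds less than the one before, which is the "logarithmic" behaviour both sides of this exchange are invoking.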
Of course, no quantitative definition of “near the ground” is given.
Of course, no units for the y-axis (“Absorption Factor”) are given.
And hey! . . . and what’s up with the pink-coded region of wavelength having the word “saturated” in its identification? Ooops???
The absorption factor is the rate of exponential decay of a given wavelength as the CO2 concentration increases. The horizontal lines show how much wider the red band where CO2 absorption is saturated would become at a 4x CO2 concentration. You can see it more clearly here:
That is, if we quadrupled the amount of CO2 in the air, it would produce only a marginal increase in the saturation. And, again, that is only near the ground. High in the atmosphere, where emission to space is occurring, the air is not even close to saturation across the central part of the band.
To make it perfectly clear, the graph that AlanJ posted above on February 1, 2024 1:22 pm has its pink zone labeled as “Saturated at 1xCO2“.
Some who don’t read graphs carefully may miss that significant fact.
And it’s not known if the y-axis “Absorption Factor” is in units of 1/cm, 1/m, 1/km, or 1/inch, 1/ft, 1/mile, etc.
Finally, AlanJ claims the graph shows “radiation near the ground is not even close to being fully absorbed” . . . funny when considering the width of the pink zone and the logarithmic y-axis.
Go figure.
BTW, using the experimentally measured absorption coefficient (= 0.05/m) for CO2 centered at 15 microns wavelength at STP, 1% RH, and 400 ppm CO2 for a lower troposphere-representative optical path length of 10 km, 99.9% of that emitted radiation will be absorbed in less than 200 m vertical distance above the ground.
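That 200 m figure is easy to check with the Beer–Lambert law, taking the quoted 0.05/m band-centre coefficient at face value (it is an assumed input here, not independently verified):

```python
import math

k = 0.05  # assumed absorption coefficient near 15 um at 400 ppm CO2, in 1/m

def frac_absorbed(z_m):
    """Beer-Lambert fraction absorbed over a vertical path of z metres."""
    return 1.0 - math.exp(-k * z_m)

# Depth at which 99.9% is absorbed: solve exp(-k*z) = 0.001
z999 = math.log(1000.0) / k
print(round(z999, 1))  # -> 138.2 m, consistent with "less than 200 m"
```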
“CO2 is a greenhouse gas, it absorbs outgoing planetary longwave radiation. Adding CO2 to the atmosphere therefore causes warming.”
COMPLETE BS!
You have zero evidence that CO2 causes warming.
Any increased absorption is redirected through the atmospheric window.
Proven by real measurements.
There’s not much context here, but it looks like radiance is increasing everywhere except the bands where CO2 is a strong absorber. That is exactly consistent with what I said above. The planet is emitting more brightly because it is getting hotter, except the bands where CO2 is a strong absorber, because it is emitting from those bands at increasingly higher, colder layers of the atmosphere.
Poor mite, can’t understand a basic chart.
So sadly ignorant… and totally determined to remain that way.
NO EVIDENCE OF CO2 WARMING… and you know it. !
If you think the chart shows something other than what I’ve said, you need to articulate that. That’s how discussions work, you see. One person says something, the other responds, and so on.
Chart shows exactly what I said. You have not proven otherwise.
That is how discussion works.
You are in a land of DENIAL.
Huh? If CO2 is radiating less but the atmosphere is radiating more then how is CO2 causing the warming? It’s something *else* that is warming. Probably water vapor. At least as far as radiative heat loss is concerned.
Hot things radiate more intensely than cool things. The whole earth gets warmer, it radiates more intensely.
But CO2 is now emitting from a cooler layer of the atmosphere. Cool things emit less intensely than hot things. The effective radiating altitude for the wavelengths in which CO2 is a strong absorber moves up, radiation goes down in those wavelengths. It is this process that is producing the energy imbalance that is causing the whole atmosphere to warm up.
“CO2 is causing warming”
Totally scientifically unsupportable BS !
So AlanJ wants to use thermometers next to air-conditioners and in concrete carparks.
He wants the urban warming there… the exact opposite of what Geoff is looking at.
The poor little mite doesn’t seem to understand the concept of “pristine” very well. !
He has also just basically said that homogenisation is a form of FAKERY… on that we can all agree.
“He has also just basically said that homogenisation is a form of FAKERY… on that we can all agree.”
Hallelujah!
The hypocrisy here is massive. Climate science says it can infill and homogenize temperatures because they are dependent over distance.
Then they turn around and say that they can use the CLT to accurately locate the global average – when the CLT depends on independent values.
A second grader is typically smart enough to see the problem here.
AlanJ,
I started with over 1,500 stations ordered by data start date, then selected the longest 200 or so. From those I selected good candidates for pristine status based on my local knowledge; earlier work had allowed me to visit many of these sites. I checked each site on Google Earth Pro to confirm remoteness and deleted many with a lot of missing data. I was left with about 45 of the longest records, reduced to 37 after deleting those with remaining gaps.
To avoid being accused of cherry picking, I did not cherry pick. I did not know what the temperature/time trends would be as I calculated them after selection.
I reason that pristine sites should have similar trends because I could not find any literature suggesting causes or processes to make them different. It is a thought process vaguely similar to estimating a global average temperature, and it also has pros and cons. I also keep up with satellite-based estimates of UHI, which commonly assign uniform trends to large areas assumed to be pristine or rural. I am measuring and questioning that assumption with ground-based data.
What do you think causes this trend variation? Do you think it valid to compare to the smoother Eschenbach figure? Geoff S
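For anyone wanting to reproduce the trends tabled at the top of the article (expressed as °C per century), a minimal least-squares sketch looks like the following; the series here is synthetic stand-in data, not an actual station record:

```python
import numpy as np

def trend_per_century(years, temps):
    """Ordinary least-squares slope of temps vs years, in deg C per 100 years."""
    slope, _intercept = np.polyfit(years, temps, 1)
    return slope * 100.0

# Synthetic example: a 1910-2020 annual series warming at 1.1 C/century plus noise
rng = np.random.default_rng(2)
years = np.arange(1910, 2021)
temps = 25.0 + 0.011 * (years - 1910) + rng.normal(0, 0.5, years.size)
print(round(trend_per_century(years, temps), 2))  # recovers roughly 1.1
```

With real station data the same function applies once missing years are handled (e.g. dropped from both arrays before the fit).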
Thanks, so your selection process does indeed seem to be pretty subjective.
I’m not accusing you of cherry picking, just suggesting that your process may not be a particularly robust way of identifying the most remote/pristine stations.
In case it isn’t obvious: UAH shows more uniformity because you are looking at a large slice of the troposphere across a broad region, not a series of point-level measurements. There is abundant literature describing regional climate variability, I am surprised you managed to avoid it.
“There is abundant literature describing regional climate variability, I am surprised you managed to avoid it.”
Then why does climate science refuse to recognize this variation and give the variance associated with their global average?
“abundant literature describing regional climate variability”
AlanJ just destroyed the “homogenisation” scam.
Well done, AJ 🙂
You forgot, climate change causes drought/floods, warming/cooling, etc.!😭
Homogenization does not rely on regional climates being uniform.
Poor Anal. doesn’t even understand the word “homogenise”. !!!
Why display such base-level ignorance !?
What agenda do you have ???
No one said it requires regional climates being uniform!
What it requires is a belief that temperature in one place is dependent on the temperature in another place.
Dependency in sample values is a problem for the CLT, meaning the average of the sample has uncertainty.
Climate science is full of hypocrisy: 1. temperatures are dependent so homogenization and infilling is proper, and 2. temperatures are independent so the CLT applies.
Pick one and stick with it climate science!
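The statistical point about correlated samples can be illustrated directly. The sketch below uses hypothetical equicorrelated Gaussian data (correlation 0.5, chosen for illustration only) to show how far the variance of the sample mean departs from the naive σ²/n that independence would give:

```python
import numpy as np

rng = np.random.default_rng(0)
n, rho, trials = 100, 0.5, 20000

# Equicorrelated unit-variance samples via a shared common factor:
#   x_i = sqrt(rho)*z_common + sqrt(1-rho)*z_i  =>  corr(x_i, x_j) = rho
z_common = rng.normal(size=(trials, 1))
z_indiv = rng.normal(size=(trials, n))
x = np.sqrt(rho) * z_common + np.sqrt(1 - rho) * z_indiv

means = x.mean(axis=1)
var_indep = 1.0 / n                  # what independence (naive CLT) would predict
var_corr = (1 + (n - 1) * rho) / n   # exact result for equicorrelated data
print(round(means.var(), 3), var_corr, var_indep)  # ~0.505 vs 0.505 vs 0.01
```

With ρ = 0.5, the variance of the mean is about fifty times larger than the independent-sample formula suggests, and it does not shrink toward zero as n grows.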
It doesn’t require that, it requires that temperature change is correlated across regions, which is unquestionably true, and has been unequivocally demonstrated in the peer reviewed literature.
No, ANOMALIES have been shown to be supposedly correlated across distance. Believe that if you want. But, temperatures, no way.
One of the major problems I have with anomalies is that they should be added as relative values compared to a constant base, say 14C.
If you substitute the temp from Station A into Station B because you think they should be correlated then they are *NOT* independent. Independence is based on one thing not being influenced by another. That means you won’t *KNOW* what the temp is at Station B because it is not influenced by the temp at Station A.
You are your own worst enemy.
AlanJ,
Your suggestions for a better way to select stations would be welcomed, but you seem more interested in pushing your private agenda than constructive improvement of what is fairly neutral research into how to measure UHI.
Geoff S
“There are even places on earth with cooling trends.”
Yeah, like the United States, for example. The United States has been in a temperature downtrend since the 1930’s. No CO2 warming around here.
[citation needed]
The U.S. regional surface temperature chart (Hansen 1999):
Hansen said that in the United States the decade of the 1930s was the hottest decade, that 1934 had the hottest temperature, and that 1934 was 0.5C warmer than 1998, which makes 1934 warmer than any year subsequent to 1998.
Therefore, the United States has been in a temperature downtrend since the 1930’s.
Your graphic seems to end in the year 1999; we currently live in the year 2024, if you happened to miss that update. Let’s bring that series up to date with the last quarter-century of data:
FAKED AND MANIPULATED URBAN DATA ALERT !!
“Your graphic seems to end in the year 1999”
Yes, that’s why it is titled “Hansen 1999”.
The reason for using Hansen 1999 is because Hansen decided to bastardize the U.S. temperature record after the cooling began in 1999, so 1999 is the last somewhat honest chart Hansen made. In about 2007, Hansen declared that 1934 was no longer warmer than 1998.
But that’s not a problem. Just combine Hansen 1999 and the UAH satellite chart. Both charts have the significant year 1998 on them.
Hansen says 1998 is 0.5C cooler than 1934, and going by the UAH chart, that would make 2016 0.4C cooler than 1934, and Hunga Tonga 0.2C cooler than 1934. So yeah, the United States has been in a temperature downtrend since the 1930’s.
You don’t expect me to take that bastardized Hockey Stick of yours seriously, do you? You wouldn’t have anything to talk about if it weren’t for that bastardized chart. Its “hotter and hotter” temperature profile obviously does not correlate with the U.S. regional chart’s benign profile. Why is that? Answer: Because the Hockey Stick chart is a fraud perpetrated by Temperature Data Mannipulators for political/selfish purposes. And you keep throwing it out there as though it were legitimate.
The UAH Satellite chart:
1934 would be right at the top of this chart, almost off the page, if it went back to 1934.
Well, here’s the massively corrupt and untrustworthy post-1999 James Hansen GISTEMP US data compared to the beautiful and perfect UAH US satellite data:
So if you’re a UAH fan, sorry to say, it looks like the US is indeed warming.
Notice how GISS amplifies all the warm points, and has a steeper trend.
Lies on top of lies.
Why continue with them ???
You aren’t fooling anyone except your fellow AGW comrades…
… who, like you are exceptionally easy to fool with anything that reinforces their brain-washing.
So what?
Yes, the U.S. has been warming. It just hasn’t gotten as warm as it was in 1934. So, the U.S. is still in a temperature downtrend.
GISTEMP says it has, and UAH agrees with GISTEMP. So you’re stuck between a rock and a hard place. Either you now disavow UAH and insist that we have no idea what temperature is doing in the US (maybe it’s even warmer than UAH or GISTEMP say!), or you admit the obvious, that the US is warming.
“Different places around the world, or even across small regions, respond uniquely.”
You are your own worst enemy!
If different places respond uniquely then the variances of their temperature excursions vary uniquely also. When adding random variables with differing variances, how do you add the variables to account for this?
When is climate science going to start giving ALL the statistical descriptors necessary to describe a distribution? If you assume a Gaussian distribution then at the very least the mean AND the variance is needed to describe the distribution!
What *is* the variance of the temperatures in Rio de Janeiro in December vs Lincoln, NE in December?
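For independent random variables, variances add, so the average of two stations with unequal variances has variance (Var A + Var B)/4. A quick sketch with hypothetical station variances (the 4 and 25 °C² values are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
var_a, var_b = 4.0, 25.0  # hypothetical station variances, deg C^2

a = rng.normal(0, np.sqrt(var_a), 100_000)
b = rng.normal(0, np.sqrt(var_b), 100_000)

avg = (a + b) / 2
# For independent X, Y: Var((X+Y)/2) = (Var X + Var Y) / 4
print(round(avg.var(), 2), (var_a + var_b) / 4)  # both close to 7.25
```

Note the average inherits most of its variance from the noisier station; quoting a mean without this combined variance hides exactly the spread being asked about.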
Regarding the issue of whether any weather stations are “pristine” or not, or if they show similar trends with nearby locations-
If by “pristine” you mean completely unchanged or unaltered from its natural state, and not influenced by potential man-made changes in nearby areas, then arguably there are none. The greatest influences to a station would arguably be the very first modifications to an environment, no matter how small, particularly in cold, dry climates at night during winter. This would be even more noticeable at higher elevations. Good luck trying to get two thermometers to agree in two shady areas in my yard where I live…
What would matter more is if you have one temperature station in the same location, and that location remains completely unchanged for decades or longer if theoretically possible, as well as no change whatsoever to any surrounding locations that could influence it. You could start at the South Pole or lower Manhattan, so long as nothing ever changed. Again, good luck with that. If I sneezed at the South Pole, I’m sure it would show up somehow.
Forgive the word mess in that first sentence…
I looked at the past couple of decades of USCRN temperatures across the country. Covering millions of square miles, mostly in the mid-latitudes, in a steadily growing country now with 336 million people, the trend is remarkably “meh”:
A word in your ear: Water
Urban Heat Island is a nice way of saying – classically an ‘island’ is a dry place surrounded by water
That is why urban islands record high temperatures = because like conventional islands, they are dry.
As water has immense heat capacity numbers, dry places will raise their temps faster than wet places when any given amount of (heat) energy is added to them.
They will also cool down faster.
(This goes to the very heart of greenhouse theory – where we’re informed that the offending gases ‘trap heat’, but from that moment onwards in the conversation it all revolves around Temperature. The haha: Bait & Switch.)
Other properties of water play a very significant part, esp that it changes phase and when it does, becomes extremely buoyant relative the other atmospheric gases.
iow It triggers: Weather
Thus, if you have myriad temperature stations, the numbers they produce will be intricately linked, for all sorts of reasons, to the amount of water in their immediate surroundings – dryness is The Main Cause of UHIE after all.
Take that reasoning out into the countryside and look at your temperature recording stations – is the amount of water around them changing and how has it changed in the past?
The very real problem is that a lot of the water is not immediately obvious.
It is stored away inside living plants for a start and all different types/varieties of plants have different capacities.
Very significantly, immense amounts of water can be stored in/under the ground, attached to cellulose, lignin and bacteria = all things that once were living plants.
There is a way around this problem though. Water always gives itself away and it does so exactly via the buoyancy it imparts to the air when it evaporates.
As its molecular mass is half that of the other gases, its effect is twofold
Both those things work to reduce barometric pressure and there is your proxy for the amount of Energy at your temperature measuring station.
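The buoyancy point can be made quantitative with ideal-gas mixing: substituting water vapour (molar mass ≈ 18 g/mol) for dry air (≈ 29 g/mol) at fixed temperature and pressure lowers the density of the parcel. A minimal sketch (the 2% mole fraction is an illustrative humid-surface-air value):

```python
# Density of moist vs dry air at the same T and P, via ideal-gas mixing.
M_DRY, M_H2O = 28.97, 18.02  # molar masses, g/mol

def moist_density_ratio(q):
    """Ratio of moist-air to dry-air density for water-vapour mole fraction q."""
    return (q * M_H2O + (1 - q) * M_DRY) / M_DRY

# 2% water vapour by moles lowers the parcel density by about 0.76%
print(round(1 - moist_density_ratio(0.02), 4))  # -> 0.0076
```

A fraction of a percent sounds small, but it is comparable to the density change from roughly a 2 °C warming, which is why evaporation is such an effective driver of convection.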
The important number here is the air pressure that would exist in a static atmosphere = what Earth’s average pressure would be if there was no weather.
That number is 1013 millibars
e.g.1 If temp is rising and pressure is below 1013, you are in a wet place with a rising-air regime and losing Energy. A lot of that energy will return via rain.
e.g.2 If temp is rising and pressure is above 1013, you are in a very dry place (a city or in eastern half of the UK for instance) under a descending-air regime
You’ll be gaining Temp rapidly and losing Energy even more rapidly because of that high temp. There will be no rain to cool you
e.g.3 Temp is flat-lined or falling and pressure is below 1013 (western side of the UK or a properly functioning rainforest)
You are in a wet place and counter-intuitively, actually gaining Energy. It will be being stored in the water that is falling as rain and you know it’s being stored because of the low pressure
e.g.4 Temp is flat-lined or falling and pressure is above 1013. You are in a very dry place and will not like being there.
Classically = any desert. That includes places like the Sahara as much as it does Antarctica
Attached is a clue why rural stations may not be ‘pristine’
And they will all be different -the significant clue there is the colour of the soil.
Two opposites here but with countless in-betweens
I forgot – the green circled bit
Important for 2 reasons:
(Maybe it’s Chris Packham driving the tractor and his halo slipped off the back – easily done with one the size of that)
Oh, and why is that a Rural Heat Ocean (= desert) in my picture?
Can you see the bird – that one solitary crow/rook?
That tells you all you need to know about Soil Erosion, Cyclonic Weather Regimes, Climate Change, Rural Heat Oceans & Urban Heat islands, why you are overweight, why you gets cramps in your legs at night (also heart attacks) and are now almost certain to contract full-blown Alzheimer’s
Just that one single crow
If that field was properly functioning there would be 1,000s of birds following that tractor.
That field is as dead as we all will be unless we grow a pair and admit what we’re actually doing
Thank you, Peta.
I have done cluster analysis and principal component analysis of these temperature/time trends using both rainfall and annual rainfall change as two of many variables. This will be in a coming article. Not much advancement in understanding; I suspect too much noise affects most types of analysis. Not shown here are various types of suspected noise, like temperatures affected by the day of the week of observation. Noise, noise, noise … No, not planning a musical. Geoff S
Yet what you are looking at is very much like a symphony with many parts all combined .
Be careful not to classify natural variation as noise. It’s part of the signal. It is sometimes hard to separate noise from signal.
One of those data series has obviously bad data at the end.
There are also a couple of step functions in the late 1960s.
Those are the sites jumping up and down saying “look at moi”.
Also, as somebody already suggested, consider splitting the long data series into 2 sections, before the majority of sites come in, and after. That gives a better feel for coherence during the common period.
And while I’m making more work for you, grouping by geographic area – coastal trends should be moderated by the humidity.
“coastal trends should be moderated by the humidity.”
Or warmed by the oceans, warmed by solar energy.
Fun, isn’t it. ! 😉
The range should be attenuated, which is what I should have written instead of “moderated”.
Dear Old Cocky,
Happy new year!
I thought you meant “moderated”.
All the best,
Bill
Happy new year to you as well, Bill. I hope you caught a good feed while you were out in the tinny.
Well, attenuating the range makes it more moderate. So many words have changed their meanings over the decades 🙁
I am guessing, of course, but didn’t our ancestors move north to the Orkney and Shetland islands during the Holocene because it was warm and pleasant enough for them to survive there? Did those folk think it had always been that temperature, or were they savvy enough to believe only in the present, since the past is gone and the future is completely unknown? I guess those people would have educated certain contemporary meteorologists in the ways of nature and the weather, to their immediate benefit too. The pity is that we can’t send a whole heap of contemporary weather-watching liars back to the Holocene, to the benefit of the planet and all of us.
I should preface this comment with the statement that personally I have no pony in the various races here with respect to whether there is or is not some trend of some value in the various temperature indices and whether or not there is some clearly dominant anthropogenic factors involved, and frankly I don’t visit these posts as much as I used to, but I had a little late night downtime and thought to comment.
That said, it seems to me that it might be useful to both sharpen the focus on some aspects of this topic and broaden the focus on others. The first thing that struck me about the post was the statement of the hypothesis, “Measured historical factors can be used to distinguish between urban and pristine stations.” It may be simply that I am not a frequent enough follower of this area, but the hypothesis struck me as needing a good bit of specification for me to consider it quantitatively. It would be useful I think to be more specific about exactly what the set of “historical factors” includes and what it does not include. I gather the notion is somehow associated with “urban” or “not urban” where it seems urban-ness has something to do with populations. But as Peta, Nick and others point out, lots of other things change as history rolls on and it would be useful to me to be more clear in the hypothesis about what they are and are not. The second half of the hypothesis says “distinguish between urban and pristine” which somewhat circles back into the specificity of the historical factors. When I see the word pristine, my thoughts go to constancy of the instrumental or data logging procedures, but I think all of these measurement places have undergone some change in hardware and personnel over the years. Also, the notion of “distinguish” begs some questions, in particular what are metrics to be used to compare and contrast the things being distinguished? It seems like the tacit assumption is that distinguish means somehow associated with quantitatively different absolute values of certain regression values, but lots of other things could be different between the sets. In summary, I think it would be helpful to sharpen the focus on the hypothesis statement.
The second thing I think would be useful would be to include more of the uncertainties associated with the numbers involved, such as confidence intervals for the trend values and variances of the averages. I think there might be a lot of features of the data sets from the various stations that could reflect experimental issues, beyond just what the overall trend may be. Assuming that the subjectively extracted “pristine” group might have something in common that might be different from some set of the others, which seem to be called “urban”, I casually looked at the data in the linked spreadsheets. At some point in experiments I am involved with, I usually do a routine examination of the distribution of various aspects of the data, so I looked at Q-Q plots and various tests of the “pristine” and “urban” data (although I may have misconstrued exactly which set corresponded to the same apples/oranges cases), and there do appear to be noticeable differences in the distributions of the two kinds of sets, in terms of how much the normal assumption is rejected, whether confidence intervals overlap, whether the distributions are skewed out or in, presence and amount of kurtosis, whether there is evidence of repeated or instrumentally constrained measurements, etc. And it struck me that someone who knew what he/she was doing with the data could in fact examine subsets of the reported data somewhat systematically for a range of features. It may even be amenable to those AI/machine learning approaches that are so popular now. But the general observation is that I think it would be useful to broaden the focus on differences in subsets of these data to include more than just comparisons of a single statistic, i.e. the mean of something. There may turn out to be quite a few distinguishable subsets of these data that could suggest thoughts about what might be going on in either experimental methods or the environment.
fah,
Good comments, thank you.
Please read the next couple of parts for deeper discussion.
Some distributions are coming.
Some measure of uncertainty is in the tables for RMS error, but the real uncertainty takes several articles.
You could start at this Part 3 from a year ago and work back
https://wattsupwiththat.com/2022/10/14/uncertainty-of-measurement-of-routine-temperatures-part-iii/
What do you think is the main generator of the wriggles that I mention?
Geoff S
we know the thermometer “data” is a joke and not fit for purpose … can never be … any exercise to try to determine how bad it is is just a mathematical exercise … but basically just an intellectual circle jerk … all discussions about “climate change” that accept all of the flawed assumptions are just GIGO exercises … there is no good data, there are no “well mixed gasses”, and as such any calculations or modeling is just an intellectual exercise that reveals no truths …
You say:”… UHI thing was a myth…”. Are you saying there is no such thing as thermal mass regarding bricks, asphalt, concrete, etc that make up cities? Are you saying cities are not warmer than rural areas?
In my opinion, pristine weather stations do exist. I think the stations in your sample have a lot of different microclimates, so as a group they are not really comparable.
To compare pristine or almost pristine weather stations to UHI-influenced stations, you’ll probably have to find pairs of stations with comparable natural microclimates, e.g. distance to the sea, prevailing wind patterns, whether the wind blows from the sea or from the desert, whether the station is sited on a southern or a northern slope of a hill, and so on.
I’m looking forward to the next article in the series.
lb,
If you have two close stations, one you have named urban and another rural (or pristine), how do you know that they are indeed that way?
What is your test to allow a station to be defined “pristine”?
I have started with a hypothesis that pristine station trends should resemble each other, because their main influence is natural variation, which is thought to act over large areas.
What is your working hypothesis?
Geoff S
Hi Geoff,
My test for “pristine” stations has two parts:
My working hypothesis, or rather suspicion, is that you have stations from different climate zones in your sample. I’d expect different trends for a station sited in a desert, continental climate compared to one sited at a place where it rains a lot.
lb,
Another version of this article had a location map of stations on a Köppen-class coloured background.
I have not found any variable, such as temperature, whose trend is affected by Köppen class.
Why would you expect the trend to change with class?
When you see a graph with two response lines having different trends, it is easy to envisage them over the time scale in which they are measured. But if the two lines are diverging over time, how long can this continue until one of them reaches impossible conditions?
Geoff S
Hi Geoff,
I’d expect the trend to change with class, because when the “global average temperature” changes, you’ll see places with more rain, more clouds and others with less. This could at least change the temperature trend.
Two diverging trend lines may continue to do so until one of them reaches a new stable plateau.
I always imagine chaotic attractors for changes in weather or climate patterns.
Cheers
Temperature is implicitly used as a pretty poor proxy for enthalpy, so the prevailing humidity and air pressure differences in different climate zones should lead to different temperature responses.
Except that IMHO:
1) The adjacent ground foliage appears to be too high relative to the green station enclosure height.
2) The tree to the left appears to be too close to the station . . . close enough to affect temperature sensing when the station is downwind of it.
3) At times when the station is downwind of Ayers Rock (Uluru) in the background, i.e. in the disturbed air zone coming off it, that could introduce erroneous temperature readings despite its apparent distance, which could be much closer than it appears depending on the focal length of the lens used to take the photo.
Ummmm . . . for my Item 3, that’s assuming Ayers Rock hasn’t been “photoshopped” onto the horizon. 🙂
When the measurement uncertainty, even from pristine stations, is wider than the differences you are attempting to differentiate, how do you know what the actual trend is?