Those of you that have been with WUWT for a few years know that I often like to do hands-on experiments to illustrate and counter some of the most ridiculous climate change claims made on both sides of the aisle. On the alarmist side, you may remember this one:
Al Gore and Bill Nye FAIL at doing a simple CO2 experiment
Replicating Al Gore’s Climate 101 video experiment (from the 24 hour Gore-a-thon) shows that his “high school physics” could never work as advertised
Unfortunately, YouTube has switched off the video, but I’m going to try getting it posted elsewhere such as on Rumble. The graphs of temperature measurements and other images are still there.
Despite the fact that I proved beyond a shadow of a doubt that the experiment was not only fatally flawed, but actually FAKED, they are still using it as propaganda today on Al Gore’s web page.
They never took it down. Schmucks.
So along those lines, as Willis often does, I’ve been thinking about the recent paper published in Atmosphere by some of our brothers-in-arms in climate skepticism (Willie Soon, the Connollys, et al.):
Abstract
The widely used Global Historical Climatology Network (GHCN) monthly temperature dataset is available in two formats—non-homogenized and homogenized. Since 2011, this homogenized dataset has been updated almost daily by applying the “Pairwise Homogenization Algorithm” (PHA) to the non-homogenized datasets. Previous studies found that the PHA can perform well at correcting synthetic time series when certain artificial biases are introduced. However, its performance with real world data has been less well studied. Therefore, the homogenized GHCN datasets (Version 3 and 4) were downloaded almost daily over a 10-year period (2011–2021) yielding 3689 different updates to the datasets. The different breakpoints identified were analyzed for a set of stations from 24 European countries for which station history metadata were available. A remarkable inconsistency in the identified breakpoints (and hence adjustments applied) was revealed. Of the adjustments applied for GHCN Version 4, 64% (61% for Version 3) were identified on less than 25% of runs, while only 16% of the adjustments (21% for Version 3) were identified consistently for more than 75% of the runs. The consistency of PHA adjustments improved when the breakpoints corresponded to documented station history metadata events. However, only 19% of the breakpoints (18% for Version 3) were associated with a documented event within 1 year, and 67% (69% for Version 3) were not associated with any documented event. Therefore, while the PHA remains a useful tool in the community’s homogenization toolbox, many of the PHA adjustments applied to the homogenized GHCN dataset may have been spurious. Using station metadata to assess the reliability of PHA adjustments might potentially help to identify some of these spurious adjustments.
In a nutshell, they conclude that the homogenization process introduces artificial biases into the long-term temperature record. This is something I surmised over 10 years ago with the USHCN, and published at AGU 2015 with this graph, showing how the final homogenized data product is so much warmer than stations that have not been encroached upon by urbanization and artificial surfaces such as asphalt, concrete, and buildings. By my analysis, almost 90% of the entire USHCN network is out of compliance with siting standards, and thus suffers from spurious effects of nearby heat sources and sinks.

Here is a relevant paragraph in the new paper that speaks to the graph I published in 2015 at AGU:
As a result, the more breakpoints are adjusted for each record, the more the trends of that record will tend to converge towards the trends of its neighbors. Initially, this might appear desirable since the trends of the homogenized records will be more homogeneous (arguably one of the main goals of “homogenization”), and therefore some have objected to this criticism [41]. However, if multiple neighbors are systemically affected by similar long-term non-climatic biases, then the homogenized trends will tend to converge towards the averages of the station network (including systemic biases), rather than towards the true climatic trends of the region.
The key phrase is "multiple neighbors," i.e., nearby stations.
Back on August 1, 2009, I created an analogy for this issue with homogenization using bowls of dirty water. If the cleanest water (a good station, properly sited) is homogenized with nearby stations that have varying degrees of turbidity due to dirt in the water, with 5 being the worst, homogenization effectively mixes the clean and dirty water, and you end up with a data point for the station labeled "?" that has some level of turbidity, but is not clear. Basically a blend of clean and dirty data, resulting in muddy water, or muddled data.
In homogenization the data is weighted against the nearby neighbors within a radius. And so a station that might start out as a "1" data-wise might end up getting polluted with the data of nearby stations and end up at a new value, say weighted at "2.5".

In the map below, applying a homogenization smoothing that weights nearby stations by distance, what would you imagine the values (of turbidity) of the stations with question marks would be? And how close would those two values be, for the east coast station in question and the west coast station in question? Each would be closer to a smoothed central average value based on the neighboring stations.
Of course, this isn’t the actual method, just a visual analogy. But it is essentially what this new paper says is happening to the temperature data.
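To make the analogy a bit more concrete, here is a minimal sketch of the blending idea, using made-up station positions and turbidity scores, with simple inverse-distance weighting standing in for the real (far more elaborate) pairwise homogenization algorithm:

```python
# Toy illustration only: inverse-distance weighting of made-up "turbidity"
# values at neighboring stations. This is NOT the PHA, just the blending idea.
import math

# Hypothetical stations: (x, y) map position and turbidity score (1 = clean, 5 = dirty)
neighbors = [
    ((1.0, 2.0), 1),   # well-sited rural station
    ((2.0, 1.0), 3),
    ((3.0, 3.0), 4),
    ((4.0, 2.0), 5),   # badly-sited urban station
]

def idw_estimate(target, stations, power=2):
    """Inverse-distance-weighted average of neighbor values at 'target'."""
    num, den = 0.0, 0.0
    for (x, y), value in stations:
        d = math.hypot(x - target[0], y - target[1])
        w = 1.0 / (d ** power)
        num += w * value
        den += w
    return num / den

# The "?" station inherits a blend of its neighbors, clean and dirty alike.
print(round(idw_estimate((2.5, 2.0), neighbors), 2))   # lands somewhere between 1 and 5
```

However you place the "?" station, its estimate lands somewhere between the cleanest and dirtiest neighbors, never at the clean value.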

And it isn’t just me and this new paper saying this; back in 2012 I reported on another paper that said the same thing.
New paper blames about half of global warming on weather station data homogenization
Authors Steirou and Koutsoyiannis, after taking homogenization errors into account, find that global warming over the past century was only about one-half [0.42°C] of that claimed by the IPCC [0.7-0.8°C].
Here’s the part I really like: for 67% of the weather stations examined, questionable adjustments were made to raw data that resulted in:
“increased positive trends, decreased negative trends, or changed negative trends to positive,” whereas “the expected proportions would be 1/2 (50%).”
And…
“homogenization practices used until today are mainly statistical, not well justified by experiments, and are rarely supported by metadata. It can be argued that they often lead to false results: natural features of hydroclimatic times series are regarded as errors and are adjusted.”
So, from my viewpoint, it is pretty clear that homogenization is adding a spurious warming where there actually isn’t a true climate signal. Instead, it is picking up the urbanization effect, which warms the average temperature, and adding it to the climate signal.
Steve McIntyre concurs in a post, writing:
Finally, when reference information from nearby stations was used, artifacts at neighbor stations tend to cause adjustment errors: the “bad neighbor” problem. In this case, after adjustment, climate signals became more similar at nearby stations even when the average bias over the whole network was not reduced.
So, I want to design an experiment to simulate and illustrate the “bad neighbor” problem with weather stations and create a video for it.
I’m thinking of the following:
- Use the turbidity analogy in some way, perhaps using red and blue food coloring rather than a suspended particulate, which will settle out. This is purely for visualization.
- Use actual temperature, by creating temperature-controlled vials of water at varying temperatures.
- Mix the contents of the vials, and measure the resultant turbidity/color change and the resultant temperature of the mix.
The trick is how to create individual temperature controlled vials of water and maintain that temperature. Some lab equipment, some tubing and some pumps will be needed.
Again purely for visual effect, I may create a map of the USA or the world, place the vials within it, and use that to visualize the results and measure the results.
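For the vial-mixing step, the expected outcome is easy to sketch in advance: if the vials hold equal volumes of the same liquid and heat losses are ignored, the mix should land at the straight average of both temperature and color. A back-of-the-envelope sketch with placeholder numbers:

```python
# Back-of-the-envelope sketch of what the mixed vial should read, assuming
# equal volumes of the same liquid, no heat loss, and ideal mixing.
# Numbers are placeholders, not measurements.

vials = [
    # (temperature in C, dye concentration 0..1)
    (20.0, 0.0),   # "clean", well-sited station
    (22.5, 0.4),
    (24.0, 0.8),   # "dirty", urban station
]

n = len(vials)
mixed_temp = sum(t for t, _ in vials) / n
mixed_dye = sum(c for _, c in vials) / n

print(f"mixed vial: {mixed_temp:.1f} C, dye fraction {mixed_dye:.2f}")
# The mix lands at the average: warmer and dirtier than the clean vial,
# which is the point of the homogenization analogy.
```

That convergence toward the neighborhood average is exactly what the demonstration needs to show.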
I welcome a discussion of ideas on how to do this accurately and convincingly.
I do not think that would be a good analogy for “correction” errors. This is a case of “non-blind” reporting, where the observer has expectations about the outcome that influence the reported outcome. It is a matter of bad research design not to take bias into account, whether conscious or not.
Anthony Watts’ earlier work, actually reporting on the siting of stations and how they are affected by UHI, is more likely to show results.
Tom. The author of the above analogies IS Anthony Watts. 🤨
😀
You make a good general point, but it would be helpful if you told us what “that [analogy]” is (the one you don’t like) so we can really understand and discuss what you are saying…
don’t worry, any warming from homogenization is negated by people getting up later in the old days
Use temperatures only from dairy farms and then homogenize to infer temperatures of places in between, or outlying.
😄
I only know that milk often is homogenized, which makes sense…
Similar suggestion: Remove all urban and UHI-prone sites (eg, airports) and apply homogenisation using only the remaining sites. Compare results for those stations with their results when homogenisation uses all sites. Then do the same for only the sites that were removed. You could get a bit more sophisticated and divide sites into more than two groups, but I think just two groups could give interesting results.
It’s the dairy-air that’ll get ya…
Review how the Argo buoy sea surface temperatures were homogenized with known faulty SST data measured from ships.
If you do not remove UHI from your data, you are incorporating it into your data. I’ve been saying this for many years. A child should be able to understand that this raises temperatures across the board. What’s wrong with these people?
They can’t handle the truth.
They have no interest in the truth.
Sod the truth!
But many of them do have a vested interest in the non-truth
I am having a bad neighbor problem right now. Seriously, I can’t lend much credence to how climate scientists avoid scientific standards and rigor in much of the published research, especially in using averages of selected groups of climate models. If we know that none of them are truly representative, what is the good/value in showing graphs with wide ranges of predictions? If we know the models are deficient, then showing a spread of models, all of which are deficient, is not only deficient but intellectually deficient.
You should take lessons in consensus science then you would understand.
When I challenged Australia’s CSIRO on the 38C SST they were predicting in the NINO4 region by 2300 they replied that their latest model is only predicting to 2100 and its prediction is middle of the road of all the climate models.
Climate modellers are consensus scientists. They no longer care about the validity of their prediction, as long as it is in line with others. The consensus result has to be the average. Trying to determine which model produces the best result is not woke.
Data fabrication is even worse … https://www.youtube.com/watch?v=hs-K_tadveI
I don’t have anything to suggest about the visualization exercise. Not sure that will be persuasive.
But I will offer these plots for the USHCN monthly Tavg data for the month of December from 1895 to 2021. For the list of 1,218 stations, updated data files for raw, tob (time-of-observation adjusted) and FLs (final adjusted data after the pairwise homogenization adjustment) are available from NOAA here. https://www.ncei.noaa.gov/pub/data/ushcn/v2.5/ The latest compressed files have all the data for all the stations for all periods.
These plots at the links below give the mean of all reported data by year from the list of 1,218 USHCN stations (i.e., the contiguous U.S.). Missing values are ignored when calculating the mean. Then the differences by year for tob-less-raw, FLs-less-tob and FLs-less-raw are given. The number of actual reported values in the raw data and the percentage of values flagged “E” (Estimated) in the FLs data are also shown.
Key points:
- Whatever trend for the contiguous US is being reported looks like it is driven by the adjustment processing, not by the raw data.
- I realize that a straight mean of raw values does not represent a true climatic trend, as there is no attempt to area-weight or otherwise establish representative coverage.
- The justification or technical validity of the time-of-observation and pairwise homogeneity adjustments is not what is being questioned here – these plots just show the bulk results over time, for whatever questions that may raise.
- The recent years show a steep decline in the number of raw values reported, and the percentage of Estimated values flagged in the final (FLs) data has risen rapidly.
Finally, this is just for the month of December as an illustration.
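For anyone who wants to reproduce this kind of bulk comparison, here is a rough outline. It assumes the raw, tob and FLs files have already been parsed into DataFrames (the column names are my own, and the fixed-width parsing is omitted, so check the readme at the NOAA link above for the exact layout):

```python
# Sketch of the bulk comparison described above, assuming the raw, tob and
# FLs USHCN files for one month have already been parsed into DataFrames
# with columns ['station', 'year', 'value'] and missing values set to NaN.
import pandas as pd

def yearly_mean(df: pd.DataFrame) -> pd.Series:
    # Straight mean over all reporting stations per year; NaNs are skipped.
    return df.groupby("year")["value"].mean()

def compare(raw: pd.DataFrame, tob: pd.DataFrame, fls: pd.DataFrame) -> pd.DataFrame:
    m_raw, m_tob, m_fls = yearly_mean(raw), yearly_mean(tob), yearly_mean(fls)
    return pd.DataFrame({
        "raw": m_raw,
        "tob_minus_raw": m_tob - m_raw,
        "fls_minus_tob": m_fls - m_tob,
        "fls_minus_raw": m_fls - m_raw,
        "n_raw_reports": raw.groupby("year")["value"].count(),
    })

# compare(raw_df, tob_df, fls_df).plot(subplots=True)  # roughly reproduces the plots
```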
You might want to look into thermochromic dyes. Think old “mood rings”.
Bill Nye’s crappy explanation notwithstanding, is it not true that a jar with CO2 in it will warm up faster than the jar that has no CO2 in it when each is exposed to an external source of IR?
Glass jars? Most IR wavelengths are blocked by glass, such as in glass jars – the glass in the jars should heat up equally and warm the contents equally by conduction. Unless you change the material of the container to something that doesn’t block IR, then your methodology is completely flawed and the experiment is worthless.
This is my whole objection to the Eustice Crutch experiment which AGW is based on.
Can you also dismiss this? The Greenhouse Gas Demo – YouTube
Polyethylene terephthalate (soda bottle plastic) also blocks and absorbs IR wavelengths similarly to glass – I would have used a form of polycarbonate instead.
Frankly all this ‘experiment’ shows is the difference between the relative density of CO2 and air, nothing whatsoever to do with the IR absorption of CO2.
By the way – which thermometer was in which bottle? Neither were labelled or marked and, again, we’re left wondering how this guy managed to get the results that he did?
If you want a better ‘experiment’ – look around the internet; there are hundreds of crappy ones like this, but occasionally you will find a good one you can use.
I have always thought that the experiment purporting to show an “effect” of CO2 was invalid.
Part of that is because adding CO2 to one container and not the other means you are adding density to one vs. the other, which means it should heat up more by that fact alone. In order to be valid, the experiment should add compressed air (with the same CO2 as ambient) into the “non-CO2” container to ensure the content density of each container is equal, thereby removing that as a confounding variable.
The material of the container and whether infrared radiation can penetrate it is, of course, another massive issue with the experiment.
Seriously?
I believe that the major point of Anthony Watts’ debunk of this video is that the looked-for effect doesn’t happen, at least not reliably enough for the original video to have kept to an honest demonstration. If the effect doesn’t work reliably for every honest experimenter, then it’s invalid, not replicable.
Having said that, note that something even more to the point is that the atmosphere isn’t a sealed jar, so there’s no lid there, no blockage of convection, no block on turbulent mixing at the top, etc. So unless they can stick a planetary gravity field in a jar, the whole greenhouse effect concept doesn’t have a desktop demo like this anyway.
I think the point of the experiment was to show that CO2 is warmed more by IR radiation than most of the other gases that make up the atmosphere (mostly O2 and N2). This seems to me to be a basic scientific fact. A properly done experiment would show this. I’m not saying Nye’s experiment, if it can be called that, was properly done.
When a CO2 molecule absorbs energy it becomes more massive, heavier. E=mc^2. In absorbing it becomes heavier and slows down somewhat (something to do with mass and momentum). A slower molecule is ‘cooler’. When it gets rid of the energy it speeds up again. This little dance continues and ends with no resultant heating. That was a simplified version of things.
Exactly. At best, even if the experiment was valid in every way, all it would show is a purely hypothetical effect of atmospheric CO2 on temperature, based on its implicit assumption of all other things held equal.
Here in reality, “all other things” are most certainly NOT “held equal,” the “feedbacks” are negative, offsetting feedbacks, and the actual, as opposed to hypothetical, effect of atmospheric CO2 on the Earth’s temperature cannot be distinguished from ZERO.
Which is what observations support. No impact whatsoever.
Nye’s entire experiment was upside down, or perhaps more correctly, inside out. In the Earth’s GHE the source of LW that gets absorbed by CO2 and re-radiated is the earth. So LW that would have escaped the earth system instead gets sent back to earth. In Nye’s experiment, the LW source was OUTSIDE the system instead of in the middle of it! So when it hit the CO2 layer, it would have been absorbed and re-radiated with some of it being ejected from the system. In other words, the most likely outcome was that the jar with CO2 would warm slower.
If I recall, that’s exactly what Anthony’s experiment showed, and it is a testament to Nye’s complete lack of knowledge about the GHE. He designed an experiment that, if successful, could only show the opposite of what he wanted to show, and so when confronted with results that didn’t show what he expected, he faked them, because he just didn’t know enough to design an experiment that actually replicated the GHE.
Aside from the fact that glass is opaque to IR, CO2 is heavier than air (molecular weight of CO2 = 44, vs. about 29 for air). Also, CO2 has a much higher specific heat than air. Thus, if the same amount of energy is added to both containers, the air would warm up more than the CO2.
Ah wait a second – the density of CO2 means that it’s slower to heat up and has a higher specific heat value, but it also retains that heat better than air? So if enough heat was added by conduction and enough time was allowed for (given the endothermic properties of the alka seltzer reaction and the density of CO2) the CO2 bottle might get slightly warmer than the air, mightn’t it? Fake results as it’s due to conduction and the various properties of the bottles/jars, the water, the air, the alka seltzer and, not least, the resultant CO2 but not IR absorption.
A simple black body experiment with different insulators between the black body and the sensor would give better results.
To everyone who dismisses Nye’s poor experiment (where he probably did not adhere to anything resembling scientific procedures), I think that the CO2 should warm up faster when exposed to IR, because quite simply it absorbs IR wavelengths readily and will therefore be warmed more than non IR absorbing gases. That is the basis for the greenhouse effect.
That is not the basis for the GHE. The basis for the GHE is that CO2 absorbs LW and re-radiates it or else gives up the energy to other molecules in the atmosphere via collision. This LW is generated from INSIDE the system when the earth absorbs SW and converts it to longwave. Nye’s experiment reversed this, putting the longwave source OUTSIDE the system.
The GHE has nothing to do with how much CO2 warms from LW and everything to do with it re-radiating or otherwise giving up the energy it just absorbed. See my comment upthread.
In addition, sunlight contains very little energy with wavelengths longer than 4um.
I’m sure that there is a lot of laboratory data to support general Greenhouse effect principles, like absorption of specific wavelengths by certain gases, whether CO2, CH4, H2O, or whatever. This in turn goes into the most basic integrative computer models, like MODTRAN, to start to give some idea as to how much a column of atmosphere ought to be warming. So GHE theory is based on data *plus* unavoidable assumptions about how to integrate things mathematically. This is quite a different matter from trying to confirm a GHE effect directly, in a closed jar, without a mathematical model, which is what we’re referring to here!
It is this latter thing, *directly* confirming the ‘closed jar’ Greenhouse effect, that we’re questioning here, both as something unverified *and* as likely to be irrelevant too, if it’s the open, ‘no top’ situation that we’re really interested in.
It’s not even that “open top” situation. It’s the effect as a part of the Earth’s atmosphere, subject to all of the processes and feedbacks inherent in that system. They fixate on the hypothetical effect, when what they should be focused on is the real world effect when all processes and feedbacks are applied, which can only be determined by observation of the real world. The hypothetical effect of CO2 on temperature is based on the inherent assumption all other things held equal, a situation that has never existed, does not exist today, and never will exist.
At best all the blather about the “enhanced greenhouse effect,” i.e., the notion that adding CO2 to the atmosphere will raise the Earth’s temperature, is an academic exercise with no real world application. It should certainly NOT be used as a basis for ‘policy.’
The real world says quite plainly that atmospheric CO2 levels are not the “driver” of the Earth’s temperature.
The real world says CO2’s impact is not able to be differentiated from zero, because that is what observations support.
The problem is that when CO2 absorbs energy it moves to an excited state. It will then re-radiate that energy soon thereafter, or it will transfer the energy to another molecule (N2/O2). When this happens it cools back to its original temperature. If there were N2/O2 in the bottle, you might see some warming.
The planet is very close to being in a radiation balance. (Even if you accept the alarmists’ numbers it’s easily within 1%.)
Let’s assume, for ease of arithmetic, that radiation from the surface is the only thing we have to worry about.
Let’s also assume that the planet is a disk with one side permanently facing the sun.
Let’s assume further that there are two possible conditions:
1 – One version of the disk does not distribute heat over its surface. The side that faces away from the sun is at 0 Kelvin.
2 – The other version evenly distributes heat. We will take its temperature as 279K.
Because radiated heat is proportional to T^4, the sunward side of the first disk will be at 331K. The average temperature will be 165K.
So, just changing the heat distribution changes the average temperature from 165K to 279K.
It seems that, the more evenly the heat is distributed, the higher will be the average temperature.
So, if the process of homogenization artificially evens out the average temperature, one of two things will happen:
1 – The radiation balance will be upset.
2 – The average temperature will be too high.
As far as I can tell, temperature distribution is a big deal, and it is mostly ignored. By artificially evening out the temperature distribution, homogenization will give a wrong (too high) average temperature.
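The arithmetic above can be checked in a couple of lines, assuming unit emissivity, that the evenly heated disk radiates from both faces, and that the non-distributing disk radiates only from its sunlit face:

```python
# Quick check of the disk arithmetic above using only T^4 scaling
# (Stefan-Boltzmann, unit emissivity, geometry otherwise ignored).
T_even = 279.0                      # evenly-distributed disk, both faces radiating
T_hot = (2 * T_even**4) ** 0.25     # one-sided disk radiating the same total power
print(round(T_hot, 1))              # ~331.8 K, matching the 331 K quoted above
print(round(T_hot / 2, 1))          # ~165.9 K average of the hot side and the 0 K side
# Same outgoing radiation, but the average temperature differs by over 100 K,
# which is the point being made about heat distribution.
```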
Temperature is an intensive quantity; unlike energy, it isn’t conserved. You can average temperatures, but the average has physical meaning only when the materials being averaged have the same specific heat capacity.
That’s fine but the design brief was:
🙂
I don’t understand the pathological desire for there to be a ‘balance’. There has never been a balance. We have gone from full-on glaciation to interglacials. Where’s the balance?
It’s not a desire, it’s an observation.
The amount of incoming radiant energy from the sun is basically the same as the amount of energy the Earth radiates (and reflects) into the depths of outer space.
For the purposes of my thought experiment above we can treat the balance as perfect.
Some suggestions from a guy with 13 issued patents (admittedly, I designed some unique experiments but had others run them).
Had to go to main computer archive, as could not remember post title:
How Good is GISS, guest post 08/03/2015. My, time flies. Analytic memory was correct. 4 urban, ten suburban/rural, all Surface Stations project CRN1. The guest post also hyperlinked to the then Surface Stations data I extracted and used for the qualitative (Mark 1 eyeball) trend analysis.
My comment just *poof* disappeared😲
So, splitting it into two:
PART 1

1. I hope a synthetic chemist volunteers a “recipe.” E.g., add a bit of this… another bit of that… and instead of Kool-whip, you have toothpaste.
2. Take a baritone voice and homogenize it with a soprano to get a yucky blend.
3. Mix types of pop (remember doing that at Royal Fork or some other all-you-can-eat buffet as a kid, lol) to get a “weird”-tasting pop.
- Your idea of red-blue food coloring is good, but make it yellow + blue slime that is too green (truth is blue). Here’s a recipe for “Slime”: https://www.letsroam.com/explorer/easy-slime-recipe/
PART II
4. Speed up (= hotter temp.) music so it is hilarious but WRONG.
Normal speed “You’re Welcome” (published on YouTube): https://www.youtube.com/watch?v=79DijItQXMM&ab_channel=DisneyMusicVEVO
Sped-up “You’re Welcome” (1.5x, 1.75x, 2x)
Who is this intended for? The homogenization problem seems very similar to how image convolutions work (e.g., image smoothing), which is typically a 3×3 array operating pixel by pixel. One can misuse a convolution and end up with a result that is nothing like the original.
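To illustrate the convolution analogy with a toy grid (made-up values and a simple 3×3 box blur, not anything a real homogenization pipeline uses):

```python
# Illustration of the convolution analogy: a 3x3 box-blur kernel applied to a
# toy 2-D "station" grid. Values and grid are made up.
import numpy as np

grid = np.array([
    [1, 1, 1, 5, 5],
    [1, 1, 1, 5, 5],
    [1, 1, 1, 5, 5],
], dtype=float)          # 1 = clean/rural, 5 = dirty/urban

kernel = np.ones((3, 3)) / 9.0      # simple smoothing kernel

def convolve2d_same(img, k):
    """Naive 'same'-size 2-D smoothing with edge padding."""
    pad = k.shape[0] // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + k.shape[0], j:j + k.shape[1]] * k)
    return out

print(convolve2d_same(grid, kernel))
# The clean cells next to the 5s get dragged upward -- smoothing blurs the
# boundary, just as homogenization can blur good stations into bad neighbors.
```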
The average 5th grader. 🙂
In other words, the average American.
In that case, I suggest using chocolate milk.
Actually not a bad idea, given my understanding of what Anthony is trying to present in a visual… milk with different concentrations of chocolate syrup, say from 0% syrup to 100% syrup.
*sigh* but then there is the temperature thing involving two dissimilar materials.
But throwing some things out there is what we are being asked to do, and chocolate milk may set someone off on a better tack, just as Janice posited ‘slime’ in different colors.
👍👍 Mike
Milk and chocolate sauce is a winner I think, because…
1) Humorous reference to homogenization which is most people’s only association with the word. (It is a subtle ridicule that may help click bait)
2) pouring in chocolate sauce to various glasses of milk is easy to associate with various amounts of UHI and allows you to call attention to the artificial warming in a visible way—talk about sources of urban heat as you pour more or less chocolate sauce and stir it up
3) It should be easy to see a contrast between pure milk and chocolate milk.
4) There’s no technical challenge of controlling or measuring temperature for people to nitpick.
5) Kids will enjoy replicating the experiment at home. Good safe and delicious fun! Get participation and reinforce learning.
I’m nitpicking as well but is it an experiment ‘to see what happens if….’ or a demonstration ‘to show what happens when….?’ The different approaches would require radically different ideas really. Reading Anthony’s post and the replies I’m getting quite mixed messages as to which approach is being taken.
I think, Mr. Page, given that homogenization easily results in inaccurate data, that it is “what happens when.”
You make good points, Mr. Davis. I think chocolate milk isn’t the best way to demonstrate the UNSAVORY outcome of homogenization of surface temperatures, however.
If more chocolate = higher temperature… then the more chocolate the better 😋
If more blandness (lack of chocolate) = higher temperature…, that’s not a powerful way to get the point across.
Try:
1) accurate data = excellent chocolate milk;
2) homogenized = chocolate milk with kale blended in 😝
So about 5 grades higher than the average alarmist?
Ignore 5 yrs of a station’s output & replace it with estimates based on the data processing techniques (e.g., homogenization) they normally use. Compare these estimates with the real data you set aside, noting the average error of the daily values (differences).
Now repeat the process for all stations 1 at a time.
What happens if we do this with random selections of stations eg. 1%, 2%, 5%, 10%, 20% ?
What is the error range as we increase the number of stations being estimated?
If the stations with lower trends were more trusted, how would this change the result?
If the stations with higher trends were more trusted, how would this change the result?
How does the current result compare to those 2 extreme biases?
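Here is a minimal sketch of that withhold-and-estimate procedure, with synthetic stations and a plain inverse-distance average standing in for the real infilling/homogenization step, so the numbers mean nothing and only the workflow matters:

```python
# Sketch of the withhold-and-estimate test described above. A plain
# inverse-distance average of neighbors stands in for whatever infilling /
# homogenization method is actually being tested; stations are synthetic.
import math, random

random.seed(0)
# station -> ((x, y) location, list of daily values for 5 "years")
stations = {
    f"S{i}": ((random.uniform(0, 10), random.uniform(0, 10)),
              [20 + random.gauss(0, 2) for _ in range(365 * 5)])
    for i in range(20)
}

def estimate(target, others, day):
    num = den = 0.0
    for xy, series in others.values():
        d = math.dist(target, xy) or 1e-6
        w = 1.0 / d**2
        num += w * series[day]
        den += w
    return num / den

def withhold_error(name):
    xy, series = stations[name]
    others = {k: v for k, v in stations.items() if k != name}
    errs = [abs(estimate(xy, others, d) - series[d]) for d in range(len(series))]
    return sum(errs) / len(errs)

# Repeat for every station, one at a time, and look at the spread of errors.
errors = {name: withhold_error(name) for name in stations}
print(sorted(errors.values())[:3], max(errors.values()))
```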
Repeating my comment from earlier regarding Onslow Airport (WA, Australia) 13 Jan 2022.
https://wattsupwiththat.com/2022/01/26/australias-broken-temperature-record-part-1/#comment-3440486
The weather stations in the Penrith & Richmond (NSW) areas have each occupied at least 3 locations. These 2 towns should have similar climates: similar low altitude, same river, same mountains to the west, differing only in the tracking of some clouds. The early locations were rural/small country town, surrounded by irrigated farms with no water restrictions. Now the Penrith weather station is north-west of a much larger suburban city (but near the Penrith lakes, the river, and open space with some buildings). The Richmond weather station is now at the RAAF base (airport), with less urban development compared to Penrith. Both now limit outdoor water use during the day, and less area has crops and uses less irrigation.
You would think the 2 locations would have a very similar climate if they had no human activity. But now the Penrith measurements are typically 0C to 2.5C warmer than Richmond. The average would probably be about 1C warmer. It depends greatly on UHI, wind direction/speed, cloud position & storm paths.
Remember, the temperature (peak summer or coldest winter day) can vary by 10C within 50km, from the Sydney coast to the western suburbs. Then it is colder over the next 50 to 100km as you go up the mountains or into the southern highlands. How do they average these for grid cells of 250km?
As far as a turbidity visual goes, there are a number of companies that sell turbidity standards for calibrating turbidity meters. Most are liquids that are inverted several times before use. They’re not cheap, but they are accurate. (The turbidity shows up as a white color.) The standards are often made with formazine. It will cloud the water but tends to take a longer time to settle out.
Since you are only looking for a visual illustration, you could buy one very high NTU standard (say, 4000 NTU) and add different amounts to the same volume of clear water.
If you could get a clear map to put over a light table, then place your different NTU containers around the location to be “homogenized”. Have an empty container for the missing site. Draw off the same volume of turbid water from each of the surrounding “sites”.
Camera angle would likely need to change to get the effect.
(If you have access to a turbidity meter, you could label each value.)
PS: You could put a magnetic stirrer under each container to eliminate the settling issues.
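If it helps, the amounts follow from the usual C1V1 = C2V2 dilution rule; a small sketch with placeholder target values and volumes:

```python
# Simple C1*V1 = C2*V2 dilution helper for making up the turbidity "stations"
# from one concentrated standard. All values are placeholders.
STOCK_NTU = 4000.0          # the single high-NTU standard suggested above
FINAL_VOLUME_ML = 250.0     # same final volume in every container

def stock_needed(target_ntu):
    """Millilitres of stock to dilute to FINAL_VOLUME_ML for a target NTU."""
    return target_ntu * FINAL_VOLUME_ML / STOCK_NTU

for ntu in (0, 50, 200, 800, 2000):     # "clean" to "dirty" sites
    print(f"{ntu:5d} NTU  ->  {stock_needed(ntu):6.1f} mL stock, "
          f"{FINAL_VOLUME_ML - stock_needed(ntu):6.1f} mL clear water")
```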
Also, show what a difference a tiny mistake in plotting a plane’s or a ship’s course can make… only 0.5 degrees off, and after days of traveling… uh oh! You missed Hawaii… you are out of fuel in the middle of the Pacific Ocean… GAME OVER.
I would love to see a comparison of the digital thermometers used in Australia since 1998, which have 1-second resolution, with the mercury thermometers used before then, which have about 1-minute resolution. This must have a big impact on BOM stats.
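A toy illustration of why sampling resolution matters, using synthetic noise and a 60-second running mean as a crude stand-in for a slow-response mercury thermometer (nothing to do with actual BOM instruments or data):

```python
# Toy simulation of why a 1-second instantaneous reading can record a higher
# daily maximum than a thermometer that effectively averages over ~1 minute.
# The "weather" here is synthetic noise, not observations.
import random

random.seed(1)
base = 30.0
# one hour of 1-second temperatures: a steady level plus short fluctuations
secs = [base + 0.5 * random.gauss(0, 1) for _ in range(3600)]

max_1s = max(secs)
# crude stand-in for a slow-response thermometer: 60-second running mean
max_1min = max(sum(secs[i:i + 60]) / 60 for i in range(3600 - 60))

print(f"1-second max: {max_1s:.2f} C")
print(f"1-minute max: {max_1min:.2f} C")   # lower, because spikes average out
```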
As far as using actual temperatures of water goes, perhaps aquarium heaters/thermometers?
Portable sous vide cooking units made today are pretty good at keeping liquids at constant temperature for long periods of time and they are reasonably priced.
A thought about data collection.
As I understand it, homogenizing temps involves taking surrounding “official” sites and extrapolating the temperature for a large area where there are no “official” sites.
In the stores I’ve noticed home temperature/weather stations that can upload the values to a website. (Of course that gives no clue as to proper siting.)
But if those home kits’ values in one of those large homogenized areas vary greatly …?
The experiment that cries out to be done is to calculate the global average anomaly history with and without the adjustments, to see what difference it makes. I’ve done it, and the difference is small.
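For anyone wanting to try that comparison themselves, here is a rough outline. It assumes the qcu (unadjusted) and qcf (adjusted) files have already been parsed into DataFrames with columns of my own naming, and there is no gridding or area weighting, so it only shows the shape of the calculation, not a proper global average:

```python
# Outline of the with/without-adjustments comparison, assuming the qcu
# (unadjusted) and qcf (adjusted) files have been parsed into DataFrames
# with columns ['station', 'year', 'month', 'value'] in deg C.
import pandas as pd

BASE = (1961, 1990)   # anomaly base period (an arbitrary choice here)

def global_anomaly(df: pd.DataFrame) -> pd.Series:
    base = df[(df.year >= BASE[0]) & (df.year <= BASE[1])]
    # per-station, per-month climatology over the base period
    clim = (base.groupby(["station", "month"])["value"]
                .mean().rename("clim").reset_index())
    anom = df.merge(clim, on=["station", "month"], how="inner")
    anom["anom"] = anom["value"] - anom["clim"]
    # straight (unweighted) mean anomaly over all stations per year
    return anom.groupby("year")["anom"].mean()

# diff = global_anomaly(qcf_df) - global_anomaly(qcu_df)
# diff.plot()   # how much of the reported trend comes from the adjustments
```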
Define “small” Nick.
While you’re at it Nick, you might mention which way the error goes.
Look at the squirrel over there.
Adjustments are made on a regional basis. When you go global you ‘smear’ everything. Of course, there is little difference. Suddenly you get a new baseline.
Do you remember the post on here called On ‘denying’ Hockey Sticks, USHCN data, and all that – part 2?
Look at the chart USHCN Temperatures Raw 5-yr Smooth
https://wattsupwiththat.com/2014/06/26/on-denying-hockey-sticks-ushcn-data-and-all-that-part-2/
It bears no resemblance to a currently plotted chart or the other charts Zeke posted, which have now unfortunately disappeared.
I have stored NCDC & Hadcrut charts that also bear no resemblance between old and new data, but I can’t post the Excel sheet on here.
Question:
Did you use “TheWayBackMachine” to get the without adjustment values or what a current website now says are the without adjustment values?
GHCN publishes the full unadjusted record here, updated daily. It is the qcu file, the qcf is adjusted. That is the primary source. From the discussion here, people do understandably get the impression that they only produce adjusted data.
Try putting that site address in the search of TheWayBackMachine.
You don’t need to, and it would be of no use. Each day they post an updated version of the entire history (unadjusted and adjusted). That overwrites previous versions.
TheWayBackMachine may have histories before they were overwritten.
Here’s the oldest I found.
https://web.archive.org/web/20151030001149/https://www1.ncdc.noaa.gov/pub/data/ghcn/v4/
A click on the “parent directory” takes you here:
https://web.archive.org/web/20151030175532/http://www1.ncdc.noaa.gov/pub/data/ghcn/
Anthony, you might remember I spent some time exploring the temperature record a decade ago, and homogenisation really bugged me. While I don’t have a suggestion for an experiment, the effect of what is homogenised is likely to be important. And I don’t just mean rural vs urban. Some insight here – https://diggingintheclay.wordpress.com/2012/10/29/the-trouble-with-anomalies-part-2/ Actually obvious, but challenging to prove an effect.
Hi, I’m a PhD/MS chemist, always here to help. Experimental design is the most solid foundation of a study. The immediate technical question that comes to my mind is how representative your model is with respect to the real system you are trying to mimic and draw conclusions about. Statistics play the role of making sure that the experiment is objective and that valid conclusions can be drawn based on confidence intervals. One problem with measuring climate from the point of view of temperature is that this reference has too many contributing variables, so it is too hard to distinguish the pieces of heat from the many contributors, including CO2. It is scientifically more adequate to study an issue from a reference point that has the minimum of contributions, so that correlations become correspondences, in which there is more confidence to affirm that there is a cause and effect. This is why scientists are saying that it would be better to study the climate change issue not from the temperature point of view, but from pressures and volumes. The thermodynamics of temperature for gases is established by the ideal gas law PV = nRT, and so data on pressure and volume can be used to study temperature changes indirectly.
A recent article here discussed the issue of heat transmission by convection, used by gas molecules to transmit heat in very low amounts, and the ability of CO2 to heat up the atmosphere. Another recent article mentioned that there is no scientific proof that temperature is a function of CO2 concentration; no one knows whether that relationship is linear, logarithmic, quadratic, exponential, a power series, …, and that will greatly affect the real role of CO2 in the atmosphere.
Simulating something about the atmosphere experimentally in a lab is actually nearly impossible for modern science. Notice that we are attempting to extrapolate the science of the minuscule into the macro atmosphere. This transference requires major adjustments to the existing theory of gases to account for effects that are greater than the quantum effects on which thermodynamics is founded, contributions that are not accounted for in the formulas. The adjustments are mostly unknown, so using the science of thermodynamics as it is would not even be enough to really understand the behavior of the atmosphere. The classification of the atmosphere as a chaotic system speaks for itself: the indeterminism of random events makes future predictions impossible.
In addition, any experiment attempting to mimic something related to CO2, warming, and the earth would have to make sure that the ratio of gases is kept at about 78% N2, 21% O2, 0.90% water vapor, and 0.04% CO2; better experiments must include cycles of carbon transformation among the atmosphere, soils, and oceans. The earth is a living equilibrium of constant physico-chemical changes, with a level of articulation beyond comprehension. Whoever can overcome all of these issues, and a lot more, to come up with real scientific answers deserves a PhD.
Another recent article discussed a question that I happen to have been asking for a long time: no one knows the macro equivalent of the thermodynamic concept of heat capacity for the atmosphere, meaning that we don’t know how much heat the atmosphere can take before showing a change of 1 degree. This would say a lot about the possibility of any gas changing its overall temperature. In addition, the exact amount of heat that individual molecules of CO2 can absorb and transmit is not clear.
Furthermore, experiments would have to assess issues such as the quantum effects of the molecular modes of vibration attained by CO2 molecules as a consequence of absorbing IR energy; this does not account for heat as in temperature.
As a researcher I advise being careful with experiments, because experiments can be easily flawed and even manipulated. If anyone is attempting to do experiments and share results, I strongly advise going the standard route of designing the experiment, running it, getting data, drawing conclusions, and publishing in a peer-reviewed journal. Until then it would only be insignificant/a hypothesis/an idea.
I will read more of your experiment and Gore’s; I didn’t finish them. Always here to help, and I could take some time to check out your experimental design. Good luck with it.
Thanks.
JBVigo, PhD
I don’t think the experiment/demonstration is about reproducing the physical effects of homogenisation. I think Anthony is looking for an analogy. Analogies are limited in scope.
He’s probably looking for a WAH !!! factor that immediately makes a person think.
Y’all ‘Climate Deniers’ gonna get us all killed: Massive explosion on far side of the sun could have been catastrophic for Earth https://freerepublic.com/focus/f-chat/4040192/posts