Guest essay by Mike Jonas
Introduction
There are a number of organisations that produce estimates of global temperature from surface measurements. They include the UK Met Office Hadley Centre, the Goddard Institute for Space Studies (GISS) and Berkeley Earth, among others.
They all suffer from a number of problems. Here, an alternative method of deriving global temperature from surface measurements is proposed, which addresses some of those problems.
Note: The terms global temperature and regional temperature will be used here to refer to some kind of averaged surface temperature for the globe or for a region. It could be claimed that these would not be real temperatures, but I think that they would still be useful indicators.
The Problems
Some of the problems of the existing systems are:
· Some systems use temperature measurements from surrounding weather stations (or equivalent) to adjust a station’s temperature measurements or to replace missing temperature measurements. Those adjusted temperatures are then used like measured temperatures in ongoing calculations.
· The problem with this method is that surrounding stations are often a significant distance away and/or in very different locations, and their temperatures may be a poor guide to the missing temperatures.
· Some systems use a station’s temperature and/or the temperatures of surrounding stations over time to adjust a station’s temperature measurements, so that they appear to be consistent. (I refer to these as trend-based adjustments).
· There is a similar problem with this method. For example, higher-trending urban stations, which are unreliable because of the Urban Heat Effect (UHE), can be used to adjust more reliable lower-trending rural stations.
· Some systems do not make allowances for changes in a station, for example new equipment, a move to a nearby location, or re-painting. Such changes can cause a step-change in measured temperatures. Other systems treat such a change as creating a new station.
· Both these methods have problems. Systems that make no allowance can apply inappropriate trend-based adjustments, because the step-change is never identified. Systems that create a new station can also make inappropriate trend-based adjustments. For example, if a station’s paint deteriorates, then its measurements may acquire an invalid trend. On re-painting, the error is rectified, but by regarding the repainted station as a new station the system then incorporates the invalid trend into its calculations.
There are other problems, of course, but a common theme is that individual temperature measurements are adjusted, or estimated from other stations and/or other dates, before they are used in the ongoing calculations. In other words, the set of temperature measurements is changed to fit an expected model before it is used. [“Model” in this sense refers to certain expectations of consistency between neighbouring stations or of temperature trends. It does not mean “computer model” or “computer climate model”.]
The Proposed New System
The proposed new system uses the set of all temperature measurements and a model. It adjusts the model to fit the temperature measurements. [As before, “model” here refers to a temperature pattern. It does not mean “computer model” or “computer climate model”.]
Over time, the model can be refined and the calculations can be re-run to achieve (hopefully) better results.
The proposed system does not on its own solve all problems. For example, there will be some temperature measurements that are incorrect or unreliable in some significant way and will genuinely need to be adjusted or deleted. This issue is addressed later in this article.
For the purpose of describing the system, I will begin by assuming that the basic time unit is one day. I will also not specify which temperature I mean by the temperature, but the entire system could for example be run separately for daily minimum and maximum temperatures. Other variations would be possible but are not covered here.
The basic system is described below under the two subheadings “The Model” and “The System”.
The Model
The model takes into account those factors which affect the overall pattern of temperature. A very simple initial model could use, for example, time of year, latitude, altitude and urban density, with a simple factor applied to each, e.g. x degrees C per metre of altitude.
The model can then be used to generate a temperature pattern across the globe for any given day. Note that the pattern has a shape but it doesn’t have any temperatures.
So, using a summer day in the UK as an example, the model is likely to show Scottish lowlands as being warmer than the same-latitude Scottish highlands but cooler than the further-south English lowlands, which in turn would be cooler than urban London.
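To make this concrete, here is a minimal sketch of such an initial model in Python. Every coefficient below (seasonal swing, latitude gradient, lapse rate, urban term) is an illustrative placeholder rather than a proposed value, and the seasonal term is written for the northern hemisphere only:

```python
import math

def model_shape(day_of_year, latitude_deg, altitude_m, urban_density):
    """Relative temperature pattern: a shape, not an absolute temperature.

    All coefficients are placeholders for illustration only.
    """
    # Seasonal swing, coldest in mid-January, warmest in mid-July (NH).
    seasonal = -10.0 * math.cos(2 * math.pi * (day_of_year - 15) / 365.25)
    latitudinal = -0.6 * abs(latitude_deg)   # cooler towards the poles
    lapse = -0.0065 * altitude_m             # roughly 6.5 deg C per km of altitude
    urban = 2.0 * urban_density              # crude UHE term, density in [0, 1]
    return seasonal + latitudinal + lapse + urban
```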
The System
On any given day, there is one temperature measurement for each weather station (or equivalent) active on that day. That is, there is a set of points (locations), each of which has one temperature measurement.
These points are then triangulated. That is, a set of triangles is fitted to the points. [Figure: the station points, labelled with letters, joined into a mesh of triangles.]
Note: the triangulation is optimised to minimise total line length. So, for example, line GH is used, not FJ, because GH is shorter.
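As a sketch of the triangulation step: SciPy’s off-the-shelf Delaunay triangulation is a practical stand-in (it maximises the minimum angle rather than minimising total line length, so it can differ slightly from the scheme described above; a commenter below also recommends it):

```python
import numpy as np
from scipy.spatial import Delaunay

# One row per station reporting on the day: (x, y) map coordinates.
stations = np.array([[0.0, 0.0], [1.0, 0.2], [0.4, 1.0],
                     [1.3, 1.1], [2.0, 0.3]])
tri = Delaunay(stations)
print(tri.simplices)  # each row gives the three station indices of one triangle
```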
The model is then fitted to all the points. The triangles are used to estimate the temperatures at all other points by reference to the three corners of the triangle in which they are located. In simple terms, within each triangle the model retains its shape while its three corners are each moved up or down to match their measured temperatures. (For points on one of the lines, it doesn’t matter which triangle is used, the result is the same).
I can illustrate the system with a simple 1D example (i.e. along a line). On a given day, suppose that along the line between two points the model looks like this: [Figure: the model’s temperature shape along the line between the two points.]
If the measured temperatures at the two points on that day were, say, 12 and 17 deg C, then the system’s estimated temperatures would use the model with its ends shifted up or down to match the start and end points: [Figure: the same model shape with its ends shifted to pass through 12 and 17 deg C.]
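A minimal sketch of this 1D case in Python: the measured endpoint temperatures are honoured exactly, and the model contributes only its shape in between. The hump-shaped model function is an arbitrary stand-in:

```python
def estimate_1d(x, model, x0, x1, t0, t1):
    """Estimate at x, given the model shape and measured temps t0, t1 at x0, x1."""
    w = (x - x0) / (x1 - x0)      # fraction of the way along the line
    offset0 = t0 - model(x0)      # shift needed at the left end
    offset1 = t1 - model(x1)      # shift needed at the right end
    return model(x) + (1 - w) * offset0 + w * offset1

model = lambda x: 14.0 + 3.0 * x * (1.0 - x)  # arbitrary hump-shaped model
print(estimate_1d(0.0, model, 0.0, 1.0, 12.0, 17.0))  # 12.0: matches left station
print(estimate_1d(1.0, model, 0.0, 1.0, 12.0, 17.0))  # 17.0: matches right station
print(estimate_1d(0.5, model, 0.0, 1.0, 12.0, 17.0))  # 15.25: midpoint keeps the hump
```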
Advantages
There are a number of advantages to this approach:
· All temperature measurements are used unadjusted. (But see below re adjustments).
· The system takes no notice of any temperature trends and has no preconceived ideas about trends. Trends can be obtained later, as required, from the final results. (There may be some kinds of trend in the model, for example seasonal trends, but they are all “overruled” at every measured temperature.)
· The system does not care which stations have gaps in their record. Even if a station only has a single temperature measurement in its lifetime, it is used just like every other temperature measurement.
· No estimated temperature is used to estimate the temperature anywhere else. So, for example, when there is a day missing in a station’s temperature record then that station is not involved in the triangulation that day. The system can provide an estimate for that station’s location on that day, but it is not used in any calculation for any other temperature.
· No temperature measurement affects any estimated temperature outside its own triangles. Within those triangles, its effect decreases with distance.
· No temperature measurement affects any temperature on any other day.
· The system can use moving temperature measurement devices, e.g. on ships, provided the model or the device caters for things like time of day.
· The system can “learn”, i.e. its results can be used to refine the model, which in turn can improve the system (more on this later). In particular, its treatment of UHE can be validated and re-tuned if necessary.
Disadvantages
Disadvantages include:
· Substantial computer power may be needed.
· There may be significant local distortions on a day-to-day basis. For example, the presence or absence of a single measurement from one remote station could significantly affect a substantial area on that day.
· The proposed system does not solve all the problems of existing systems.
· The proposed system does not completely remove the need for adjustments to measured temperatures (more on this later).
System Design
There are a number of ways in which the system could be designed. For example, it could use a regular grid of points around the globe, and estimate the temperature for each point each day, then average the grid points for global and regional temperatures. Testing would show which grid spacings gave the best results for the least computer power.
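A sketch of that grid design, assuming a hypothetical estimate_at(lat, lon) function built from the day’s triangulated fit; the 5-degree spacing is arbitrary, and the cos(latitude) weighting accounts for grid cells shrinking towards the poles:

```python
import numpy as np

def global_average(estimate_at, lat_step=5.0, lon_step=5.0):
    """Area-weighted global mean of a daily temperature field."""
    lats = np.arange(-90.0 + lat_step / 2, 90.0, lat_step)
    lons = np.arange(-180.0 + lon_step / 2, 180.0, lon_step)
    total, weight_sum = 0.0, 0.0
    for lat in lats:
        w = np.cos(np.radians(lat))  # cell area shrinks towards the poles
        for lon in lons:
            total += w * estimate_at(lat, lon)
            weight_sum += w
    return total / weight_sum
```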
Better and simpler designs may well be possible.
Note: whenever long distances are involved in the triangulation process, Earth’s surface curvature could matter.
Discussion
One of the early objectives of the new system would be to refine the model so that it better matched the measured temperatures, thus giving better estimated temperatures. Most model changes are expected to make very little difference to the global temperature, because measured temperatures override the model. After a while, the principal objective for improving the model would not be a better global temperature, it would be … a better model. Eventually, the model might contribute to the development of real climate models, that is, models that work with climate rather than with weather (see Inside the Climate Computer Models).
Oceans would be a significant issue, since data is very sparse over significant ocean areas. The model for ocean areas is likely to affect global averages much more than the model for land areas. Note that ocean or land areas with sparse temperature data will always add to uncertainty, regardless of the method used.
I stated above (“Disadvantages”) that the proposed system does not completely remove the need for adjustments to measured temperatures. In general, individual station errors don’t matter provided they are reasonably random and not systemic, because they will average out over time and because each error affects only a limited area (its own triangles) on one day only. So, for example, although it would be tempting to delete obviously wrong measurements, it is better to leave them in if there are not too many of them: they have little impact, and no deletions would then need to be justified and documented. The end result would be a simpler system, easier to follow, to check and to replicate, and less open to misuse (see “Misuse” below), although there would be more day-to-day variation.
Systemic errors do matter, because they can introduce a bias, so adjustments for these should be made, and the adjustments should be justified and documented. An example of a systemic error could be a widespread change to the time of day at which max-min thermometers are read. Many of the systemic errors have already been analysed by the various temperature organisations.
It would be very important to retain all original data, so that runs of the system using adjusted measurements can be compared with runs using the original data, in order to quantify the effect of the adjustments and to assist in detecting bias.
Some stations may be so unreliable or poorly sited that they are best omitted – for example, stations near air-conditioner outlets, or at airports where they receive blasts from aircraft engines.
The issue of “significant local distortions on a day-to-day basis” should simply be accepted as a feature of the system. It is really only an artefact of the sparseness and variability of the temperature measurement coverage. The first aim of the system is to provide regional and global temperatures and their trends. Even a change to a station that caused a step change in its data (such as new equipment, a move to a nearby location, or re-painting) would not matter much, because each station influences only its own triangles. It would matter, however, if such step-changes were consistent and widespread, i.e. if they could introduce a significant bias at a regional or global level.
It wouldn’t even matter if at a given location on a particular day the estimated maximum temperature was lower than the estimated minimum temperature. This could happen if, for example, among the nearby stations some had maximum temperatures missing while some others had minimum temperatures missing. (With a perfect model, it couldn’t happen, but of course the model can never be perfect.)
All the usual testing methods would be used, like using subsets of the data. For example, the representation of UHE in the model can be tested by calculating with and without temperature measurements on the outskirts of urban areas, and then comparing the results at those locations.
All sorts of other factors can be built into the model, some of which may change over time – e.g. proximity to ocean, ocean currents, average cloud cover, actual hours of sunshine, ENSO and other ocean oscillations, and many more – assuming that the necessary data is available, of course.
Evaluation
Each run of the system can produce ratings that give some indication of how reliable the results are:
· How much the model had to be adjusted to fit the temperature measurements.
· How well the temperature measurements covered the globe.
The ratings could be summarised globally, by region, and by period.
Some station records include measurement-reliability information. This could be incorporated into the ratings.
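As a sketch of how such ratings might be computed (the specific formulas and the 250 km threshold are illustrative choices, not part of the proposal):

```python
import numpy as np

def fit_rating(measured, model_at_stations):
    """Mean absolute shift (deg C) applied to the model at the stations."""
    diffs = np.asarray(measured) - np.asarray(model_at_stations)
    return float(np.mean(np.abs(diffs)))

def coverage_rating(nearest_station_km, good_km=250.0):
    """Fraction of grid points within good_km of a reporting station."""
    return float(np.mean(np.asarray(nearest_station_km) <= good_km))
```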
Misuse
Like all systems, the proposed system would be open to misuse, but perhaps not as much as existing systems.
Bias could still be introduced into the system by adjusting historical temperature measurements – eg. to increase the rate of warming by lowering past temperatures. The proposed system does make this a bit more difficult, because it removes some of the reasons for adjusting past temperature measurements. In particular, temperature measurements cannot be adjusted to fit surrounding measurements, and they cannot be adjusted to fit a model (deletion of “outliers” is an example of this). If such a bias was introduced, the ratings (see “Evaluation” above) would not be affected, so they would not be able to assist in detecting the bias. The bias could be detected by comparing results against results from unadjusted data, but proving it against a determined defence could be very difficult.
Bias could still be introduced into the system by exploiting large areas with no temperature measurements, such as the Arctic, but the proposed system also makes this a bit more difficult. In order to exploit such areas, the model would need to be designed to generate change within the unmeasured area. So, for example, a corrupt model could make the centre of the Arctic warmer over time relative to the outer regions where the weather stations are. It would be possible to detect this type of corruption via the ratings (a high proportion of the temperature trend would come from a region with a low coverage rating), but again proof would be difficult.
NB. In talking about misuse, I am not in any way suggesting that misuse does or would occur. I am simply checking the proposed system for weaknesses. There may be other weaknesses that I have not identified.
Conclusion
It would be very interesting to implement such a system, because it would operate very differently to current systems and would therefore provide a genuine alternative to and check against the current systems. Raising the necessary funding could be a major hurdle.
The system could also, I think, be used to check some weather and climate theories against historical temperature data, because (a) it handles incomplete temperature data, (b) it provides a structure (the model) in which such theories can be represented, and (c) it provides ratings for evaluation of the theories.
Provided that the system could be used without needing too much computer power, it could be suitable for open-source/cooperative environments where users could check each other’s results and develop cooperatively. The fact that the system is relatively easy to understand, unlike the current set of climate models, would be a big advantage.
Footnotes
1. I hope that the use of the word “model” does not cause confusion. I tried to make it clear that the model I refer to in this document is a temperature pattern, not a computer model. I tried some other words, but they didn’t really work.
2. I assume that unadjusted temperature data is available from the temperature organisations (weather bureaux etc). To any reasonable person, it is surely inconceivable that these organisations would not retain their original data.
###
Mike Jonas (MA Maths Oxford UK) retired some years ago after nearly 40 years in I.T.
Abbreviations
C – Centigrade or Celsius
ENSO – El Niño Southern Oscillation
GISS – Goddard Institute for Space Studies
UHE – Urban Heat Effect
1D – 1-dimensional
How about just always requiring that the raw versus adjusted data be plotted on any graph or in any paper? People can then see for themselves the adjusted ‘opinion’, whatever method is used.
How about we stop trying to calculate global average temperatures from surface records altogether? They clearly are not up to the task. Let’s leave these measurements in their raw form and use them only for trivia purposes for local weather segments after the news each day!
wicked – Yes I have wondered that myself. By all means report them but really concentrate on marine temperatures with a coordinated, standardised system. With the degree of money now being committed to climate one could build a system from hell. These are the sort of issues that the IPCC should be addressing.
I have been using a system somewhat like this, which I describe here, with links there and many earlier articles. For over four years I have been posting reports, latest for December here. My main method (mesh) does involve triangulation – I use the convex hull of the points, which is effectively Delaunay – better than minimising length.
I don’t attempt to predict by latitude or whatever – there is no need. The historic record gives a better predictor. In effect, monthly values are fitted with a linear model by least squares, and the result integrated by linear interpolation within the triangles. I use unadjusted GHCN, although adjusted makes little difference.
I don’t use daily values – then the effort of triangulation would be prohibitive, with little gain (and not much data readily available). Monthly is well within the capability of my PC.
You can run the system yourself – the R code is provided and described.
Nick,
So as a favour, could you modify the code to calculate the 4th root of temperature, trend that, and then compare to the temperature trend?
David,
“4th root of temperature”
I presume you mean fourth power. I could, easily, but people overrate the non-linearity produced by fourth power. Most monthly means lie between about 260 and 305 K. The ratio of derivatives of T^4 would be about 1.6:1, but that is for fairly rare extremes.
The nonlinearity for anomalies would be quite negligible. So the implementation would simply be to vary the weighting by that derivative (of T^4) for each reading.
Of course, you would probably want to average the days within the month by T^4. That I’m afraid we can’t.
Of course, you would probably want to average the days within the month by T^4. That I’m afraid we can’t.
Why?
As for it being a small ratio, 1.6:1 is not a small ratio when one is trying to resolve global trends out to 2 decimal places. 1.6:1 is huge! As for it being only for rare extremes, that’s the whole point, they aren’t rare.
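A quick check of the 1.6:1 figure under discussion: the derivative of T^4 is 4T^3, so the ratio of the weightings at the warm and cold ends of the 260–305 K range quoted above is:

```python
print((305 / 260) ** 3)  # d(T^4)/dT at 305 K vs at 260 K: about 1.61
```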
“Why?”
Because for many locations we only have monthly averaged data. This is changing somewhat with GHCN Daily, but not completely. People don’t appreciate what an enormous effort has been required to simply digitise the temperature record that we have. Transfer it reliably from paper to bits.
But I should push back against the idea that T^4 is even appropriate. It is no doubt based on the idea that you really want average flux, not T. That isn’t necessarily true. But insofar as it is, T^4 dependence applies only to the relatively small proportion of flux that exits via the atmospheric window. For flux that interacts with the air (and GHG) ordinary diffusion (Rosseland) is a much better model, and that is linear.
To put it another way, while the surface itself radiates approximately as T^4, the almost balancing back radiation also increases as T^4. The difference is close to linear.
Or yet another way, as the Earth warms with GHG, its apparent T seen from space doesn’t increase.
To put it another way, while the surface itself radiates approximately as T^4, the almost balancing back radiation also increases as T^4. The difference is close to linear.
Or yet another way, as the Earth warms with GHG, its apparent T seen from space doesn’t increase.
Correct. The effective black-body temperature of Earth doesn’t change by one bit due to the addition of GHGs. However, that doesn’t mean that the system as a whole has a linear response. In fact, if it DID have a linear response, you would get a NON-linear result by averaging temperatures across the globe. Get it? The back-balancing radiation is not in near balance from an altitude/lapse-rate perspective, nor from a latitude perspective. The notion that everything just stays roughly linear due to the addition of GHGs is to completely ignore all the physical processes that create everything from weather to trade winds to ocean currents, to processes like La Niña and El Niño.
That is a ridiculous assumption and one that is without merit unless and until it has been put to the test.
“Why?”
Because for many locations we only have monthly averaged data.
Actually, now that I’ve thought it through, that would probably be sufficient. As long as the values at each location could be raised to the 4th power, and then averaged in time and space, the over-weighting of cold regimes (high lat, high alt, winter) and the under-weighting of warm regimes (low lat, low alt, summer) could probably become apparent.
Of course that is just my belief.
Thanks, Nick. I’m not familiar with Delaunay, but any consistent triangulation system would do.
re predicting latitude etc – by using the historical record as a predictor, you are in fact using a model to predict (the model is the interpretation of the historical record), and you are using the system itself to “learn” as I indicated. What you start with doesn’t matter, though it would be nice to keep the null latitude model for comparison. I suspect that the only difference would be in the ratings. UHE is more interesting than latitude, and it would be nice to see its influence in temperature estimates reduced to where it belongs.
I illustrated using daily values, but monthly would be valid (“I will begin by assuming that the basic time unit is one day. … Other variations would be possible …“), provided the whole month is available at each location. I agree that daily triangulation could well be prohibitively expensive, and a monthly system might have to be used for practical purposes. However, since monthly data is presumably constructed from daily data and some months can’t be constructed, data would be lost. I would be reluctant to lose anything until proved necessary.
I looked at your “little difference” page, and the graph seemed to me to be out of kilter with the explanation: the difference between the two plots went all one way while the explanation said it changed direction. I suspect that what I was looking at wasn’t what I thought it was.
Mike,
What is shown on that page is a back-trend plot. It shows the trend from the x-axis time to present. So starting before about 1960 the trends of adjusted to present are higher, but only by less than 0.1°C/Century. For trends from later than about 1970 to present, the adjusted is actually less.
There is a post here showing the regional breakdown, and also global, in the more familiar time series plot.
Thanks, Nick. I have thought quite a lot about what you are saying, and have come to the conclusion that all data should be used unadjusted no matter what. It makes everything a lot easier, and it makes everything a lot less open to manipulation. This ties in to something I said in another comment, namely that adjustments can introduce bias – even adjustments aimed at removing bias.
If there is something so dramatically important that it simply has to be dealt with, then its proper place is in the model – so for example if it was desperately needed then aircraft traffic could be in the model. I should have stuck to my own principle: measurement trumps model.
Why not start with something “easy” like a house. What’s the average temperature of your home? Good luck!
Question:
Is it important to determine the actual “average” temperature or is it more important to determine a trend?
If the average of intensive properties is physically meaningless, then the trend will be equally meaningless.
The average price of the shares in a portfolio is physically meaningless, but the trend matters a lot.
Really? The trend of a physically meaningless average is somehow meaningful? Share prices aren’t physical things, they’re monetary concepts.
I’d suggest you read this.
Mike Jonas January 10, 2016 at 9:02 pm
The average price of the shares in a portfolio is physically meaningless, but the trend matters a lot.
The ratio of “price” and “shares” is not governed by a non-linear physical law. The analogy is not apt.
I get why you are trying to do what you are trying to do. But unless and until the data is first converted to 4th root of T and then averaged and trended, you’ve got nothing meaningful in terms of the earth’s energy balance. Every physicist I have ever talked to about this issue agrees to one extent or another. The reluctance of people with the processing power, access to data, and programming skills to do that just baffles me. They’ll spend countless hours figuring out how to interpolate data via some uber sophisticated math, but this simple ask gets ignored. If someone were to explain to me why I am wrong, I’d be happy to listen. But no one does.
I rather suspect (my hypothesis, if you will) that the resulting trend will be even more muted than the temperature trend.
davidmhoffer – I hadn’t considered using the 4th root, but of course it does make some sense. Please note that I don’t rule anything out (“The terms global temperature and regional temperature will be used here to refer to some kind of averaged surface temperature for the globe or for a region.“), so the system could use the 4th root of averaged T^4. At least it could get people thinking …..
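A sketch of that “4th root of averaged T^4” idea, with arbitrary kelvin values for illustration:

```python
import numpy as np

temps_k = np.array([260.0, 280.0, 305.0])  # arbitrary monthly means in kelvin
flux_mean = np.mean(temps_k ** 4) ** 0.25  # "effective" flux-weighted mean
print(flux_mean, np.mean(temps_k))         # ~283.5 vs ~281.7: warm end weighted more
```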
You can derive a “global temperature” all you want, but it will still be physically meaningless.
Why does it even matter?
Because the surface temperature is being abused/adjusted to fabricate a “See, we told you so.” retrograde justification of CAGW. I’m sure there is some logical fallacy that applies.
Broken record time.
1) Anthro’s net 4 Gt/y CO2 contribution to the natural 45,000 Gt of reservoirs and hundreds of Gt/y fluxes is trivial, the uncertainties in natural fluxes are substantial, IPCC AR5 Fig 6.1 and Table 6.1.
2) The 2 W/m^2 RF of the net anthro CO2 accumulated 1750 to 2011 is trivial in the magnitude and uncertainties of the total atmospheric power flux balance. Fig 10 Trenberth et al 2011. The water vapor cycle, albedo, ice, evaporation could absorb this amount in the 3rd or 4th decimal point.
3) The GCMs are worse than useless. IPCC AR5 Text Box 9.2
Do you mean AR5 WG1 ALL FINAL Box 9.1 | Climate Model Development and Tuning (p. 745)?
feliksch
No, Box 9.2 beginning page 769. FAQ 8.1 is another interesting read.
Yeah. I say the term “model”. The world’s best dynamical forecast weather models consistently “bomb” after only 7 days. And “these people” want to talk climate models???
Seven days? You’re being WAY too generous. Yesterday they weren’t forecasting rain in my area for a few days. Today they’re now forecasting rain. Total fail.
There is one pristine, evenly spaced, unadjusted surface temperature compilation, USCRN, and one that also adds balloon atmospheric samples, ClimDiv.
Over the area that these cover, the satellite data trends in UAH and RSS are pretty much the same, thus verifying the data extraction algorithms of the satellite temp series.
So what is wrong with using data derived from satellites?
Min/max is too often a measure of how high or low the temp was allowed to go by cloud, or the absence thereof, at the potential peak or trough times of a day. I am staggered that this simple glaring fact is ignored.
“Coolest” years in my region (mid-coast NSW) were 1929 and 1974…both years of much rain, hence cloud. (Not that rain tells the full story of cloud at potential min/max time of day, but it’s the only indicator available that means at least a bit.) Is it surprising that our driest year after legendary 1902 was also our “hottest” year on record (1915 – sorry warmies)? Is it surprising that “record” high minima occurred in the massive wet of 1950 in regions of northern NSW where the cloud usually clears at night in winter – but didn’t in 1950?
The problem with global temp is not that it is hard to determine by min/max records. The problem is that it is utterly without point. Because cloud.
Well, a couple of comments. First, Mike, thanks for an interesting proposal. It is not a whole lot different from what the Berkeley Earth folks do. They define a global “temperature field” which is a function of (from memory) latitude, altitude, and day of the year. In any case, much like your system.
For me, I’d have to go with the sentiment expressed by Anthony and his co-authors. Their thought is that if you have a huge pile of data, some of which is good, some bad, and some ugly, you should just use the good data rather than busting your head figuring out how to deal with the bad and the ugly.
That of course brings up a whole other discussion about what is “good data”, but in my eyes that discussion is much better than trying to put lipstick on a pig.
w.
Hi Willis – yes, if your sole objective is to get global and regional temperature trends using the surface station history, then go for quality. No question. But I see the model itself as an objective too. [I doubt the model can ever get implemented, so think of this as a thought experiment].
I do expect that much of what I am proposing is already being done, as you say re BEST. But I am trying to get the mindset turned around so that there is no attempt to fit the temperature measurements to the model; only the model can change.
We have become obsessed by temperature trends, and the BEST mindset is full of them. I see a trend as something that you interpret from data. First comes the data, and trends play no role in collecting the data. So when you are putting together the global picture from station and other data, trends should play no role. Only after you have done that would you look at the result to see what the trends were.
We have also become obsessed by UHE. The way I deal with it is not to try to remove it from the record, but to use it and quantify it. I see a parallel with the isostatic (hope I got the term right) adjustments to sea level. The powers that be decided to report sea level adjusted up to eliminate the isostatic effect. But my view is that they should report the real sea level first and address the reasons later. So with UHE. The temperatures in urbs are higher – it’s what they are – so we should record and use those higher temperatures and address the reasons later. My system, I think, is way better for doing that. And it also ends up telling you how to remove UHE from the results if you want to.
Mike Jonas January 11, 2016 at 12:26 pm Edit
Thanks, Mike. To me, Berkeley Earth does a good job in that their method is transparent and they maintain all of the raw data, so I have access to both the adjusted and unadjusted information. And as I mentioned, they (like you) establish a “temperature field” that gives the expected temperature given the month, altitude, and latitude of the station.
They have also used what to me is a good method, called “kriging”, to establish the field. From Wiki:
Is this the right way to do it? One thing I learned from Steven Mosher is that there is no “right” way to average temperatures. There are only different ways, each with their own pluses and minuses.
Regarding your method, I fear I don’t understand it. You have fitted a triangulated mesh to the individual station points. So far, so good. But I don’t understand how that relates to the model. You say that you use the model to determine the shape of the temperature line between two stations … but how do you use that data about the shape of those lines?
In any case, I think I’ll go mess about using CERES data and DEM data to see what the temperature field looks like …
Regards,
w.
OK, here’s a simple 1D example of how the “shape” works. Suppose that somewhere between two points there is an urb so that the model between the two points looks like this:
http://members.iinet.net.au/~jonas1@westnet.com.au/UHEModel.JPG
Then suppose that on a given day the temperatures as measured at the two points are 12 and 13 deg C and that there are no measurements from the urb or anywhere else in between the two points. The estimated temperatures then look like this:
http://members.iinet.net.au/~jonas1@westnet.com.au/UHEEstimated.JPG
Hope that helps.
Thinks … maybe you meant what formula is used. I like to keep things simple, so you take the three points of the triangle in the model as representing a flat triangle, move the triangle corners to match the three measured temperatures, then place each point within the triangle at the same distance above or below the triangle as it is in the model. Other methods are possible of course, but that one is simple.
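A sketch of that formula in code, following the same conventions as the earlier sketches: barycentric weights interpolate the three corner offsets (measured minus model), which are then added back to the model at the interior point:

```python
import numpy as np

def estimate_in_triangle(p, corners, measured, model):
    """Estimate at point p inside a triangle.

    corners: three (x, y) points; measured: their measured temperatures;
    model(x, y): the model's shape value at any point.
    """
    a, b, c = [np.asarray(v, dtype=float) for v in corners]
    # Barycentric weights: solve p - a = wb*(b - a) + wc*(c - a).
    m = np.column_stack([b - a, c - a])
    wb, wc = np.linalg.solve(m, np.asarray(p, dtype=float) - a)
    wa = 1.0 - wb - wc
    offsets = [measured[i] - model(*corners[i]) for i in range(3)]
    return model(*p) + wa * offsets[0] + wb * offsets[1] + wc * offsets[2]
```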
When Willis leaves the exit door this widely open, he must like you.
I am not a physicist. Can the total energy emission from earth of the appropriate form (IR?) be measured accurately by current satellites? Is it theoretically possible to simply measure or accurately calculate energy in vs energy out on an annual basis?
Interesting use of the triangulation mapping technique. How about creating maps including the other weather-station data such as precipitation, local pressure, wind vector, cloud coverage? Could they produce sets of weather maps that, if somehow integrated over 30 years, could produce a ‘supermap’ showing actual climate change in terms of e.g. average windspeeds, rainfall, cloud cover, pressure and so on?
Thank you for your very interesting article. I think that it looks like a much improved method over what is currently being used.
However I feel that it suffers from the deficiency that there is no such thing as a global temperature. The temperature at Reykjavík, Iceland right now is very different from the temperature at Brisbane, Australia right now. Worse, the temperature at Brisbane right now (about 8:00 PM), is quite different from the temperature it will be at noon tomorrow. And the noon temperature at Brisbane today is quite different from the noon temperature in 6 months. We are looking at tens of degrees centigrade change every day, as well as summer to winter.
Now for climate change work, we don’t care so much about the actual temperature, but do want to know about the trend, so it is possible to create an alternative algorithm that is free from the systemic biases caused by attempts to merge thousands of low grade temperature records together. The method I suggest involves taking each site and breaking it up into uninterrupted segments. Any change in a site creates a new segment – relocation, painting, new equipment, etc. Instead of lowering the quality of the data, these changes now simply create new segments.
Each segment will be much shorter, but relatively high quality. You can rely on the segment to be internally consistent. In other words if it shows a trend, there is probably a trend there. The trends of a region can be combined (nothing smaller than a continent) to show the overall trend of that region. I, personally, wouldn’t combine regions to try to get the trend for the whole planet, but everybody knows climate scientists are crazy.
I still prefer to use raw satellite data. Always start with the highest quality data. Make it a rule.
Trend analysis based on actual measurements, no homogenization or infilling, just station trends.
https://micro6500blog.wordpress.com/2015/11/18/evidence-against-warming-from-carbon-dioxide/
“The system does not care which stations have gaps in their record. Even if a station only has a single temperature measurement in its lifetime, it is used just like every other temperature measurement.
· No estimated temperature is used to estimate the temperature anywhere else.”
If this means no proxy temps, I’m all for it; even a few miles can make a huge difference in temperature or weather conditions.
Establish this model for measuring the surface temperature of the Earth:
The Sun and Earth lie in the plane of the ecliptic, so the Sun’s rays are parallel to this plane and strike the Earth along circular cross-sections of the Earth that are parallel to the plane of the ecliptic. Choose a number of these planes in the northern and southern hemispheres (say every 5 degrees, measured from the centre of the Earth between the ecliptic and the point where the radius pierces the Earth’s surface).
On each of these cross-sectional circles, take a set of measurements (say every 5 degrees from an initially selected point), all carried out at the same moment, London time. That is the measurement for one day; also specify how many times to repeat it during the 24 hours.
For the annual measurements, use the true anomaly angle (the angle phi) and take them at fixed increments of it, for example every day (which is about phi = 1 degree).
Then you have a completely accurate, coordinated and consistently spaced natural temperature of the Earth.
If we use well-sited stations and find that the temperature record is quite flat and mates well with satellite data, one could imagine that selecting just the very best-sited station and using only that would give similar trend performance.
This could be continued worldwide: each country selects its best sited station and publishes that data unchanged.
Would the overall trend in temperature be different from the current land based systems currently in use?
An extreme version of this would be to have just one well sited sensor – no more, and report that trend.
Unfortunately, no one would trust data from only one national source…..
Before people suggest that just one sensor could not represent worldwide temperature: as mentioned above, averaging heat content is questionable anyway (+20 degrees, −5 degrees: average is ???), so perhaps one sensor, or one per country, could be much better.
The variance of a sample size of one is infinite / unknown. The unfortunate truth is that anything man does is impermanent and results (and errors) change over time. We have no standard sensors that do not fail, no stations without non-random error, no satellites without measurement drift, error, or infinite lifespan, nor adequate numbers of satellite replicates. Error is us.
There are limits to what we are able to do and we have to know our limitations. The reality is that humans are not currently able to gather consistent, unbiased or error free information over centuries or even portions thereof. It is certain that the current changes over a century are well within our gross measurement error and that supposed ‘global climate change’ of .8 C per century is a farce.
We should start by gathering statistically valid estimates based on random, replicated samples of adequate size so we at least measure what sorts of variance our methods produce. Since we have not even done that, it certainly would be a good start so that we more accurately know our limitations. But we do not because it is inconvenient for those with agendas to explain away data that do not agree with their viewpoints.
Thanks, Mike Jonas. Your New System makes a lot of sense, but I’m afraid it is way too much for humans right now.
We will have to keep on using just MSU LT temperatures until we come of age, if we ever do.
In the end, your approach like all the others except satellite based measurement provides an imperfect fiction which is overwhelmed by inaccuracy and variability. You really cannot logically average temperatures across the globe with such poor distribution of stations and such variability of accuracy in local measurement capability. When you consider the lack of precision, error bars expand and your final product provides no greater utility than current systems, the value of which is already oversold.
george e. smith January 11, 2016 at 9:53 am
Well, that’s not entirely true, as “Centigrade” was also the accepted and common name for the Celsius scale when I was a kid, and nobody used “Celsius”. Like the song “Fever” said back in the early 20th century,
Next, here’s the result of asking for the definition in Google:
See that part that says “another term for Celsius”? … who would have guessed? Well, actually, I and most everyone would have guessed that Centigrade is another term for Celsius. Except you. I guess.
Then we have Websters, which says:
So Websters specifically disagrees with your claim that “centigrade” means ANY scale going from zero to 100. Instead, Webster says a centigrade scale is a scale which specifically goes from 0 AT FREEZING to 100 AT BOILING.
In other words, George, your attempt at being a scientific grammar nazi is a total fail. If you are going to snark at people about their choice of words, you should first do your homework and make sure of your own facts … so here are some more facts for you
The Centigrade scale was invented in 1744, and for the next two hundred years it was called the Centigrade scale. In 1948 the CGPM (Conference General des Poids et Measures) decided to change the name to the Celsius scale.
So despite your bogus claims, Celsius is nothing more than the new name for the Centigrade scale, and as the author you so decry states, indeed both terms continue to be used, and the abbreviation “C” is the same for both.
This use of both terms should not be surprising, since the scale was originally called “Centigrade” for over two hundred years, and the change was made recently, by fiat, and by a bunch of scientists in France. As a result, the term centigrade is still well established in common parlance.
Regards,
w.
PS—Don’t get me started on the tardigrade scale …
Your model leads to a “step function” temperature map – that is, at the line equidistant between two nearest neighbors, the temperature differs on each side. This is counterintuitive, hence the usual choice of weighted smoothing over points across some kernel.
Not sure that I get what you are saying, but take a look at my comment http://wattsupwiththat.com/2016/01/10/a-new-system-for-determining-global-temperature/comment-page-1/#comment-2117872 – that shows no step function at the mid-point.
A thoughtful essay and thoughtful back and forth comments.
I too think we need to start with “some method” that uses “only the raw temperatures” with minimum “interpretation”. Then, see where that takes us.
It may allow us to see that, at least before the satellite record, world temperatures are not knowable with sufficient coverage, accuracy and precision to support the current “CAGW exists and is due to CO2” claims.
Why go through all of the contortions; why not just use satellite data and be done with it.
It seems to me that our government agencies and mainstream scientists accept and use satellite records except the temperature measurements.
For example: sea level and temperature, ice mass, hurricane (typhoon) wind and rain, polar ice mass, ice extent, CO2 concentration, readings from some ocean buoys and some tidal gauges, and much more are accepted from the several hundred meteorological satellites.
Why do the government agencies refuse to use RSS, UAH and radiosonde data for temperature?
This makes very good sense.
Can this new system be applied to historic data?
Dear Mike (Jonas),
You can NOT determine “global temperature”.
Regards,
WL
…”and a model” I stopped reading right there.
It isn’t that kind of model, as I tried to make clear. You only had to read one more sentence (or pick the clue in the previous para).