
Temperature averages of continuously reporting stations from the GISS dataset
Guest post by Michael Palmer, University of Waterloo, Canada
Abstract
The GISS dataset includes more than 600 stations within the U.S. that have been
in operation continuously throughout the 20th century. This brief report looks at
the average temperatures reported by those stations. The unadjusted data of both
rural and non-rural stations show a virtually flat trend across the century.
The Goddard Institute for Space Studies provides a surface temperature data set that
covers the entire globe, but for long periods of time contains mostly U.S. stations. For
each station, monthly temperature averages are tabulated, in both raw and adjusted
versions.
One problem with the calculation of long-term averages from such data is the occurrence of discontinuities; most station records contain one or more gaps of one or more months. Such gaps could be due to anything from the clerk in charge going off on a periodic bender to instrument failure, replacement, or relocation. At least in some cases, such discontinuities have given rise to “adjustments” that introduced spurious trends into the time series where none existed before.
1 Method: Calculation of yearly average temperatures
In this report, I used a very simple procedure to calculate yearly averages from raw
GISS monthly averages, one that deals with gaps without making any assumptions or adjustments.
Suppose we have 4 stations, A, B, C and D. Each station covers 4 time points, without
gaps:

    Time:   0    1    2    3
    A:      A0   A1   A2   A3
    B:      B0   B1   B2   B3
    C:      C0   C1   C2   C3
    D:      D0   D1   D2   D3

In this case, we can obviously calculate the average temperatures as:

    Ti = (Ai + Bi + Ci + Di) / 4,   for i = 0 … 3
A more roundabout, but equivalent scheme for the calculation of T1 would be:

    T1 = T0 + [ (A1 − A0) + (B1 − B0) + (C1 − C0) + (D1 − D0) ] / 4
With a complete time series, this scheme offers no advantage over the first one. However, it can be applied quite naturally in the case of missing data points. Suppose now we have an incomplete data series, such as:

    Time:   0    1    2    3
    A:      A0   A1   A2   A3
    B:      B0   –    B2   B3
    C:      C0   C1   C2   C3
    D:      D0   D1   D2   D3

…where a dash denotes a missing data point. In this case, we can estimate the average temperatures as follows:

    T1 = T0 + [ (A1 − A0) + (C1 − C0) + (D1 − D0) ] / 3
    T2 = T1 + [ (A2 − A1) + (C2 − C1) + (D2 − D1) ] / 3
    T3 = T2 + [ (A3 − A2) + (B3 − B2) + (C3 − C2) + (D3 − D2) ] / 4
The upshot of this is that missing monthly temperature deltas are simply dropped, and the average delta at each step is calculated from those stations that do report.
One advantage that may not be immediately obvious is that this scheme also removes
systematic errors due to change of instrument or instrument siting that may have occurred concomitantly with a data gap.
Suppose, for example, that data point B1 went missing because the instrument in station B broke down and was replaced, and that the calibration of the new instrument was offset by 1 degree relative to the old one. Since B2 is never compared to B0, this offset will not affect the calculation of the average temperature. Of course, spurious jumps not associated with gaps in the time series will not be eliminated.
In all following graphs, the temperature anomaly was calculated from unadjusted
GISS monthly averages according to the scheme just described. The code is written in
Python and is available upon request.
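To make the scheme concrete, here is a minimal Python sketch of the chained delta-averaging (an illustration only, not the actual analysis script; station records are lists over common time points, with None marking a gap):

```python
# Minimal sketch of the chained delta-averaging scheme (illustration only,
# not the actual analysis script). Each station is a list of temperatures
# over the same time points, with None marking a missing value. Assumes at
# least one station reports at every pair of consecutive time points.

def chained_average(stations):
    """Build the average series by chaining mean deltas; at each step only
    stations reporting at both time points contribute a delta."""
    n = len(stations[0])
    avg = [0.0]  # anchored arbitrarily at zero, i.e. an anomaly series
    for t in range(1, n):
        deltas = [s[t] - s[t - 1] for s in stations
                  if s[t] is not None and s[t - 1] is not None]
        avg.append(avg[-1] + sum(deltas) / len(deltas))
    return avg

# Station B misses time point 1, so B1 is never used and B2 is never
# compared to B0 -- a calibration offset introduced during the gap
# therefore cancels out of the averaged trend.
A = [10.0, 11.0, 12.0, 11.5]
B = [ 9.0, None, 13.0, 12.5]
C = [ 8.0,  9.0, 10.0,  9.5]
D = [12.0, 13.0, 14.0, 13.5]
print(chained_average([A, B, C, D]))  # [0.0, 1.0, 2.0, 1.5]
```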
2 Temperature trends for all stations in GISS
The temperature trends for rural and non-rural US stations in GISS are shown in Figure
1.

This figure resembles other renderings of the same raw dataset. The most notable
feature in this graph is not in the temperature but in the station count. Both to the
left of 1900 and to the right of 2000 there is a steep drop in the number of available
stations. While this seems quite understandable before 1900, the even steeper drop
after 2000 seems peculiar.
If we simply lop off these two time periods, we obtain the trends shown in Figure
2.

The upward slope of the average temperature is reduced; this reduction is more
pronounced with non-rural stations, and the remaining difference between rural and
non-rural stations is negligible.
3 Continuously reporting stations
There are several long-running temperature records that fail to show any
substantial long-term warming signal; examples are the Central England Temperature record and the one from Hohenpeissenberg, Bavaria. It therefore seemed of interest to look for long-running US stations in the GISS dataset. Here, I selected stations that had continuously reported at least one monthly average value (but usually many more) for each year between 1900 and 2000. This criterion yielded 335 rural stations and 278 non-rural ones.
The temperature trends of these stations are shown in Figure 3.

While the sequence and the amplitudes of upward and downward peaks are closely similar to those seen in Figure 2, the trends for both rural and non-rural stations are virtually zero. Therefore, the average temperature anomaly reported by long-running stations in the GISS dataset does not show any evidence of long-term warming.
Figure 3 also shows the average monthly data point coverage, which is above 90%
for all but the first few years. The less than 10% of all raw data points that are missing
are unlikely to have a major impact on the calculated temperature trend.
4 Discussion
The number of US stations in the GISS dataset is high and reasonably stable during the 20th century. In the 21st century, the number of stations has dropped precipitously. In particular, rural stations have almost entirely been weeded out, to the point that the GISS dataset no longer seems to offer a valid basis for comparison of the present to the past. If we confine the calculation of average temperatures to the 20th century, there remains an upward trend of approximately 0.35 degrees.

Interestingly, this trend is virtually the same with rural and non-rural stations.
The slight upward temperature trend observed in the average temperature of all
stations disappears entirely if the input data is restricted to long-running stations only, that is, those stations that have reported monthly averages for at least one month in every year from 1900 to 2000. This discrepancy remains to be explained.
While the long-running stations represent a minority of all stations, they would
seem most likely to have been looked after with consistent quality. The fact that their
average temperature trend runs lower than the overall average and shows no net warming in the 20th century should therefore not be dismissed out of hand.
Disclaimer
I am not a climate scientist and claim no expertise relevant to this subject other than
basic arithmetic. In case I have overlooked equivalent previous work, this is due to my ignorance of the field, is not deliberate, and will be amended upon request.



Okay, this means that Peter was right, then. Would anyone care to explain to me why the graph linked by Mr Palmer for Hohenpeissenberg at http://climatereason.com/LittleIceAgeThermometers/Hohenpeissenberg_Germany.html shows a virtually flat line, while the one that MFK Boulder links to at http://preview.tinyurl.com/Hohenpeissenberg shows our favourite hockey stick, preferably without using the word “baloney” in the text, even if you do spell it correctly.
In re: Garrett Curley (@ga2re2t) says:
October 24, 2011 at 3:13 am
As to the notion that ‘skeptics’ don’t deny that the Earth has warmed: I think it a bit much to lump all of us into one group with one set of beliefs. The Earth seems to have warmed, according to what the record tells us, but it is difficult to claim we know that with any degree of certainty. Most of us (I believe) stop short of accusing the curators of the data of out-and-out malfeasance, but even given honest brokers, the statistics of the matter are mind-bogglingly difficult. Imagine estimating the average temperature of a sphere roughly 8,000 miles in diameter from a data set of unevenly spaced thermometers, where gaps and overlaps abound and two-thirds of the surface has no meaningful data. We simply do not have one giant thermometer that gives us global mean surface T. (The closest we come now is the satellite data, which may in part explain the reduced number of stations; they aren’t really all that necessary with the satellites giving us significantly more accurate and complete data.) Too bad that record only begins in 1979.
In light of the best evidence, I think the Earth has probably warmed. It’s certainly warmed since the late 1800s. In fact, it’s likely warmed continuously from the LIA to the present.
But this article just takes a look at a rather humble slice of data; unadjusted data from stations with continuous reporting. It is not peer reviewed. It isn’t presented as such and isn’t yet meant for publication.
I think a lot of you believers do not have a good understanding of the nature of this debate. There may be a paper that results from this. It wouldn’t be the first time something that started out here turned into a paper, survived peer review, and got published. But we don’t ostracize those with dissenting ideas here. We talk about those ideas.
That is how science usually begins; a hypothesis is developed and a means of testing it is devised. I don’t know how the current cargo culturalists who run orthodox climatology are doing it these days. It appears that their science is based entirely on models, data adjustments, squelching dissent and gaming peer review. That is just my perception. That is my opinion and perhaps I am wrong.
Either way, if the idea in this article were to develop into a peer reviewed, published paper it would certainly give those charlatans some issues to address. Especially if the data and methodology were shared freely and without objection.
MFKBoulder October 24, 2011 at 6:11 am says: “Look at the Hohenpeissenerbg Graph and you see the statement quoted is nothig but belony.”
The link you provided does not suggest whether the graph uses raw or adjusted data.
Furthermore, the link mentions that, “In March 1950, the status of the Hohenpeissenberg station was upgraded to that of a meteorological observatory.”
Did it switch in 1950 from “Mannheim hours” (at 700, 1400 and 2100 hours local mean time) to hourly readings? If so, does the graph use only the data taken at “Mannheim hours” for accurate comparison?
John M Reynolds
In 1970, very few of us had air conditioning. If we had A/C at home, it was a window unit. (Many of my friends slept outside during the summer.) We didn’t have A/C in our cars. Most people didn’t have A/C at work.
Today, we wake up in an air conditioned home, walk 10 feet and get in our air conditioned cars. We then walk through the parking lot (god, it’s hot out!) and into an air conditioned office or other workplace. It’s no wonder it seems hot out, because we’re no longer used to the normal summer heat.
KR says:
October 24, 2011 at 8:18 am
“Averaging raw temperatures (which vary hugely over short distances) rather than anomalies (which don’t – a mountaintop and a nearby pass/beach have different raw temperatures, but see roughly the same weather patterns).”
In fact, my method amounts to averaging anomalies rather than raw temperatures.
“Throwing out 90% of the temperature records, when even a quick examination shows 1/3 of stations with a negative trend, 2/3 with a positive trend, making any conclusions from 10% poorly supported.”
The 10% were selected not for some trend or for location but purely based on continuity. If you think that criterion meaningless, fine, I just don’t agree with you.
“For the correlation of nearby station anomalies and area weighting, I would recommend Hansen & Lebedeff 1987 …”
I’m not claiming to have calculated The One True Average Temperature Trend. The only point I make is that long-running stations trend differently from ones that are not, and for that I don’t need area weighting.
Thanks for playing.
Thanks, Michael. You also show that those without certified climate degrees, but with powers of observation and the tools of sharp pencils, can reach significant scientific conclusions. With a “problem” that is global, it amazes me that the claim that you need finely tuned technical backgrounds, million-dollar computers, and a statistician’s understanding of why it looks like A but is actually B is so easily accepted by the mainstream. Or was, anyway.
Your question about the large drop-off of stations concerns something I’ve never understood, either. You would think that a “problem” threatening the end of mankind/the biosphere would generate more, not less, field work. Yet as the problem became, in the warmist view, worse, the station count collapsed. I’m no stranger to the need to check only the right few to determine the course of the many (politicians as well as businessmen rely on such surveys), but reducing coverage at such a time seems very odd. Saving pennies when about to spend billions isn’t what would happen in any normal budgetary process. Again, the MSM doesn’t seem to find this peculiar.
Senator Inhofe said that AGW was the greatest scam of all; perhaps he was thinking like a man with common sense, seeing such things as the station count drop and saying the whole thing just didn’t make sense. I’d agree with that.
Frank Lansner says:
October 24, 2011 at 5:40 am
As James Woods said at the end of the film “Contact”… “Yes, that is interesting, isn’t it.”
The strength of the global warming narrative is in the satellite data beginning in 1979. There is little doubt that the data is reliable, accurate to the degree necessary, and coverage is near global and around the clock. The earth’s temp was rising between 1979 and 1999 and has leveled off since then.
But that’s not a long enough period of time for a “climate trend,” which is defined as 30 years of weather. Even 30 years is questionable for being long enough, because we know for a fact that there are climate cycles that go far beyond a mere 30 years. Interglacial periods, for instance, are on a cycle of 100,000 years. The AMDO (Atlantic Multi-Decadal Oscillation) is a 60-year cycle. In fact many of us believe that the past several decades are simply the warm side of the AMDO being measured by satellites and nothing more.
Michael Palmer
My apologies on the anomaly/averaging – re-reading your post I see I was incorrect on that.
The lack of area weighting and discarding of 90% of the data, on the other hand, are quite serious issues. As I stated in my previous post, given the limitations you have imposed on the data, I would be equally unsurprised by flat temperatures as by a temperature rise several times what is noted in any of the records. You have also used the raw data, rather than data adjusted for changes at the various stations (as in new thermometers and the like). That could change the data either up or down – but will inevitably add yet another source of error and variation, making your conclusions even less statistically supported.
Area weighting data simply allows you to use the other 90% of the available data.
“Thanks for playing” – Oh? You consider this a game?
Michael Palmer – Disclaimer: I am not a climate scientist and claim no expertise relevant to this subject other than basic arithmetic. In case I have overlooked equivalent previous work, this is due to my ignorance of the field
nuff said!
Wouldn’t a more accurate (and more computationally intense) method be to take the temperature deltas by months or even days and then average those together to get the average delta?
To clarify, if you went by days (or even specific times of day), you’d take all the January 1st readings, and calculate your base period and delta for that specific day. You’d then do that for every day in the year, and then calculate the year from there.
This would mitigate missing records from days or times by simply ignoring them.
There may still be bias, though, since temperatures may not be measured under certain weather conditions (i.e., very cold), but it would prevent any input of false signals.
To make it even more accurate (and challenging), you could recalculate the base period every time the station data is offline for more than a week or so. Essentially, if a station moves or equipment is changed/repaired, this may be reflected by a period of missing records. The logical thing to do would be to treat it as a completely separate station instead of comparing its data to the older data, roughly as in the sketch below.
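A rough sketch of that segment-splitting idea (illustrative only, with a made-up record format):

```python
# Rough sketch of the segment-splitting idea (illustrative only): break a
# station's record into independent pieces at any gap longer than ~7 days,
# so post-move/repair data is never chained to pre-gap data.

from datetime import date, timedelta

def split_at_gaps(readings, max_gap=timedelta(days=7)):
    """readings: list of (date, temperature) tuples sorted by date.
    Returns a list of segments, each treated as a separate station."""
    segments, current = [], [readings[0]]
    for prev, cur in zip(readings, readings[1:]):
        if cur[0] - prev[0] > max_gap:
            segments.append(current)  # close out the old "station"
            current = []
        current.append(cur)
    segments.append(current)
    return segments

# Example: a two-week outage splits one record into two segments.
r = [(date(2000, 1, d), 5.0) for d in range(1, 6)] + \
    [(date(2000, 1, d), 6.0) for d in range(20, 25)]
print([len(s) for s in split_at_gaps(r)])  # [5, 5]
```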
Dave Springer asks: Hi Ivor. Just curious about how cloud height is determined aboard ship.
Hi Dave,
We used the Mark 1 uncorrected eyeball, a large chart supplied by the Met Office, vast amounts of experience, and a lot of guesswork. As you know, the various types of clouds have different base levels, so the starting point was always to decide what type of cloud you were looking at. The low clouds were fairly easy to determine, but high clouds were largely a matter of looking at the type on a chart and then trying to see whether it had a positive base, i.e. at the lowest end of its height range, or a blurred baseline, possibly higher. In those days we did Met as part of the Board of Trade exams. I don’t recall a course called “staring at computer screens”; we actually had to look out of the bridge windows! Vertical sextant angle would not work, unfortunately, unless God dropped a plumb line to give a reference point between horizon and cloud base. Cloud cover was estimated by quartering the sky and using percentage estimates. You can understand my chagrin at the way our guesstimated data are now refined to multiple decimal points.
I too took the nuclear defence course. We were told to take cover in the shaft tunnel by an enthusiastic Lt Commander. When we pointed out the lack of such on a 200,000-ton tanker, we were told to turn our backs to the flash, close our eyes, bend over and… you can guess the rest.
Well, if you have ever had a look at the code that does the “adjustments”, I would guess that it gets very complex with a lot of stations. So if you reduce the number of stations, it becomes much faster/easier to do the “adjustments”. That could explain a reason for wanting to reduce the *number* of stations, but it doesn’t explain why the selection of stations for deletion was biased toward colder stations. Rural and high-altitude stations were chopped mercilessly, and not just in the US; the same goes for South America, too.
To address another question asked above, yes, these stations are by-and-large still reporting every day. They are available in many cases electronically over the Internet. The stations are still there, and the data are still there, GISS simply no longer uses them.
This is crazy when you consider that the three coastal stations now representing all of California in no way reflect the weather in, say, Bridgeport, California, which is at about 7,000 feet altitude and east of the Sierra Nevada. For example, the forecast temperatures in Bridgeport for Wednesday are a high of 47°F and a low of 14°F, while the forecast for San Francisco is a high of 70°F and a low of 56°F.
It certainly changes the “average” temperature of the state and requires a much greater degree of “adjustment” and does not take into account wind direction. San Francisco can warm considerably this time of year when the wind comes from the East and we get adiabatic warming from air dropping in altitude from the Sierra Nevada (like a “Chinook” wind).
A number of people are questioning why there is a post on this site arguing that the temps have not gone up when there is also widespread acceptance of warming during the same time period, and I think the problem is that this posting is not talking about “global” temperature, but the US.
I also got confused with the BEST figures, because they show such an increase since the 1930s when – in the US – these were as warm as the last decade, and I wondered why no-one was querying this. The fact is that – taken over the whole planet – records show an increase in average temperature since the 1800s, and although there are still arguments over how much, no-one on any side is really debating this. [That’s why Muller’s comments on the BEST analysis shooting down skeptics are a straw man argument.]
This post is talking about the US and considering the 20th century as a single chunk – partly to point out the effect of the changes in the number of stations on the rate of change. It is as much an exercise in methods of developing a long term record in the presence of missing data as it is a comment on actual temperatures, but this is an important point since we know there are problems with missing data.
It has been a very effective posting, because it has generated a lot of comments, some of which contain very interesting and useful information themselves. Excuse me for shouting, but THAT IS THE POINT OF A SCIENTIFIC BLOG. If all you want is confirmation of your existing opinion, go to a political blog site.
Thanks Michael for this analysis, thanks to Anthony for posting it (and much, much more) and thanks to the commentors who have read the post, thought about it and are providing some useful feedback and discussion.
Glenn Tamblyn says:
October 24, 2011 at 1:40 am
Skeptical Science was caught red-handed editing, post facto, an article in which a senior climate scientist was making critical comments. Not only did they treat a very civil senior scientist with expert credentials in the field poorly, they edited their own article afterward to make him look worse. This was proven beyond a shadow of a doubt by comparing archived versions of the article and commentary (at archive.org). SkS was busted beyond any doubt at all.
Anthony Watts does not want links to SkS appearing here because that raises SkS google rankings appreciably and they do not deserve the added page views that come with a higher search ranking. It’s not rocket science, it’s quite understandable, and it’s Anthony’s call to make.
Besides that if whatever point you were trying to make had any merit to it you wouldn’t need to rely on a single source for a reference. If SkS is the only source you have then it’s a moot point to begin with.
Yikes.
Dr Palmer’s paper appears to demonstrate that the settled science of warming is attributable solely to the “fudge factors” (data selection and corrections) typically applied to such work.
It does not address the question of whether or not these “fudge factors” are legitimate or justified but it surely begs for further examination of same.
I hope this work can be submitted for formal peer-reviewed publication so that the warmists are forced (shamed) into explaining why the “corrections” applied to their raw data just happen to be exactly equal to the warming trend they report. The usual hand waving is insufficient and the powerful presentation in this article makes that pretty darn obvious.
On the methodology… we all know that data selection is always dangerous. However, the particular selection used in this article, based on nothing more than the continuity of the station data, does seem perfectly justified and certainly raises some fascinating questions.
Beautiful paper!
The “lopping” off of the pre-1900s and post-2000s is a major influencing factor on the regressions, because the decreased period of time gives more weight to significant events occurring during the time period used for Figure 2, especially those significant events that took place in the first half of the century. Anomalies like the 1930s and 1950s droughts have a tendency to skew regressions negatively towards the end of the century because they were such major events temporally and spatially.
“KPO says:
October 24, 2011 at 2:17 am
I have this sense that there are parameters missing such as humidity,”
I understand where you are coming from. However, how much difference does it really make in the end? The percentage of water vapor in the atmosphere can vary from close to 0 to about 4%. Let us assume that in a dry year the humidity averages 1% and in a humid year it averages 3%. The specific heat capacity of air is 1.0. Let us assume the specific heat capacity of water vapor is 2.0. So if the air has 1% water vapor, the average specific heat capacity is 1.01, and if the air has 3% water vapor, the average specific heat capacity is 1.03. I know the molar mass of water is 18 and not 29, but if we just assume they are the same, then the mass of the atmosphere with 3% water vapor is 2% larger than with 1% water vapor. (I am also generously assuming water vapor exists evenly throughout the atmosphere and does not condense out.) Then applying m·c·ΔT(moist air) = m·c·ΔT(dry air), we find that the m·c for the moist air is 4% larger than for dry air. So to balance things out, the dry air has to have a temperature change that is 4% larger than the moist air. In other words, if moist air goes up by 1.00 degrees C, the dry air, with the same energy input, would go up by 1.04 degrees C. So I would say the difference is very small. Perhaps the error bars need to be made just a wee bit larger to account for the unknown average humidity values? Note that I am not addressing phase changes that may occur due to humidity, which is a separate topic.
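Restating that back-of-the-envelope arithmetic as a few lines of Python (using exactly the assumptions above, none of them measured values):

```python
# The back-of-envelope humidity arithmetic above, restated in code.
# All numbers are the assumptions stated in the text, not measurements.
c_dry, c_vap = 1.0, 2.0                      # assumed specific heat capacities

c_dry_year = 0.99 * c_dry + 0.01 * c_vap     # 1% water vapour -> 1.01
c_wet_year = 0.97 * c_dry + 0.03 * c_vap     # 3% water vapour -> 1.03
mass_ratio = 1.02                            # moist atmosphere assumed 2% heavier

# Equal energy input: m*c*dT(moist) = m*c*dT(dry), hence
# dT(dry) / dT(moist) = (m*c)(moist) / (m*c)(dry)
print(round(mass_ratio * c_wet_year / c_dry_year, 3))  # ~1.04, i.e. about 4%
```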
Rob Potter says:
October 24, 2011 at 9:03 am
“I also got confused with the BEST figures, because they show such an increase since the 1930s when – in the US – these were as warm as the last decade, and I wondered why no-one was querying this. The fact is that – taken over the whole planet – ”
Rob, the fact is that there IS NO TEMPERATURE record for the whole planet. Period. Even today there are vast areas missing inside the arctic and antarctic regions because the satellites don’t have a view into them.
The southern hemisphere was almost a complete unknown, with virtually no instrumental temperature record until well into the 20th century. Adding insult to injury, there is almost no coverage for the entire continent of Asia until well into the 20th century and virtually none for any of the world’s oceans except in shipping lanes.
To pretend the situation is different is an outright lie. There IS NO RELIABLE GLOBAL instrumental record pre-dating the satellite era, period. End of story.
Stephen Wilde says:
“From our perspective there seems to be a warming or cooling at the recording sites when averaged out overall but in reality all that is being recorded is the rate of energy flow past the recording sites as the speed of energy flow through the system changes in an inevitable negative response to a forcing agent whether it be sun, sea or GHGs. In effect the positions of the surface sensors vary in relation to the position of the climate zones and they record that variance and NOT any change in energy content for the system as a whole.”
A most welcome summary of the starting point that is a temperature record. For many, a temperature record is the end of the thought process but it really is the beginning. Thanks for reminding us what these data points actually mean!
Has USHCN-M any data worth looking at yet?
Michael Palmer’s article is an important one and we need to focus on the Big Picture. The article introduces two topics, one about station quality and the other about calculation. The station quality topic is logically prior to the other topic and raises the very important question about station quality and the empirical evidence for it.
Palmer describes the stations reporting continuously since 1900 as follows:
“Here, I selected for stations that had continuously reported at least one monthly average value (but usually many more) for each year between 1900 and 2000. This criterion yielded 335 rural stations and 278 non-rural ones.”
Graphing these stations, he concludes that:
“While the sequence and the amplitudes of upward and downward peaks are closely similar to those seen in Figure 2, the trends for both rural and non-rural stations are virtually zero. Therefore, the average temperature anomaly reported by long-running stations in the GISS dataset does not show any evidence of long-term warming.”
From Palmer’s observations, we need to ask what we can infer about the stations. I suggest that the most important and telling inference that can be drawn is that these stations have been well managed. The stations that do not fall into the category of “well managed” can then be graded on various levels of “poor management.” The levels of poor management can be determined by searching for causes of gaps or bumps and similar matters. (Bumps occur when there is a sudden, large, and sustained shift in the temperatures reported.)
I emphasize poor management for a very important reason. The only reasonable inference that can be made about stations with numerous gaps and bumps is that the readings that come from them are flaky. Yes, flaky, as in usually inaccurate and maybe in several different ways. The inferences that Warmista want to draw at this point are that errors offset one another, that errors are one-time shifts that do not affect trends, that surrounding stations are not flaky, and so on. Obviously, none of those inferences are justifiable without the results of empirical research done on the ground. Because Warmista adamantly refuse to engage in such empirical research, they are making wholly unjustified assumptions.
For the last thirty years, Anthony Watts and others have gathered information about siting which could explain many gaps and bumps and which could be used in grading poor management. Watts’ factual information goes far beyond what has been described here.
When cornered, the Warmista response is that all of these empirical matters are unimportant because their incredibly sophisticated statistical techniques enable them to compensate for all flakiness in all weather station records. The breathtaking boldness of this claim makes it highly suspect. It raises the question of whether Warmista could specify any degree of flakiness that could not be accommodated within their statistical techniques. (Please note that questions of calculation are separate from and can be in conflict with empirical knowledge of stations.)
The practical conclusion of all this is that the records of well managed weather stations should be privileged over those of poorly managed stations in calculations of average temperatures. Palmer’s claim that the well managed stations show no temperature trend at all should be the accepted baseline among climate scientists and deviations from it should require justification from empirical research about particular poorly managed stations.
Matt says:
The “lopping” off of the pre-1900s and post-2000s is a major influencing factor on the regressions…
A point that was fully addressed in the paper.
****
Frank Lansner says:
October 24, 2011 at 5:40 am
Thanks for the comment. Yes, the time of observation… it’s amazing.
So across the world, from country to country, culture to culture, and continent to continent, with thermometers NOT meant for climate purposes, just to tell people their local temperatures, we have this synchronous TOBS.
Everywhere, the time of observation has systematically been changed in one direction, producing temperature data that read too cold and “must” be corrected massively.
****
Technically, TOBS is a legit correction, but then how could corrections to so many stations (thousands) produce a TOBS adjustment so lopsidedly upward? One would think that such a correction, applied globally over so many stations, would end up nearly random – near zero. And the TOBS correction isn’t even computed from each individual station’s data; it’s done by a TOBS “model” (algorithm).
I assume TOBS “models” are as trustworthy as climate models, until shown otherwise.
Ivor Ward says:
October 24, 2011 at 8:55 am
Interesting. I’d have thought you could simply measure the amount of sky showing between horizon and bottom of cloud deck. The distance to the horizon at sea should be pretty constant with possibly some adjustment needed for height of the ship’s deck above the waterline which would let you see some distance further than line of sight from waterline.
In NBC school we didn’t have sextants. In order to determine the height of the mushroom cloud we used “thumb widths”, i.e., hold your arm straight out with thumb horizontal and count the number of thumb widths from the ground to the top of the mushroom cloud. IIRC, each thumb width is about 5 degrees. With a distance estimate taken from the time between flash and sound, you have the length of one side and two angles (including the 90-degree angle at the base of the mushroom cloud) of a right triangle, which is sufficient data to solve for the lengths of the other sides. Exactly the same thing -should- work at sea to measure the height of the cloud deck, although on a rolling ship it might be quite difficult counting thumb widths! Maybe beyond difficult, as I’ve never tried anything like that and have virtually zero time spent on any ships at sea. I’ve been in all kinds of planes and helicopters, all kinds of watercraft on inland waters, and all sorts of land vehicles, but only a couple of half-day ocean fishing trips for my maritime experience – enough to know I don’t get seasick in modest swells, but that’s about it.
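For what it’s worth, that triangulation boils down to one line of trigonometry; a sketch under the stated assumptions (about 5 degrees per thumb width, sound at roughly 343 m/s near sea level):

```python
# Sketch of the thumb-width triangulation (assumptions as stated above:
# ~5 degrees per thumb width, speed of sound ~343 m/s near sea level).
import math

def cloud_top_height_m(thumb_widths, flash_to_sound_s,
                       deg_per_thumb=5.0, speed_of_sound=343.0):
    ground_range = speed_of_sound * flash_to_sound_s       # metres to the blast
    elevation = math.radians(thumb_widths * deg_per_thumb) # elevation angle
    return ground_range * math.tan(elevation)              # opposite side of the triangle

# e.g. 4 thumb widths (~20 degrees) and a 30 s flash-to-bang delay:
print(round(cloud_top_height_m(4, 30)))  # ~3745 m
```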