Extremes Of Temperature Decreasing In The US

Guest essay by Paul Homewood

WUWT carried the story yesterday of the paper by Kodra & Ganguly, forecasting a wider range of temperature extremes in the future.

According to the Northeastern University press release, using climate models and reanalysis datasets, the authors found that

While global temperature is indeed increasing, so too is the variability in temperature extremes. For instance, while each year’s average hottest and coldest temperatures will likely rise, those averages will also tend to fall within a wider range of potential high and low temperature extremes than are currently being observed.

But is there any evidence that this has been happening? We can check what’s been happening in the US, by using the US Climate Extremes Index, produced by NOAA.

Of course, the US only accounts for 2% of the Earth’s surface (except when there is a polar vortex, a mild winter or a drought in California), but it seems a sensible place to start. We also know that climate models often bear very little resemblance to reality!

Just to recap, the US Climate Extremes Index, or CEI, is based on an aggregate set of conventional climate extreme indicators which, at the present time, include the following types of data:

  1. monthly maximum and minimum temperature
  2. daily precipitation
  3. monthly Palmer Drought Severity Index (PDSI)
  4. landfalling tropical storm and hurricane wind velocity.

In terms of temperature, the CEI uses two indicators:

  1. The sum of (a) percentage of the United States with maximum temperatures much below normal and (b) percentage of the United States with maximum temperatures much above normal.
  2. The sum of (a) percentage of the United States with minimum temperatures much below normal and (b) percentage of the United States with minimum temperatures much above normal.

So, for instance, we can plot maximum temperatures during summer months:

 

[Graph: US Climate Extremes Index, maximum temperatures, summer (June–August)]

http://www.ncdc.noaa.gov/extremes/cei/graph/1/06-08

 

And, minimum temperatures in winter:

 

[Graph: US Climate Extremes Index, minimum temperatures, winter (December–February)]

http://www.ncdc.noaa.gov/extremes/cei/graph/2/12-02

 

The reds indicate the percentage of the US that was “much above normal”, and the blues the percentage “much below normal”. The CEI also lists the actual percentages, so we can plot the “much aboves” in summer, and the “much belows” in winter, thus:

 

[Charts: percentage of US area “much above normal” in summer, and “much below normal” in winter]

 

The trend is toward an increasing percentage with above-average summer temperatures, although recent years seem to be at similar levels to the 1930s. (The CEI is based upon adjusted temperatures, before anyone asks.)

In winter, though, the trend is decreasing.

We can now combine the summer and winter sets together.

[I have simply added together the percentages, although of course some areas could have experienced both hot and cold – think of it as an index].

[Chart: combined summer “much above normal” and winter “much below normal” percentages]
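The combination just described is simple enough to sketch in code. A minimal Python illustration, using made-up yearly percentages rather than real CEI values:

```python
# Hypothetical CEI-style area percentages (% of US area), NOT real NOAA values.
summer_much_above = {1934: 40.0, 1936: 45.0, 1970: 2.0, 2012: 38.0}
winter_much_below = {1934: 1.0, 1936: 30.0, 1970: 5.0, 2012: 0.5}

# Year-by-year sum; the same area can appear in both terms, so the result
# can exceed 100% in principle -- treat it as an index, not an area.
combined = {
    year: summer_much_above[year] + winter_much_below[year]
    for year in summer_much_above
}
print(combined[1936])  # 75.0
```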

Clearly, the overall trend is for temperature extremes to decline. In other words, the area of the US experiencing unusually high or low temperatures is tending to shrink. (Although it is interesting to note the relative absence of such extremes in the years around 1970.)

Of course, although this analysis tells us about the area of the country affected, it does not say anything about how extreme the temperatures are. But we can check this very simply, using the NCDC Climate At A Glance datasets.

The graph below shows the difference each year between winter and summer temperatures for the country as a whole, along with a 10-year average. As can be seen, the variation from winter to summer has been getting smaller in recent years.

The most extreme year was 1936, when the hottest summer on record (even after adjustments) followed the second coldest winter. I wonder how their models account for that?
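For anyone reproducing the smoothed curve, one reasonable choice is a centered running mean, which keeps features like the 1936 peak aligned with the smoothed line. A short Python sketch with made-up values (the real series comes from Climate At A Glance):

```python
def running_mean(values, window=10):
    """Centered running mean; None where the window doesn't fit.
    Centering avoids the half-window lag a trailing running mean introduces."""
    half = window // 2
    out = []
    for i in range(len(values)):
        lo, hi = i - half, i + half
        out.append(sum(values[lo:hi]) / window if lo >= 0 and hi <= len(values) else None)
    return out

# Hypothetical summer-minus-winter differences in deg F, one value per year:
diffs = [40.1, 39.8, 41.0, 38.5, 39.2, 40.4, 38.9, 39.5, 40.0, 39.1, 38.0, 37.5]
smoothed = running_mean(diffs)
```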

 

[Chart: difference between winter and summer mean temperatures, contiguous US, with 10-year average]

http://www.ncdc.noaa.gov/cag/

 

FOOTNOTE

 

NOAA offer this definition of how they calculate their index:

 

 

The U.S. CEI is based on an aggregate set of conventional climate extreme indicators which, at the present time, include the following types of data:

  1. monthly maximum and minimum temperature
  2. daily precipitation
  3. monthly Palmer Drought Severity Index (PDSI)
  4. landfalling tropical storm and hurricane wind velocity*

* experimental (not used with the Regional CEI)

Each indicator has been selected based on its reliability, length of record, availability, and its relevance to changes in climate extremes.

Mean maximum and minimum temperature stations were selected from the U.S. Historical Climatology Network (USHCN) (Karl et al. 1990). Stations chosen for use in the CEI must have a low percentage of missing data within each year as well as for the entire period of record. Data used were adjusted for inhomogeneities: a priori adjustments included observing time biases (Karl et al. 1986), urban heat island effects (Karl et al. 1988), and the bias introduced by the introduction of the maximum-minimum thermistor and its instrument shelter (Quayle et al. 1991); a posteriori adjustments included station and instrumentation changes (Karl and Williams 1987). In April 2008, maximum and minimum temperature data from the USHCN were replaced by the revised USHCN version 2 dataset. In October 2012, a refined USHCN version 2.5 was released and replaced version 2 data for maximum and minimum temperature indicators.

 

 

The U.S. CEI is the arithmetic average of the following five or six* indicators of the percentage of the conterminous U.S. area:

 

  • The sum of (a) percentage of the United States with maximum temperatures much below normal and (b) percentage of the United States with maximum temperatures much above normal.
  • The sum of (a) percentage of the United States with minimum temperatures much below normal and (b) percentage of the United States with minimum temperatures much above normal.

 

 

In each case, we define much above (below) normal or extreme conditions as those falling in the upper (lower) tenth percentile of the local, period of record. In any given year, each of the five indicators has an expected value of 20%, in that 10% of all observed values should fall, in the long-term average, in each tenth percentile, and there are two such sets in each indicator.
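To make the percentile definition concrete, here is a small Python sketch (the record is hypothetical; NOAA computes these thresholds per location over its own period of record):

```python
def decile_thresholds(record):
    """Return (lower, upper): the top of the bottom tenth percentile and
    the bottom of the top tenth percentile of a period of record."""
    ordered = sorted(record)
    n = len(ordered)
    lower = ordered[max(0, n // 10 - 1)]
    upper = ordered[-max(1, n // 10)]
    return lower, upper

record = list(range(1, 101))          # 100 hypothetical annual values
lower, upper = decile_thresholds(record)
much_below = [v for v in record if v <= lower]   # "much below normal" values
much_above = [v for v in record if v >= upper]   # "much above normal" values
# By construction ~10% of the record lands in each tail, matching the 20%
# expected value per indicator (two tails per indicator).
```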

A value of 0% for the CEI, the lower limit, indicates that no portion of the period of record was subject to any of the extremes of temperature or precipitation considered in the index. In contrast, a value of 100% would mean that the entire country had extreme conditions throughout the year for each of the five/six indicators, a virtually impossible scenario. The long-term variation or change of this index represents the tendency for extremes of climate to either decrease, increase, or remain the same.

 

The index is built up from the Climate Divisional Database, and therefore reflects the area of the US, rather than a simple percentage of stations.

 

54 COMMENTS
D Long
July 31, 2014 1:03 pm

See, your mistake here is you used the real data. To get the right answer you’re supposed to use a model.

rogerknights
July 31, 2014 1:16 pm

TYPO: A “not” should be added to:

Of course, although this analysis tells us about the area of the country affected, it does say anything about how extreme the temperatures are.

DrTorch
July 31, 2014 1:17 pm

Someone should have presented this at Kodra’s dissertation defense.

KNR
July 31, 2014 1:21 pm

May I remind you of the first rule of climate ‘science’: where the models and reality differ in value, it is reality which is in error. So you see, no problem here.

PeterK
July 31, 2014 1:25 pm

Question: Are the thermostats that are located throughout the U.S. a good representation of the data collected? Let’s say you have 100 thermometers in areas where temperatures are usually hotter and only 50 thermometers in areas where temperatures are just hot or to the cooler side of things, how is something like this smoothed out for good average data? Or am I being silly asking this question?

July 31, 2014 1:28 pm

Here’s the average Daily Rising temp/Nightly falling temp for the US, based on almost 25 Million samples
YEAR RISING FALLING SAMPLE
1940 20.50393484 20.47222222 11970
1941 19.53602778 19.60376813 16268
1942 20.62009818 20.88718292 42576
1943 21.78958025 21.82869697 85002
1944 20.94628723 20.92561388 94929
1945 20.02836995 20.00437425 106144
1946 19.47600231 19.52145536 60535
1947 19.45848696 19.43814313 64334
1948 20.55756832 20.64052146 169562
1949 20.93780081 20.92489737 226056
1950 21.01726485 21.01279422 233793
1951 20.78761791 20.80262304 235986
1952 20.91286519 20.92834733 241349
1953 21.14826316 21.16224432 239458
1954 21.18784382 21.17370699 234317
1955 20.3439529 20.35577617 188611
1956 20.67196148 20.70659344 192100
1957 19.3519177 19.34870139 195286
1958 19.61351195 19.61132888 194909
1959 20.1410413 20.13474709 185345
1960 19.92417841 19.93018531 187684
1961 20.08189199 20.09685848 186470
1962 20.36645334 20.39177643 185163
1963 21.0058528 21.01024281 185535
1964 20.76135838 20.72739444 184793
1965 19.27401642 19.30817235 170881
1966 19.57937553 19.6218895 169008
1967 19.55240118 19.56793839 169271
1968 19.51440729 19.51537004 170969
1969 18.95651795 18.93003303 166540
1970 19.54189321 19.53865825 162444
1971 19.26583696 19.25906808 137157
1972 18.76846915 18.75793202 132835
1973 20.02816604 20.42532869 294958
1974 21.24096576 21.38641482 294917
1975 20.49408906 20.62964706 301018
1976 21.83571708 22.10139707 320099
1977 20.91972511 21.06568956 328282
1978 20.29562287 20.60190881 335288
1979 20.46592403 20.53014617 331747
1980 20.8307819 20.97933053 332650
1981 20.48358197 20.88219905 328797
1982 19.83992645 20.02764071 332003
1983 19.15595028 19.3837069 345723
1984 19.85890881 20.08549393 360525
1985 20.03335094 20.14719434 372943
1986 19.50725367 19.816422 383930
1987 20.06616619 20.32877043 385632
1988 21.02669699 21.227329 391606
1989 20.56740949 20.64185731 395250
1990 20.418652 20.57721881 398649
1991 17.98420449 18.60500473 394707
1992 17.90495645 18.41617037 416750
1993 18.2091921 18.5470867 434645
1994 20.14231631 20.5312268 436340
1995 18.35030187 18.92208783 437938
1996 18.85282218 19.05290403 430075
1997 18.78094141 18.90643062 436832
1998 19.67155262 19.71053558 450407
1999 21.58228053 21.65410029 487966
2000 21.48298772 21.58884048 508221
2001 21.45373636 21.56394752 520225
2002 21.14346663 21.15276393 569660
2003 21.05350293 21.1376414 577460
2004 20.4428097 20.45503992 621845
2005 20.52533899 20.57531986 715948
2006 20.13166306 20.17761288 758360
2007 19.92639519 19.98142808 792546
2008 20.12080306 20.19714083 825239
2009 19.58897239 19.61750471 862979
2010 19.56543667 19.61219247 895938
2011 20.14376982 20.1788869 881756
2012 20.91335563 20.98746102 884833
The overall average of all years:
9999 20.14606499 20.25408602 24801967
The average doesn’t look like it’s changed much, nor what I’d call having a trend.

July 31, 2014 1:29 pm

Convergence. (sorry for the drive by, couldn’t resist.)

July 31, 2014 1:37 pm

PeterK says:
July 31, 2014 at 1:25 pm

Question: Are the thermostats that are located throughout the U.S. a good representation of the data collected? Let’s say you have 100 thermometers in areas where temperatures are usually hotter and only 50 thermometers in areas where temperatures are just hot or to the cooler side of things, how is something like this smoothed out for good average data? Or am I being silly asking this question?

Not a silly question at all, just all bad answers.
Thermometers are where people are (were), and we didn’t populate the US based on a grid. So they are not distributed in a pattern meant to do a good job at measuring the US’s temp.
What you do about that is subject to much discussion, I tend to think it’s better to use all of the data, unevenly sampled and all. Others (BEST, NASA, GISS, CRU) use the measurements to create a “field” of temperature, then average this field. IMO this field is too abstract, displaying a value in places where no measurements ever existed.
Far too easy to make your Surface Temp Model meet your expectations.

TRG
July 31, 2014 1:39 pm

This seems like a flawed analysis. Summer extremes are based on high temperatures and winter extremes are based on low temperatures. If the winter extremes are declining, that’s an indication of warming. The decline in winter lows cannot be averaged with the increase in summer highs to produce anything meaningful in terms of climate.

July 31, 2014 1:41 pm

Annual hurricane numbers above average are often included among extreme weather events.
Hurricane occurrence is closely related to the AMO (Atlantic Multidecadal Oscillation).
Since the AMO cycle appears on the verge of its down-slope, hurricane frequency is expected to fall too. Arctic atmospheric pressure appears to confirm a forthcoming down-trend in both the AMO and the number of (near-future) hurricane events.

kenin
July 31, 2014 1:41 pm

I would love to comment, but legally I can’t, because The Nature Conservancy persuaded me into signing a gag easement… so I lost my right to voice my opinion.

mpainter
July 31, 2014 1:45 pm

This is a good example of how easily empirical data clobbers the vaporous theories of the global warmers.

July 31, 2014 2:15 pm

“IMO this field is too abstract, displaying a value in places where no measurements ever existed.”
Silly.
The field is a prediction.
Suppose you have 40000 stations.
you build the Field using 5000
Then you test how well that field predicts the 35000 you held out.
Second
Now after you prove that the predicted field works (using hold out data ) you then
build a field using 40000 stations..
AND THEN
you go into the archives and you RESCUE old data that has never been digitized
and you have MORE out of sample data
In short there are places where measurements existed, but that data is on paper. So you can test your prediction when you find NEW data.
Finally you can ( as one guy is doing now ) go to old stations that stopped recording
and place new measurement instruments there. And we can test the methods going forward.
this will be some very interesting data given the region.
make predictions. test them.
If there is one thing I would like to hammer home to everybody it would be this
1. There is no “average” of past temperatures. all the groups create a field. This field is a PREDICTION of what would have been recorded.
2. The prediction will never be perfect
3. you test the prediction in three ways
A) hold data out when you build the field
B) test your field against hold outs
C) continue to recover old data and add new stations.
all the methods for creating these predictions will have issues. If you live in the illusion that there is a historical truth that you can recover you will always be frustrated. The best you get is a prediction of what you think the past was based on the present evidence you have of the past.
there are ways to test this prediction. do that.
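The hold-out test described above can be sketched generically. The field builder below is a deliberately simple inverse-distance-weighted interpolator standing in for whatever a group actually uses (BEST’s real method is a kriging-style fit, not this), and the stations and temperatures are entirely synthetic:

```python
import random

def idw_predict(known, x, y, k=5):
    """Estimate the value at (x, y) as an inverse-distance-squared weighted
    average of the k nearest known stations (x, y, value)."""
    nearest = sorted(known, key=lambda s: (x - s[0]) ** 2 + (y - s[1]) ** 2)[:k]
    num = den = 0.0
    for sx, sy, sv in nearest:
        d2 = (x - sx) ** 2 + (y - sy) ** 2
        if d2 == 0.0:
            return sv          # exact hit on a known station
        num += sv / d2
        den += 1.0 / d2
    return num / den

random.seed(0)
# Synthetic "truth": temperature falls smoothly with a latitude-like coordinate.
stations = [(random.random(), random.random()) for _ in range(200)]
truth = {(x, y): 30.0 - 20.0 * y for x, y in stations}

fit = [(x, y, truth[(x, y)]) for x, y in stations[:150]]   # build the field
held_out = stations[150:]                                  # never shown to the fit
errors = [abs(idw_predict(fit, x, y) - truth[(x, y)]) for x, y in held_out]
mean_abs_error = sum(errors) / len(errors)
```

Because the synthetic field is smooth, the held-out error comes out small; the point of the harness is that the same held-out score would expose a method that merely memorizes its input stations.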

Reply to  Steven Mosher
July 31, 2014 2:30 pm

Steven Mosher commented

In short there are places where measurements existed, but that data is on paper. So you can test your prediction when you find NEW data.

Do you compare the number on that paper to your prediction for the papers date, or do you have to make adjustments to it first? And how do you log the error?

If you live in the illusion that there is a historical truth that you can recover you will always be frustrated.

I think there are historical measurements; that’s all the truth we have.

July 31, 2014 2:18 pm

Wayne Delbeke says:
July 31, 2014 at 1:29 pm
Convergence. (sorry for the drive by, couldn’t resist.)
——
Regression to the Mean

Jonathan L
July 31, 2014 2:19 pm

Why would you only look at extreme lows for winter and extreme highs for summer? You need to look at extreme highs and lows for both summer and winter. I don’t think this is a good test. I bet you’d find the same applying it to both summer and winter, but excluding them seems to show only half the picture.

DD More
July 31, 2014 2:19 pm

The most extreme year was 1936, when the hottest summer on record (even after adjustments) followed the second coldest winter. I wonder how their models account for that?
Easy. The difference is that the most important GHG was in short supply. That would be water vapor, and water in general. It was the height of the dust bowl, where the ground got no evaporative cooling and in winter there was nothing to slow night-time IR loss.

Jimbo
July 31, 2014 2:24 pm

Yet they keep on assuring me that my eyes are lying, and that weather / climate in the US of A is actually getting more extreme.
John Holdren
http://www.breakingnews.com/item/2014/05/06/white-house-science-adviser-john-holdren-says-clim/

Steve Oregon
July 31, 2014 2:29 pm

I am so sick of the Climate Conjecturologists.
“While global temperature is indeed increasing”? Not really.
“So too is the variability in temperature extremes”? Apparently not.
“Each year’s average hottest and coldest temperatures will likely rise”?
“Will likely rise”? When? After a lengthy hiatus? Or likely not?
“Those averages will also tend to fall within a wider range than are currently being observed”?
“Tend to”? What the heck is that supposed to mean? Either they will or they won’t.

July 31, 2014 2:33 pm

Mr. Mosher says…
Is it really necessary to start with “Silly”?
Civil conduct is something one usually learns in grade school. Time to put climate aside and work on more basic skills. Make the great leap from child to adulthood please.

mpainter
July 31, 2014 2:36 pm

TRG:
The study posited that temperature extremes, both high and low, were on the increase. This was refuted by the post with empirical data. It was not an issue of warming or not.

JJM Gommers
July 31, 2014 2:43 pm

It confirms the warming trend, on the other hand assuming the same relative humidity there is a substantial difference between the max and minimum temperature.

Reply to  JJM Gommers
July 31, 2014 2:54 pm

JJM Gommers commented

It confirms the warming trend, on the other hand assuming the same relative humidity there is a substantial difference between the max and minimum temperature.

Doesn’t look like the same method of measuring rain was used prior to 1973–1975; there was a large discontinuity in station sample counts then (I don’t know why this is).
YEAR RELH RAIN
1940 65.73307302 2.862918123
1941 67.43702257 3.750997414
1942 64.56667054 1.801965941
1943 61.69952112 2.292016192
1944 64.81837934 2.440795184
1945 66.49475955 2.854816683
1946 67.14167375 4.313983141
1947 66.56546188 5.051973176
1948 66.21899474 3.6774646
1949 65.66566384 3.18632797
1950 65.29381852 3.426637066
1951 65.4477694 3.620850651
1952 63.64801599 3.138566762
1953 63.10479051 3.260858562
1954 62.8811022 3.075652787
1955 63.64037453 4.064352034
1956 62.97443738 4.04604431
1957 66.5868783 5.21168486
1958 65.61188795 5.641995114
1959 64.75258125 5.477060728
1960 64.89406063 5.077035995
1961 64.83817198 5.121213938
1962 64.27947343 5.045563102
1963 62.71922655 4.463527918
1964 62.73616448 4.682886409
1965 64.58225669 3.098533825
1966 63.36422396 3.04735061
1967 63.7762625 3.158226583
1968 63.59051314 3.133923096
1969 64.9212314 3.169604752
1970 63.75464292 2.770137235
1971 64.08985445 5.035354955
1972 65.50917593 5.814275488
1973 66.03885706 54.67717893
1974 64.79188912 33.68853919
1975 65.81995263 33.5236337
1976 62.44187927 27.38660259
1977 63.31710615 27.70439865
1978 64.94271182 26.55442439
1979 65.06118666 25.72635776
1980 63.42987421 20.35689145
1981 63.94709412 21.48338516
1982 65.32446474 24.54734504
1983 65.6743313 25.43160239
1984 64.57100098 21.77386744
1985 64.28054331 20.47517287
1986 65.32372484 19.67176038
1987 63.48928085 18.50520612
1988 61.26936401 17.05257372
1989 63.54917543 19.60258364
1990 63.51614209 21.54539251
1991 64.59321421 20.75344794
1992 65.57367346 20.69746662
1993 65.63519036 21.74581632
1994 64.39591741 21.66006949
1995 65.17763212 16.70661974
1996 65.55042502 18.65793782
1997 66.73046193 18.81543757
1998 67.91632324 21.09147663
1999 64.89698447 21.11487886
2000 66.68246655 21.44845463
2001 67.32907498 19.73568207
2002 66.68873101 22.57551686
2003 67.55623033 25.15722909
2004 68.38129822 22.99680479
2005 66.59322945 21.82989059
2006 64.68054743 21.61713779
2007 65.17790934 20.51624765
2008 65.47567241 21.09059053
2009 66.54103838 21.81812305
2010 66.29844316 20.0408544
2011 65.31661863 18.42620844
2012 63.86552013 18.20600982
9999 64.94771665 5219.689742

July 31, 2014 2:48 pm

Steven Mosher says:
July 31, 2014 at 2:15 pm
“If you live in the illusion that there is a historical truth that you can recover you will always be fustrated. The best you get is a prediction of what you think the past was based on the present evidence you have of the past.”
I don’t think you understand the real problem. Measuring temperature is not the same as measuring energy flux, which is what you are really trying to do. Averaging to produce an anomaly will show a cooler past than the present.
Simply taking the highs or lows and comparing them produces a much more accurate ‘prediction’ of the past than creating anomalies. Try it, it isn’t that hard 🙂

Lil Fella from OZ
July 31, 2014 2:49 pm

What you do is rewrite history to suit your objective! Yep, 1984 all over again!

July 31, 2014 3:07 pm

Percent of CONUS Area….
Are the thermometers evenly distributed across the CONUS? No.
Let us not forget there is a UHI effect and a Zombie effect to deal with.
Both effects tend toward the increase in extremes of heat and reductions in the extremes of cold.
Let’s start with the Zombie effect. In recent years, up to 45% of the stations which contribute to the CONUS area have been discontinued and infilled, i.e. fabricated, by using surviving stations “nearby” according to some regional homogenization. What the radius of this influence is, I don’t know, but I think we need to know.
Also over the century, almost every existing station has experienced UHI influence, as population rises, energy use increases, streets are paved and widened, buildings built, and irrigation use increases.
Not all stations suffer the same degree of UHI. I argue that the Zombie stations, the ones shut down, are disproportionately rural, low-UHI-affected stations. But even if Zombie stations were evenly distributed across the UHI influence, are the surviving stations used to infill the Zombies coming from a higher average UHI effect than the stations that were discontinued? Is the net result that the Zombie effect is magnifying the UHI problem? That is the way I’ll bet.

NikFromNYC
July 31, 2014 3:35 pm

Mosher’s folly revealed: “If you live in the illusion that there is a historical truth that you can recover….”
Those are the deconstructionist words of French philosophy, not the words of science, spoken by a trained deconstructionist English major, in other words a historical revisionist. But there *is* a historical truth, exclaimed the little boy.

Keith
July 31, 2014 3:40 pm

All I’ve read so far has led me to think the following about climate and radiative gases:
– The primary effect of radiative gases is to smear heat around the atmosphere and resist large changes in air temperature. This is why humid tropical areas have little temperature variation across the day, while deserts have massive diurnal variation. It also explains the minimal temperature variation in the atmosphere of Venus despite its remarkably slow axial rotation.
– There is also an impact of changes in radiative gas levels on global average temperatures, but it appears to be trivial compared to the smoothing and smearing effects. Small changes in cloud levels appear to have as much influence on global average temperatures as large changes in radiative gas levels, if not more.
– Greater warming in the Arctic than elsewhere makes sense on this basis, but the different situation in the Antarctic suggests that changes in radiative gas levels aren’t the dominant driver of global temperature change. The reverse may be true. It appears reasonable, though, to state that man’s CO2 emissions have played a significant part in the increases of the last century.
– You would expect that, ceteris paribus, this would lead to higher minimum temps and lower maximum temps, with little change to ‘averages’. UHI, land use changes, data quality and confirmation bias impacting historic temperature adjustments may be causing a small exaggeration of this average temperature increase.
I’m not convinced by the analysis shown in this piece, but it certainly makes more sense than the gloom-and-doom political statements, sometimes masquerading as scientific studies, that seem designed to scare people into thinking a world with more greenhouse gases means more extremes of weather and much higher average temperatures. Fewer extremes appear more logical to expect, while empirical data suggests modest changes to global averages.

plowboy55
July 31, 2014 3:51 pm

Not sure if I have a clue… but here I go. Are these temps representative of an associated area, on a Thiessen polygon system like you would use for rainfall? An example would be: do we have two records for all of North Dakota and eastern Montana that get the same weight as two recordings near New York City? Then you figure the same polygon system and you have eight million people thinking it is hot in NYC and 700,000 of us freezing. There could be some inconsistencies in perception.

Bill_W
July 31, 2014 4:05 pm

Extreme weather is extreme weather. We care about it as it can be dangerous. That would be the point of preventing it by trying to cut CO2. It makes no sense to call a mild winter “extreme weather”. It is less dangerous, not more.

Bill_W
July 31, 2014 4:15 pm

Also the definitions of “extreme” are: reaching a high or the highest degree; very great. or furthest from the center or a given point; outermost. And to be extreme in the sense of a warm winter day, you would need to be further from the long-term mean in the warm direction than very cold days are in the cold direction. In a place like St. Louis, for example, say the average winter day is 40 C. They may have cold records of -10 C. So to be as extreme you would need to have a winter day that was 90 C. At least that’s one way of looking at it. Having a trend that winters are getting warmer by a few tenths of a degree per decade is NOT extreme.
At least if you want to use English as defined. But if you choose to use your own version of English where the words have special meanings I think it is only fair to let us know that in advance.
These two posts are directed at a few people above who want to count warm winters as extreme.

Keith
July 31, 2014 4:42 pm

plowboy55 says:
July 31, 2014 at 3:51 pm
———————————————
The temperature is recorded at numerous points and then ‘gridded’, so that what you suggest regarding undue weighting to lots of stations in a small area is minimised (but there are other problems, aplenty). You bring up a good point about perception though.
If a greater and greater % of the population live in cities, with those cities growing larger, two things happen. First, larger cities create a larger heat-island effect, whereby all the concrete and tarmac infrastructure soak up solar heat during the day and release it during the night, making the nights and average temperatures hotter. Second, more people than before experience this, so more people are of the opinion that “it’s getting warmer” than would be the case if the rural/urban population balance was unchanged over the decades.
This urbanisation process makes it easier for the case of man-made global warming to be pushed, of which I’m sure the political advocates are well aware. If the end of this century sees almost everyone living in ever-larger conurbations, as per the UN Agenda 21, with rural temperature stations being increasingly phased out, the surface temperature record will have a large (artificial) component of warming and the general populace will find it congruent with their perception. Sneaky buggers.

Slade
July 31, 2014 5:14 pm

Bill_W I think you mean F not C
40 degrees C would be 100 degrees F
-10 C would be 20 F
90 C would be over 200 F!

July 31, 2014 7:30 pm

So, the climate really is changing:
it is getting calmer.

Greg Goodman
July 31, 2014 8:40 pm

First, it’s good to see that NOAA have found themselves a proper filter in the form of the 9 point binomial. At least peaks and troughs are in about the right places.
Unfortunately the last “Climate at a glance” graph goes back to a crappy running average, which they have not even managed to centre correctly. As a result the 1936 peak leaves the RM obviously too high for the following 10 years.
Oh well, I suppose with their “limited” resources, at least it’s a start.

Bill Treuren
July 31, 2014 8:52 pm

As time passes the number of records will decline. On the first day of recording any data, all data is a record, but unlikely to be exceptional.
Can anyone tell us the impact of time on the frequency of records, were the data random rather than trended?

thingadonta
July 31, 2014 8:57 pm

I’m just waiting for the alarmists to say how terrible it is that extremes of weather are decreasing.

Greg Goodman
July 31, 2014 9:07 pm

Paul, it would be helpful to include the definitions of what you are plotting. What is “normal”, and what does “much above” mean?
http://www.ncdc.noaa.gov/extremes/cei/definition
In each case, we define much above (below) normal or extreme conditions as those falling in the upper (lower) tenth percentile of the local, period of record. In any given year, each of the five indicators has an expected value of 20%, in that 10% of all observed values should fall, in the long-term average,
As per usual with this kind of propaganda, there is the stupid idea that the average over some arbitrary period is somehow “normal” which suggests to the reader that any deviation is “abnormal”.
Then the top and bottom 10% are defined as being “extreme”. While this is technically correct, in the sense that the two ends of a piece of wood can be called the extremes, it is not the way this will be read by the layman public this presentation is aimed at who will understand this as OMG extreme weather. Indeed this is clearly the spin they are trying to put on it.
As always you should include a definition of what you are plotting. I had to go and dig this out from NOAA to understand what you were showing. I suggest you include the paragraph or link above in the article.

Dr. S. Jeevananda Reddy
July 31, 2014 9:08 pm

Peter K/Mi Cro:
In fact I referred to them as urban-heat-island (dense met network) and rural-cold-island (sparse met network) effects. The former is over-emphasized in the averaging of global temperature and the latter is under-emphasized; thus, global temperature is over-estimated. The second issue is that prior to 1957 the unit of measurement was different from the unit used from 1957 onward. Here, while averaging, the former gives less error while the latter gives more error. For example 23.45 = 23.5 and 23.35 = 23.3, and thus a difference of 0.1 changes to 0.2, in °F and °C. So, after 1956 the averaging has a positive effect. Also, with the passing of time met stations are changed (place, instruments, shelter, etc.), and the number of stations has changed with time.
Dr. S. Jeevananda Reddy

Greg Goodman
July 31, 2014 9:25 pm

re. Greg Goodman says: July 31, 2014 at 8:40 pm
Apologies to NOAA: the last graph with the crappy running mean that was not centred correctly was Paul’s handiwork, not NOAA’s. I was misled by the link below it, which I understood to be a link to the original.
It was also incorrectly described as a “ten year average”, which would have one point every ten years. What is shown is a ten-year running mean (a crappy, distorting filter) with a 5-year lag.
Hopefully Paul will take a tip from NOAA and find himself a proper filter.

Greg Goodman
July 31, 2014 9:37 pm

The idea of displaying the summer average / winter average difference is a good one.
The reduction in the annual variation from 40F in 1900 to about 30F in 2000 is notable.

Reply to  Greg Goodman
August 1, 2014 4:44 am

“The reduction in the annual variation from 40F in 1900 to about 30F in 2000 is notable.”
But is it real, or an artifact of the poor sampling in 1900?

Greg Goodman
July 31, 2014 9:50 pm

vuk’ says:
Annual hurricane number above average is often included in the extreme weather events.
Hurricane occurrence is closely related to the AMO (Atlantic Multidecadal Oscillation).
====
You mislabelled the blue line. I’m guessing it is lagged by 20 or so years.
Yes, accumulated cyclone (hurricane) energy does seem to follow the AMO quite closely:
http://climategrog.wordpress.com/?attachment_id=215
Ryan Maue’s graphs show major storms and total storm count have both peaked around 2000-2005.
Cooling signs.

Joel O'Bryan
July 31, 2014 11:14 pm

plowboy55 asked whether there are inconsistencies due to population size differences between NY and Montana.
my answer: Montana is part of flyover country for the Liberals who think nothing worth noting happens between I-5 and I-95.

Mark
July 31, 2014 11:49 pm

I might be missing something here but isn’t this more or less what you’d expect to see the longer data is being gathered where the climate is stable (but possibly cyclic)?

August 1, 2014 12:40 am

Greg Goodman says: July 31, 2014 at 9:50 pm
………….
Hi Greg
The strong link between multi-decadal variability in Arctic pressure and North Atlantic SST, going back to the 1860s, appears to be reflected in the frequency of hurricane events.

August 1, 2014 5:32 am

Keith, you were doing great until: “It appears reasonable, though, to state that man’s CO2 emissions have played a significant part in the increases of the last century.”
You seem to be pretty on point on everything. But keep in mind that mankind is only responsible for 15ppm of the 400ppm of CO2. Just one more small jump and you’ve got it all.
Cheers!
Eric

Pamela Gray
August 1, 2014 8:58 am

But…but…but…UHI effect is likely the cause of warming winter extremes. And that…wait!…we can’t say that because we KNOW that UHI is NOT the cause of summer warming extremes. Never mind.

August 1, 2014 8:13 pm

“That is the deconstructionist words of French philosophy, not words of science, spoken by a trained deconstructionist English major, in other words a historical revisionist. But there *is* a historical truth, exclaimed the little boy.”
Actually, not.
Suppose you have a written record that it was min 56 F in New York City on April 1, 1909.
Is that the truth? How do you check it?
You can assume it is the truth. Of course, if you read the protocol for observers you will see that the procedure REQUIRES them to round up or down.
So, what was the min temperature on April 1, 1909? You don’t know. You certainly can’t test the proposition that it was 56F, and if you believe the written record you only know this:
1. if you believe the observer followed the procedure,
2. if you believe he wrote down the number correctly,
3. if you believe the thermometer was accurate,
4. then you know that the min temperature after rounding was 56.
So, once you give up the illusion that you know the truth, you are left with your best estimate GIVEN the data and GIVEN the acceptance of certain assumptions.
On the other hand, if I tell you that F=MA then you can make a prediction about the future and test it.
It has nothing to do with French philosophy. Deconstruction would admit none of what I stated above.

August 5, 2014 1:39 pm

@Steven Mosher 8/1 8:13 pm
so, once you give up the illusion that you know the truth, you are left with your best estimate GIVEN the data and GIVEN the acceptance of certain assumptions.
On the other hand if I tell that F=MA then you can make a prediction about the future and test it.

What the deuce are you going on about?
Of course you can make a prediction — it is easy to make predictions that fail.
Yes, you can make an accurate prediction about the future
GIVEN F=ma,
GIVEN you know the frame of reference,
GIVEN the measurements of F, m, a, are made accurately,
GIVEN they were written down accurately,
GIVEN the procedures are followed accurately.
Heck, you even have to assume constancy of the measurements over the future to achieve an Accurate prediction. Or you have to accurately predict the changes anticipated in your measurements to make a good prediction. Even mass isn’t necessarily a constant. Ever hear of a rocket, a case where dm/dt is less than zero by quite a bit?
We can launch a satellite into orbit using an 8-minute time-dependent vector with eight degrees of freedom, and air resistance in subsonic, transonic and supersonic modes as a function of time, altitude and orientation. We can, with lots of practice and some risk of mechanical failure. We can.
But how is our ability to use F=ma to launch a satellite in the future any justification for discounting the truth of what someone recorded in a temperature log in 1950?

August 5, 2014 1:54 pm

Edit:
“8-minute time-dependent vector with eight degrees of freedom”:
make that at least sixteen degrees of freedom as a function of time.
3 positional x,y,z
3 orientation
3 velocity
3 thrust components (a primary along the axis, plus gimbal for change of orientation)
1 mass (and I’m not counting dimensions associated with rotational inertia, another 3).
3 air resistance components. (a complex non-linear function of orientation, vehicle velocity, air velocity, vehicle shape, air pressure, temp).
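[The variable-mass point raised above can be checked numerically. The toy sketch below (all numbers illustrative, chosen by the editor) integrates a constant-mass-flow rocket burn using a = F/m at the current mass, and compares the result with the ideal Tsiolkovsky delta-v, dv = ve·ln(m0/mf):]

```python
# Toy check of F = ma with variable mass (dm/dt < 0): step a rocket
# burn forward in time and compare with the closed-form Tsiolkovsky
# result, dv = ve * ln(m0 / mf).  Gravity and drag are ignored.

import math

ve, m0, mf, burn = 3000.0, 10000.0, 4000.0, 60.0   # m/s, kg, kg, s
mdot = (m0 - mf) / burn                            # constant mass flow, kg/s

steps = 60000
dt = burn / steps
m, v = m0, 0.0
for _ in range(steps):
    v += (ve * mdot / m) * dt   # a = F/m with the current mass; F = ve*dm/dt
    m -= mdot * dt              # mass decreases as propellant burns

ideal = ve * math.log(m0 / mf)
print(round(v, 1), round(ideal, 1))   # the two agree closely
```

The prediction works, but only GIVEN the assumed thrust law, GIVEN constant mass flow, and GIVEN that drag and gravity are negligible, which is rather the point being argued.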
