Guest post by Steven Goddard

Photo above from: NY Daily News: Record Snowfall in New York
Now that we have reached the end of the meteorological winter (December-February), Rutgers University Global Snow Lab numbers (1967-2010) show that the just-completed decade (2001-2010) had the snowiest Northern Hemisphere winters on record. The just-completed winter was also the second snowiest on record, exceeded only by 1978. Average winter snow extent during the past decade was greater than 45,500,000 km2, beating out the 1960s by about 70,000 km2 and the 1990s by nearly 1,000,000 km2. The bar chart below shows average winter snow extent for each decade going back to the late 1960s.
Here are a few interesting facts.
- Average winter snow extent has increased since the 1990s, by nearly the area of Texas and California combined.
- Three of the four snowiest winters in the Rutgers record occurred during the last decade – the top four winters are (in order) 1978, 2010, 2008, and 2003
- The third week of February 2010 had the second-highest weekly extent (52,170,000 km2) out of the 2,229-week record
The bar graph below shows winter data for each year in the Rutgers database, color coded by decade. The yellow line shows the mean winter snow extent through the period. Note that the past decade had only two winters below 45 million km2. The 1990s had seven winters below 45 million km2, the 1980s had five, and the 1970s had four. This indicates that the past decade not only had the most snowfall, but also the most consistently high snowfall, year over year.
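For anyone who wants to check the decadal averages themselves, here is a minimal Python sketch of the calculation. The three-column layout (year, month, extent in km2) is my assumption about the Rutgers monthly file, and the sample values below are invented for illustration, not taken from the actual data:

```python
from collections import defaultdict

# Hypothetical sample in the assumed moncov.nhland.txt layout:
# "year month extent_km2" per line (format assumed, not verified).
sample = """\
1999 12 43500000
2000 1 44200000
2000 2 44900000
2000 12 46100000
2001 1 45800000
2001 2 46400000
"""

def decadal_winter_means(text):
    """Average Dec-Feb extent per decade; a winter is labelled by its January year."""
    winters = defaultdict(list)
    for line in text.splitlines():
        year, month, extent = line.split()
        year, month, extent = int(year), int(month), float(extent)
        # December belongs to the following winter
        winter_year = year + 1 if month == 12 else year
        if month in (12, 1, 2):
            winters[winter_year].append(extent)
    # Group winters into decades starting on years ending in 1 (e.g. 2001-2010)
    decades = defaultdict(list)
    for wyear, vals in winters.items():
        decades[(wyear - 1) // 10 * 10 + 1].append(sum(vals) / len(vals))
    return {d: sum(v) / len(v) for d, v in decades.items()}

print(decadal_winter_means(sample))
```

Pointing the same parser at the real file (and the real column layout) reproduces the decadal comparison in the bar chart.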
It appears that AGW claims of the demise of snowfall have been exaggerated. And so far things are not looking very good for the climate model predictions of declining snowfall in the 21st century.
Many regions of the Northern Hemisphere have seen record snowfall this winter, including Washington, D.C., Moscow, China, and Korea. Dr. Hansen’s office at Columbia University has seen record snowfall, and Al Gore has ineptly described the record snow:
“Just as it’s important not to miss the forest for the trees, neither should we miss the climate for the snowstorm,”
A decade long record across the entire Northern Hemisphere is not appropriately described as a “snowstorm.”


kadaka (15:26:33) :
Now, are you saying the calculations for the Sun, out here in the real world, are of similar nature?
Here is more about how these calculations are performed
http://en.wikipedia.org/wiki/Standard_Solar_Model
The agreement is so good that we can use the disagreements to learn about the finer details of the Sun and get more correct decimals in the calculation. But these are just details; the basic evolution is well understood. Also, by making the same calculations for stars (http://en.wikipedia.org/wiki/Stellar_structure), we can check the validity of the calculations over the enormous ranges of mass, composition, and age that the billions of stars in the Galaxy present to us.
Ever since I first heard of global warming, people were saying “I know it must be true; we aren’t getting the snow storms that we had as kids (1960s-1970s).” I would concede that I thought that was right, but I wasn’t sure, because a foot of snow looks much bigger to a kid and people remember things differently. It is disingenuous in the extreme for the AGW crowd to now turn around and say global warming causes more snow. They were not saying that in the 1980s and the 1990s.
The 1960s and 70s were a cold period, so it is absolutely absurd to argue that more snow was caused then by cold global temperatures, that more snow is now being caused by warm global temperatures, and that the only time we have less snow is at “normal” temps. Even the alarmists must see how stupid that sounds.
Joe (04:20:12)
Well … no.
As someone who has spent some time as a commercial salmon fisherman, this seemed … well … doubtful. So as I always do, I went to look for the sources so I could check the numbers. The main source seems to be an article in Nature, quoted here. For the North Atlantic, where the salmon are, the study says that salinity decreased by 0.02 psu (Practical Salinity Units).
Now, the salinity of the big oceans is slightly different, with the Atlantic averaging about 37 psu and the Pacific averaging 35 psu. In other words, the Pacific is 2 psu less salty than the Atlantic. And salmon live happily in both oceans … go figure.
So in fifty years, the Atlantic has gone from 37 to 36.98 psu … is there anyone here that thinks that the salmon care?
Which is why I always say, run the numbers yourselves. You don’t have to be a rocket scientist. I ran the numbers, and I saw that the North Atlantic salinity had decreased by 0.05% … be still my beating heart.
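Willis’s back-of-the-envelope check is easy to reproduce. This snippet simply restates the arithmetic from the figures quoted above (37 psu average Atlantic salinity, a 0.02 psu decline over fifty years):

```python
atlantic_psu = 37.0   # approximate mean Atlantic salinity, as quoted above
decrease_psu = 0.02   # reported 50-year decline from the Nature study, as quoted

# Relative change, as a percentage of the starting salinity
pct = decrease_psu / atlantic_psu * 100
print(f"Relative salinity change: {pct:.3f}%")

# For scale: the Atlantic-Pacific salinity gap the salmon already tolerate
gap_psu = 37.0 - 35.0
print(f"Decline as a fraction of the Atlantic-Pacific gap: {decrease_psu / gap_psu:.1%}")
```

The decline works out to about 0.054%, which is Willis’s “0.05%,” and to 1% of the salinity difference between the two oceans.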
So, salinity changes killing the salmon? Sorry, doesn’t pass the smell test.
And since I catch (or rather try to) salmon in fresh water, they seem quite able to handle NO SALT at all, relatively speaking.
Here is a paper from studies in 2002.
http://co2science.org/articles/V6/N10/C2.php
Here is the salmon collapse of last fall.
http://www.ianwelsh.net/british-columbias-salmon-stock-collapses/
Ruth Curry’s work is quite good in understanding ocean flows, currents and changes.
http://www.whoi.edu/page.do?pid=12455&tid=282&cid=5098
Well I was certainly thankful that most of the snow fell well to the south of us this winter; I think we only got about two feet…
http://www.timesonline.co.uk/tol/news/environment/article7026317.ece
By one of those loveliest coincidences, I just got my quarterly newsletter for Winter from the National Snow and Ice Data Center. Amazingly, no mention of this latest winter. So I sent them this email:
Joe (19:25:30)
Joe, I’m not following this. You claim that the salinity in the North Atlantic is affecting the salmon. I show that’s not so. You come back with a citation about Pacific salmon. I live on the West Coast, I’ve fished commercially for salmon from Monterey Bay to the Bering Sea, and I’m more than aware of the problems with the Pacific salmon.
The sad truth is that no one knows why the salmon are in decline. They have an extremely complex lifestyle. Likely culprits are some combination of overfishing, change in the PDO from warm to cool, destruction of inshore spawning habitat, shifting ocean currents, “first nation” overfishing, parasites from farm fish, and likely some unknown factors.
So I’m not sure what your point is, Joe. What am I missing here?
Leif Svalgaard (16:23:15) :
Yes it does if you make the calculation for the moment where you have just measured all the properties in great detail. The calculation is then only valid for that precise moment, but that is all that is claimed. (…)
Why do you keep beating this dead horse? The dog likes her meat tender but not mashed to a pulp.
Such calculations of the strengths of individual components are known to have issues, which is why destructive testing is still employed. In manufacturing you work with tolerances, minimum and maximum values, so the calculations are affected right there. Rebar is a mass-produced commodity item, not made to critical standards; you could find strength variances at different spots along the same bar. For a bridge of any decent size, all the concrete will not be poured at the same time from a completely homogeneous mix, so there will be variances in strength. Then comes assembly…
For your analogy to hold, you are arguing that a hypothetical Star Trek near-instantaneous scanning will take place on the complete assembled structure, of complete depth at a very fine resolution, from which your definitive calculation can be made with the complete and precise understanding of the exact physical properties of such an assembly and everything in it… With the result being only good for when it was scanned.
Real world, bridge gets a maximum weight classification good for all conditions, with regular inspections over time to verify it should still meet that rating with repairs as needed. And the rating is set lower than the calculated absolute maximum to provide a safety margin.
Yes, I understand that solar calculations are done with time figured in, with recognition of changing conditions, and that the results are good for the specific time input. I also know about manufacturing tolerances and safety margins, and that maximum-loading at-this-moment calculations for bridges aren’t done and would have a practical value of essentially nothing. To meet your original (11:45:24) statement, you would be so far into future hypothetical land that with such scanning technology and raw computing power you could know the position, size, and duration of each and every sunspot weeks in advance, which we certainly cannot do at this time.
Now please stop trying to weaken my trust in solar calculations by insisting they can be as relatively sloppy as a bridge maximum load calculation. I would prefer to continue thinking solar physics is much more precise than bridge engineering, if you don’t mind.
Willis Eschenbach (22:55:01) :
(…)
The sad truth is that no one knows why the salmon are in decline. They have an extremely complex lifestyle. Likely culprits are some combination of overfishing, change in the PDO from warm to cool, destruction of inshore spawning habitat, shifting ocean currents, “first nation” overfishing, parasites from farm fish, and likely some unknown factors.
(…)
Dang, and here I thought the eco-mentalists were certain it was dams keeping them from their spawning grounds, leading to the destruction of dams for the benefit of the fish.
It must be very hard to Save The Earth (TM) these days. You need clean renewable hydroelectric to combat global warming. You need water reservoirs to cope with the droughts caused by global warming. But to save the salmon and prove you are not a selfish human, you have to get rid of old dams that do not have fish ladders, period. You really shouldn’t be adding fish ladders, or even putting up any sort of new dam regardless of whether it has one, due to the CO2 pollution from cement manufacture and the dam construction work. What is an Earth-loving, environmentally-minded person supposed to do?
Public release date: 2-Mar-2010
“Were short warm periods typical for transitions between interglacial and glacial epochs?”
http://www.eurekalert.org/pub_releases/2010-03/haog-wsw030210.php
Now we know why Leif posted this
Leif Svalgaard (17:42:10) :
cyclones 3-5 million years ago: http://www.physorg.com/news186250015.html
“there were twice as many tropical cyclones during this period, and they lasted two to three days longer on average than they do now”
“temperatures were up to four degrees Celsius warmer than today”
on the WMO: “. . . we cannot at this time conclusively identify anthropogenic signals in past tropical cyclone data.” Thread.
He actually does exactly the same thing himself, except he is confident out to billions of years in either direction, not just mere millions.
Steve, and you criticise global warming predictions out to 100 years? LOL, they have nothing on stellar scientists.
Ref – Jimbo (02:23:44) :
“Public release date: 2-Mar-2010
“Were short warm periods typical for transitions between interglacial and glacial epochs?”..
______________________
(-; Thanks Jimbo, I needed that.
Why is it that a few European “SCIENTISTS” seem so much better at their game than our “scientists”? Think it might be diet?
The Atlantic is the most studied ocean, which is why in recent years instruments have been deployed in the Pacific to record the sudden changes that are occurring.
http://www.scienceagogo.com/news/20031117204012data_trunc_sys.shtml
Observations of sea surface salinity in the western Pacific fresh pool: Large-scale changes in 1992-1995
Christian Hénin
Centre ORSTOM, Noumea, New Caledonia
Yves du Penhoat
Centre ORSTOM, Noumea, New Caledonia
Mansour Ioualalen
Centre ORSTOM, Noumea, New Caledonia
This paper investigates the variability of sea surface salinity (SSS) in the western equatorial Pacific fresh pool. For this purpose, we processed data collected from thermosalinographs embarked on merchant ships. Two main cross-equatorial shipping lines that are representative of the oceanic conditions in the western tropical Pacific were selected: the Japan-Tarawa-Fiji line that crosses the equator near 173°E (eastern track) and the New-Caledonia-Japan line that crosses the equator near 156°E (western track). We show that there is a strong SSS variability in the region at monthly as well as interannual timescales. This high variability is attributed to the successive passages of a zonal salinity front, trapped in the (5°N-5°S) equatorial band and migrating in phase with the southern oscillation index. We also found the eastern track to be more variable in SSS because it is more exposed to these SSS front incursions. We carried out a detailed study of the mechanisms responsible for this variability; it revealed that the rainfall input acts as a source of freshwater responsible for the existence of a contrasted distribution of SSS (mainly high-salinity waters in the central Pacific and low-salinity waters in the western Pacific). However, the main mechanism responsible for the SSS variability is zonal advection that makes the two distinct masses of water converge, resulting in a salinity front which shifts back and forth in the equatorial band.
Received 18 November 1996; accepted 16 June 1997.
Citation: Hénin, C., Y. du Penhoat, and M. Ioualalen (1998), Observations of sea surface salinity in the western Pacific fresh pool: Large-scale changes in 1992-1995, J. Geophys. Res., 103(C4), 7523-7536.
Here is a cool map of the Atlantic Salinity changes.
http://www.whoi.edu/page.do?pid=12455&tid=282&cid=897
Trouble is that Mr Goddard hasn’t made it dull and boring enough. Something like this (starts brushing rust off some very unexercised stats skills):
Start…..
Testing the Influence of Climate Change on Northern Hemisphere Snow Coverage
Introduction
It has been suggested that increases in global average temperature should result in less snow coverage. This paper will test three hypotheses concerning that suggestion against the northern hemisphere snow coverage data held at: http://climate.rutgers.edu/snowcover/files/moncov.nhland.txt
The test set will be:
Hypothesis 1: Winter snow coverage has changed over time
Hypothesis 2: Annual snow coverage has changed over time
Hypothesis 3: Earlier springs and later autumns have caused snow coverage to change over time
Assumptions
Winter can be considered as months 11, 12 and 01 (November December January)
Early Spring /Late Autumn should affect periods 02,03,09,10 (February, March, September, October)
Method
The mean and standard deviation of each selection will be calculated for each decade, and the differences in means and standard deviations tested for significance at the 95% confidence level using the Fisher F test and the Student t test.
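For readers who want to reproduce the method, here is a rough Python sketch of the two test statistics using only the standard library. The sample values are invented purely to illustrate the mechanics and are not drawn from the Rutgers file; in practice you would compare each computed statistic against the tabulated critical value at the 95% level:

```python
import math
from statistics import mean, stdev

# Two hypothetical samples of monthly snow extent (km2); values invented
# for illustration, not taken from the Rutgers data.
baseline = [41.2e6, 39.8e6, 43.1e6, 40.5e6, 42.0e6, 41.6e6]
later    = [42.5e6, 40.9e6, 43.8e6, 41.7e6, 42.2e6, 43.0e6]

def f_statistic(a, b):
    """Ratio of sample variances (larger over smaller) for the Fisher F test."""
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    return max(va, vb) / min(va, vb)

def t_statistic(a, b):
    """Two-sample Student t with pooled variance (equal-variance form)."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)
    return (mean(a) - mean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))

print("F =", f_statistic(baseline, later))
print("t =", t_statistic(baseline, later))
```

If |t| stays below the required critical value (about 2.0 at 95% confidence for these sample sizes), the difference in means is not significant; likewise for F and the variances.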
Winter
Visual inspection of the global average temperature graph in GISS (Hansen, J. et al., 2006) suggests that the period 1966 to 1980 was relatively stable, with little net change in average global temperature, and we will use this as a baseline against which we will assess later decades. The rest of the data was divided into the periods 1981 to 1991, 1991 to 2000, and 2000 to 2010. The mean and standard deviations were calculated for each set of samples, and F and t against the baseline were calculated using the standard formulae.
Results
              Baseline    11/81-01/91   11/91-01/2000   11/2000-01/2010
Mean          41134582    42776492      41347281        41521268
σ             6166676     5402701       5070364         6048009
Samples       42          33            27              30

              Base to 81/91   Base to 91/2000   Base to 2000/2010
σd            1337889         1362937           1457637
t             1.227           0.156             0.265
F             0.76757         0.67605           0.96188
t Req’d       2.0             2.0               2.0
F Req’d       1.8408          1.8408            1.8408
Significance  None            None              None
Annual
The data set was divided into the sets 66-76, 77-87, 88-98, 99-2010. The mean and standard deviations were calculated for each set of samples and F and t against the 66-76 set calculated using the standard formulae.
              66-76       77-87       88-98       99-2010
Mean          27431721    25428376    24485368    24913328
σ             15703001    16381646    16305789    16677242
Samples       113         132         132         146

              66/76 to 77/87   66/76 to 88/98   66/76 to 99/2010
σd            2053090          2048510          2021674
t             0.976            1.438            1.246
F             1.088303         1.078247         1.127933
t Req’d       1.96             1.96             1.96
F Req’d       1.3519           1.3519           1.3519
Significance  None             None             None
Spring/Autumn
The data set was divided into the sets 66-76, 77-87, 88-98, 99-2010. The mean and standard deviations were calculated for each set of samples and F and t against the 66-76 set calculated using the standard formulae.
              66-76       77-87       88-98       99-2010
Mean          29545013    27620185    26429362    27540621
σ             16321318    17464479    16246471    16412070
Samples       37          44          44          45

              66/76 to 77/87   66/76 to 88/98   66/76 to 99/2010
σd            3759203          3632963          3631156
t             0.512            0.858            0.552
F             1.144988         1.009235         1.011152
t Req’d       2                2                2
F Req’d       1.8408           1.8408           1.8408
Significance  None             None             None
Conclusion
There is no indication of any change in snow coverage in the northern hemisphere on a winter, annual, or spring/autumn basis; any effects suggested should be attributed to random variation about a stable mean.
Critique
1) I have no idea about any cleansing, homogenisation or aggregation performed on this data prior to its presentation by Rutgers
2) Snow extent is only 1 part of the issue, thickness and mass would need to be considered for a full picture
3) I haven’t taken care to provide identical sample sizes; however, the F and t methods do not require it
4) I haven’t taken care to ensure that the same number of winter periods are present in each sample batch; this would increase the risk of a false positive and would have required further investigation if a weak indication of significance had been detected.
5) I used GISS/Hansen to assess a baseline for hypothesis 1. This may be contentious in some quarters.
6) I selected a baseline by eyeball only; identifying a stable period with a smoothing or curve-fitting function would have been more rigorous.
Further Lines of Inquiry
It’s possible that there is deviation from a stable distribution on a shorter timescale than decadal. I would suggest generating the process control data from the requisite tables and plotting the seasonal groups as 3-element subgroups on an X-bar and R chart, using 66-80 as a sample of a capable process.
This data is hemisphere total, there may be effects at lower latitudes masked by the overall data. I would suggest an experimental design based on ANOVA which would be looking at quantifying latitude and temperature anomaly data.
End….
Ouch that hurt… must exercise more….:-)
So there you go thermageddon postponed by simple high school maths….:-)
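For what it’s worth, Michael’s X-bar and R chart suggestion can be sketched in a few lines. The subgroup values below are invented for illustration; the A2, D3, D4 constants are the standard Shewhart control-chart constants for subgroups of three:

```python
# Minimal X-bar / R control-chart sketch for 3-element subgroups.
# Subgroup values are hypothetical snow-extent figures (km2).
subgroups = [
    [41.1e6, 42.3e6, 40.8e6],
    [43.0e6, 41.9e6, 42.5e6],
    [40.2e6, 41.5e6, 42.8e6],
]
A2, D3, D4 = 1.023, 0.0, 2.574  # Shewhart constants for subgroup size n = 3

xbars = [sum(g) / len(g) for g in subgroups]          # subgroup means
ranges = [max(g) - min(g) for g in subgroups]         # subgroup ranges
xbarbar = sum(xbars) / len(xbars)                     # grand mean (centre line)
rbar = sum(ranges) / len(ranges)                      # mean range

# Control limits: X-bar chart uses A2*R-bar, R chart uses D3/D4 multipliers
xbar_limits = (xbarbar - A2 * rbar, xbarbar + A2 * rbar)
r_limits = (D3 * rbar, D4 * rbar)
print("X-bar limits:", xbar_limits)
print("R limits:", r_limits)
```

Points falling outside these limits on the real seasonal data would flag a shift from the 66-80 baseline process; points inside them are consistent with random variation about a stable mean, matching the conclusion above.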
Michael Ozanne,
Thanks for that! A lot of scientists feel that plain English is bad for their reputation. The best ones (like Feynman) use plain English as a primary tool.
kadaka (00:14:54) :
Why do you keep beating this dead horse? The dog likes her meat tender but not mashed to a pulp.
I thought that my last post buried the horse for good, but since you brought this up again [and again], here goes [and hopefully this will be the last, unless you like to chew on cadavers]:
For your analogy to hold, you are arguing that a hypothetical Star Trek near-instantaneous scanning will take place on the complete assembled structure, of complete depth at a very fine resolution
No, only for the things that are relevant, which for a star or the Sun are only its mass, composition, and age [and to second order if it is part of a binary star or has giant nearby planets – but since none of those things apply to the Sun we can ignore them]. For the Sun, the occasional sunspot is not important for the evolution of the Sun, while a single rebar can have large consequences for a bridge. One includes what is relevant.
you would be so far into future hypothetical land that with such scanning technology and raw computing power you could know the position, size, and duration of each and every sunspot weeks in advance, which we certainly cannot do at this time.
None of these are relevant for the luminosity.
Now please stop trying to weaken my trust in solar calculations by insisting they can be as relatively sloppy as a bridge maximum load calculation.
Rather the other way around, one might consider being less sloppy about bridges [so they don’t fall down so often].
A C Osborn (03:28:17) :
He actually does exactly the same thing himself, except he is confident out to Billions of years in either direction, not just mere Millions.
There is a big difference. We have billions of stars that we can observe and use to check on our models, but we have only one Earth. If we had billions of Earths with different CO2, different ocean/land, different volcanism, different solar variations, different orbits, different biospheres, etc, then we would know how important the different influences were and would likely be better able to project. There is another difference: stars are hot and hot things are simpler than cold things. If I take a modern automobile [with driver] which is a very complex thing, enclose it in a box and heat it to 10,000 degrees, it would turn into a gas with virtually no complexity at all, and its behavior would be a lot more predictable.
Steve Goddard (06:22:42) :
The best ones (like Feynman) use plain English as a primary tool.
Complete nonsense. They use mathematics. They [or some, at least] can explain their results in plain English, but only to a point, as natural language is not sharp enough for this – or rather nobody would listen to a ten-hour natural language explanation of something that can be done in ten minutes with the proper mathematics. You even fail to use plain English properly, I’ll let you correct what is wrong with ‘a primary tool’.
Feynman in plain English :
“I’m smart enough to know that I’m dumb.”
“It doesn’t matter how beautiful your theory is, it doesn’t matter how smart you are. If it doesn’t agree with experiment, it’s wrong”
“Science is the belief in the ignorance of the experts”
“The first principle is that you must not fool yourself and you are the easiest person to fool.”
“If you thought that science was certain – well, that is just an error on your part.”
“I believe that a scientist looking at nonscientific problems is just as dumb as the next guy.”
Steve Goddard (07:24:31) :
Feynman in plain English
Feynman doing science:
“The theory of a general quantum system interacting with a linear dissipative system:
A formalism has been developed, using Feynman’s space-time formulation of nonrelativistic quantum mechanics whereby the behavior of a system of interest, which is coupled to other external quantum systems, may be calculated in terms of its own variables only. It is shown that the effect of the external systems in such a formalism can always be included in a general class of functionals (influence functionals) of the coordinates of the system only. The properties of influence functionals for general systems are examined. Then, specific forms of influence functionals representing the effect of definite and random classical forces, linear dissipative systems at finite temperatures, and combinations of these are analyzed in detail. The linear system analysis is first done for perfectly linear systems composed of combinations of harmonic oscillators, loss being introduced by continuous distributions of oscillators. Then approximately linear systems and restrictions necessary for the linear behavior are considered. Influence functionals for all linear systems are shown to have the same form in terms of their classical response functions. In addition, a fluctuation-dissipation theorem is derived relating temperature and dissipation of the linear system to a fluctuating classical potential acting on the system of interest which reduces to the Nyquist-Johnson relation for noise in the case of electric circuits. Sample calculations of transition probabilities for the spontaneous emission of an atom in free space and in a cavity are made. Finally, a theorem is proved showing that within the requirements of linearity all sources of noise or quantum fluctuation introduced by maser-type amplification devices are accounted for by a classical calculation of the characteristics of the maser.”
Leif Svalgaard (06:33:10) :
So if your understanding is so good, perhaps you can give us the exact date and time of the next solar flare?
And tell us exactly how many sunspots there will be at noon on 11th March 2010?
Leif Svalgaard (06:33:10) :
No, only for the things that are relevant, which for a star or the Sun are only its mass, composition, and age
So you also know the exact age of the Sun as well; my, my, it must be a time-travelling machine you use.
No, let me guess: you calculated it!
After 270 comments and two days, no one has disputed my (mathematical) assertion that the past decade had the greatest winter snow extents in the Rutgers record.
It follows logically that we can’t be at a record decadal high without having increased from any and all decades in the past.
Feynman wrote several excellent books in plain English, including one of my favorites, “What Do You Care What Other People Think?”
“In one chapter, he describes an impromptu experiment in which he showed how the O-rings in the shuttle’s rocket boosters could have failed due to cold temperatures on the morning of the launch. This failure was later determined to be the primary cause of the shuttle’s destruction.”
http://en.wikipedia.org/wiki/What_Do_You_Care_What_Other_People_Think%3F