January solar cycle 24 numbers: a new high for one, continued slumps for others

The NOAA Space Weather Prediction Center has released its January 2014 solar data, and it holds one small surprise: the 10.7 cm radio flux is the highest yet in cycle 24. The other metrics, not so much.

SSN has been about where the much-adjusted prediction line says it should be for the last four months.

Latest Sunspot number prediction

The 10.7 cm radio flux hits a new high.

Latest F10.7 cm flux number prediction

Meanwhile, the Ap magnetic index continues its slump as it has since October 2005, bumping along the bottom.

Latest Planetary A-index number prediction

210 Comments
Gary Hladik
February 3, 2014 11:08 pm

Steven Mosher says (February 3, 2014 at 5:18 pm): “I suspect no one will object to adjusting the raw observed value to one that uses a consistent basis”
Steven Mosher says (February 3, 2014 at 8:06 pm): “When I was kid our house cost 15,000 dollars. I trust none of you will adjust that number.”
Heh heh. Mosh reminds us of the alarmists’ standard “If 99% of doctors said…”
http://wattsupwiththat.com/2014/02/02/if-99-doctors-said/
i.e. a false analogy:
http://en.wikipedia.org/wiki/False_analogy
Good one, Mosh. 🙂

jono1066
February 3, 2014 11:38 pm

when this cycle (24) is all over and the debrief session starts, could I ask for a subject to be added to the agenda:
to construct a `summary for policy makers` with the front page being the original NASA propaganda page indicating the `massive solar cycle 24 being predicted`, along with their suggested SSN chart, of course overlaid with the real (and smoothed) SSN.
it could be quite interesting.

Edim
February 3, 2014 11:40 pm

The SC 24 max is near (~2014/15). Solar polar fields are very interesting at this point. North seems to be shifting again. Both N and S will stay weak (close to zero) for some time (decades).
http://www.leif.org/research/WSO-Polar-Fields-since-2003.png
http://www.leif.org/research/Solar-Polar-Fields-1966-now.png

Zeke
February 3, 2014 11:49 pm

Pamela Gray says:
February 3, 2014 at 10:09 pm Those double peaks remind me of two side by side hills along the Columbia River…
When you look at it that way the raw data with or without adjustment is fine.

DaveR
February 4, 2014 12:16 am

If that were a stock chart, it’s a classic double top, which is surely followed by a rapid decline.

steveta_uk
February 4, 2014 1:13 am

DaveR, don’t forget that the classic double top on a stock chart is only classic if followed by a rapid decline. It has no predictive power whatsoever.

Brett Keane
February 4, 2014 1:49 am

Thanks, Leif and Ren, for noticing my hemisphere… Ren’s call on the October SSW led me to watch for effects. These were noticeable as a return to wintry weather in New Zealand. Leif: any data on the relative lack of SH polar vortex breakdowns? I have been watching the changes here over the last 12 years, as the southern chill grows, and our jet streams meander more. And some warmth-loving plants thrive less, being on their southern boundaries. Brett Keane

rikgheysens
February 4, 2014 2:15 am

Where can I find the official observed 10.7 radio flux number?
– In http://www.swpc.noaa.gov/ftpdir/weekly/RecentIndices.txt, one finds for January 2014: 158.6;
– In http://www.solen.info/solar/, 157.4 (measured).
Which of those two is the official radio flux number?
N.B.: The projected values of the smoothed sunspot number, shown at http://www.solen.info/solar/, would be correct if solar activity followed this simulation:
Sunspot number in
February 2014: 75
March 2014: 65
April 2014: 58
May 2014: 50
June 2014: 62
July 2014: 52
August 2014: 47
In any case, it is almost certain that a new maximum of the smoothed sunspot number is in the making!

Policycritic
February 4, 2014 2:30 am

Gail Combs says:
February 3, 2014 at 8:44 pm
The Fed already did, via inflation (currency devaluation). The ‘value’ of the house is the same but the number of dollars is not.

Actually, it was the banks that inflated asset prices, Gail. Classic example: the 20G house in California in the 70s. As soon as we became monetarily sovereign in 1971, the bankers caught on to its meaning; the public was left in the dark about what it could mean for them (prosperity) and still are. So the bankers took advantage of it. They convinced the Californians to vote in vastly reduced property taxes, the traditional source of income for the state and local govts. [States (and local govts) need revenue, unlike the federal govt.] Once the property tax was drastically reduced, the banks stepped in to get that former property tax money with mortgages on higher house prices now rising because they were freed of ‘property tax’. The State still needed revenue, so it cut services, raised income and sales tax on everyone, and indebted the middle class and the poor. (The rich couldn’t take deductions on their hefty property taxes, but they could on their income tax.) All of this sold at the time as being good for the people.

February 4, 2014 3:16 am

Steven Mosher says:
February 3, 2014 at 8:06 pm
When I was kid our house cost 15,000 dollars.
I trust none of you will adjust that number.

Only a dishonest man (or a climatologist?) would adjust that number to something else and replace the historical number with the new “adjusted” number.
An economist might convert all historical house prices into today’s dollars via the published inflation table for comparison purposes without altering the historical record. An economist would never go back in history and “adjust” some house prices up and others down to support his computer model of CO2 (or the political party in power, or whatever) affecting housing prices. (A climatologist might do so, obviously, as we have watched them do over these last decades.)
Mr. Mosher, I wager that you would be yelling “it is the hottest year ever” as 2 miles of ice pile up over New York — all in adjusted temp reading of course.
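[The economist’s approach described above, converting a nominal price for comparison while leaving the historical record untouched, can be sketched in a few lines. The CPI figures and the function name below are illustrative, not official. – ed.]

```python
# Illustrative CPI-style conversion: express a historical price in
# later-year dollars WITHOUT altering the historical record.
# CPI values are approximate US CPI-U annual averages, for illustration only.
cpi = {1965: 31.5, 2014: 236.7}

def in_year_dollars(price, from_year, to_year, table=cpi):
    """Convert a nominal price between years using a CPI table."""
    return price * table[to_year] / table[from_year]

historical_price = 15_000  # the record itself stays 15,000
print(round(in_year_dollars(historical_price, 1965, 2014)))  # → 112714
```

The original number is never overwritten; the converted figure exists only for comparison.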

cedarhill
February 4, 2014 3:31 am

Recall the NASA article on double peaks?
http://science.nasa.gov/science-news/science-at-nasa/2013/01mar_twinpeaks/
Is this the double peak discussed in March 2013?

AJB
February 4, 2014 3:46 am

Edim says: February 3, 2014 at 11:40 pm
Interesting indeed:
http://postimg.org/image/cgcg3f7cn/full
http://postimg.org/image/5om10kid3/full

February 4, 2014 4:19 am

Solar frequency is low (long cycle) and that seems to slow everything down (even molecules here on Earth). The maximum comes a bit later and is longer, the polar reversal started, but it will take some time to finish and polar fields will remain close to zero for at least another cycle. Interesting times.

pochas
February 4, 2014 4:58 am

The polar vortex depends on the Brewer-Dobson circulation. When it is strong the polar vortex is well-defined, the polar air stays put, and we get warmer. When it is weak, polar air heads southward over northern continental land masses. But there are no large land masses adjacent to antarctica so southern polar air tends to remain isolated. That is why the north pole is so much warmer than the south pole, why the southern polar vortex is more stable, and why arctic temperatures respond faster to changing atmospheric conditions.

Sparks
February 4, 2014 5:22 am

lsvalgaard says:
February 3, 2014 at 9:46 pm
“We have to go with what the data actually says, rather than with what we think looks closer. Here are the official numbers: http://sidc.be/silso/DATA/monthssn.dat”

Monthly  Smoothed
32.9 66.9
64.3 66.8
55.2 64.6
69.0 61.7
64.5 58.9
66.5 57.8
63.0 58.2
61.4 58.1
53.3 58.6
61.8 59.7
40.8 59.6
62.9 58.7
38.1 58.4
57.9 57.6
72.4 57.9 *
78.7 59.9 *
52.5 62.6 *
57.0 65.5 *
66.0
37.0
85.6 *
77.6 *
90.3 *
82.0 *

The average sunspot number from the data you provided not only looks closer to 60, it actually is closer…
62.1 60.6

Richard Barraclough
February 4, 2014 6:32 am

Assuming the sunspot maximum is reached in late 2013 or early 2014, then it will be almost 14 years since the last sunspot maximum. According to the charts at http://www.solen.info only cycle 4 was as long as that (OK – I realise that’s from minimum to minimum, so not strictly comparable, but an indication of a long cycle, nevertheless).
Can any significance be attached to this or useful predictions made about the sun’s future behaviour?

MarkW
February 4, 2014 6:33 am

I thought the prediction line was for the smoothed SSN series. The smoothed value is way below where the prediction line has been, and unless the raw SSN numbers stay this high for the next several months, the smoothed line for January won’t reach the prediction line.

MarkW
February 4, 2014 6:36 am

Mosher, adjusting the flux to a 1 AU constant is open and unambiguous, and everyone agrees on what the adjustment factor should be.
As opposed to the temperature adjustments that are secret and highly arbitrary.
Do you see the difference now?
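[The 1 AU adjustment MarkW refers to is plain inverse-square scaling; a minimal sketch, with an illustrative distance value and function name. – ed.]

```python
def adjust_to_1au(observed_flux, sun_earth_distance_au):
    """Scale a flux observed at Earth to its value at exactly 1 AU.
    Flux varies as 1/r^2, so the observed value is multiplied by r^2."""
    return observed_flux * sun_earth_distance_au ** 2

# In early January, Earth is near perihelion (~0.9833 AU), so the
# observed flux reads high and the adjustment brings it down:
print(round(adjust_to_1au(158.6, 0.9833), 1))  # → 153.3
```

At exactly 1 AU the adjustment is the identity, which is why no one disputes the factor.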

MarkW
February 4, 2014 6:48 am

Policycritic says:
February 4, 2014 at 2:30 am
——-
Banks have no impact on inflation; that is the result of the Fed inflating the money supply.
Bankers convinced people to cut their taxes? The people themselves had no interest in paying less tax? Exactly how did the bankers go about forcing people to act in their own best interests?

lsvalgaard
February 4, 2014 6:49 am

Brett Keane says:
February 4, 2014 at 1:49 am
data on relative lack of SH Polar vortex breakdowns?
It is hard to give you a list of events that have not happened. The first [and only] sudden stratospheric warming ever observed in the southern hemisphere took place in 2002: http://digital.library.adelaide.edu.au/dspace/bitstream/2440/38004/2/04chapters5-8.pdf
rikgheysens says:
February 4, 2014 at 2:15 am
Where can I find the official observed 10.7 radio flux number?
ftp://ftp.geolab.nrcan.gc.ca/data/solar_flux/daily_flux_values/fluxtable.txt
There are three values per day. Due to [small] systematic errors when there is snow on the ground, only the noon value [at 20:00 UT] is reliable [ http://www.leif.org/research/f107-Anomaly-Penticton.png http://www.leif.org/research/F107-Sawteeth.png http://www.leif.org/research/F107-sawteeth-2.png ]. You get slightly different averages depending on whether you only use the [good] noon value or use all three values during the day.
Sparks says:
February 4, 2014 at 5:22 am
The average sunspot number from the data you provided not only looks closer to 60 it actually is closer…
You are still not paying attention. The smoothed values [over a year] are in the right-most column and they show a maximum of 66.9 and will probably reach 70 in the months to come. The prediction is always of the smoothed values.
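[Leif’s point about smoothed values can be made concrete. The traditional smoothing is a 13-month running mean centred on the month, with half weight on the two end months; a sketch under that assumption (the function name is mine). – ed.]

```python
def smoothed_ssn(monthly, i):
    """Classic 13-month smoothed value centred on month i:
    half weight on the two end months, full weight on the
    eleven inner months, the total divided by 12."""
    if i < 6 or i + 6 >= len(monthly):
        return None  # centred window not yet available
    w = monthly[i - 6 : i + 7]
    return (0.5 * w[0] + sum(w[1:12]) + 0.5 * w[12]) / 12

# A flat series smooths to itself; the most recent six months
# cannot be smoothed until later values arrive:
print(smoothed_ssn([60.0] * 13, 6))   # → 60.0
print(smoothed_ssn([60.0] * 13, 10))  # → None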

lsvalgaard
February 4, 2014 7:01 am

Richard Barraclough says:
February 4, 2014 at 6:32 am
Can any significance be attached to this or useful predictions made about the sun’s future behaviour?
I don’t think so, or at best in a statistical sense, which has no guarantee to hold in the future.

Tim Clark
February 4, 2014 7:21 am

What sun………
02/04/2014 0758 am
SSW Mount Hope, Sedgwick County, Kansas
Heavy snow e3.0 inch, reported by co-op observer.
Very heavy snow falling in Mount Hope with visibility
less than a tenth of a mile.
[ Pamela Gray says:
February 3, 2014 at 10:09 pm Those double peaks remind me of two side by side hills along the Columbia River…
When you look at it that way the raw data with or without adjustment is fine. ]
I prefer the larger unadjusted numbers.

lsvalgaard
February 4, 2014 7:26 am

Tim Clark says:
February 4, 2014 at 7:21 am
I prefer the larger unadjusted numbers.
In June the unadjusted numbers are smaller, as we are a bit further from the Sun. If you care about what the Sun is doing, you must use the adjusted numbers.

rgbatduke
February 4, 2014 7:40 am

When I was kid our house cost 15,000 dollars.
I trust none of you will adjust that number.

Seriously, is there some point to this? If you want to discuss adjustments to the thermometric data sets, or some specific set, by all means start a thread on it. Go into detail. I’d welcome it. If you are trying to not so subtly hijack a post on solar activity to defend adjustments made to thermometric data in one or more surface temperature sets, don’t, especially not with oblique and mean-spirited non sequiturs.
Also bear in mind Leif’s comments that scientists are not generally idiots and understand things like elliptical orbits and the 1/r^2 in solar intensity pretty well. As several others have pointed out, whether or not one should adjust the data for this in general depends on the use one wishes to put the data to. If one wishes to make inferences about e.g. the state of the sun, one basically inverts the intensity relationship to obtain knowledge on the surface of the sun. If one wishes to make inferences or predictions about the effect of the radiation on the Earth, then obviously one uses the actual measurement at the Earth.
In this regard, why pick on poor, hapless, helpless solar indices in an article on solar magnetic state? Why not go whole hog and attack the widespread use of a single number for solar intensity at TOA instead of correcting for the 91 W/m^2 swing in this number at the TOA as the Earth orbits the Sun even when/if solar state were perfectly constant?
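[The ~91 W/m^2 swing mentioned above follows directly from the inverse-square law; a quick check, using approximate values for the solar constant and the orbital distances. – ed.]

```python
S0 = 1361.0              # approximate solar constant at 1 AU, W/m^2
PERIHELION_AU = 0.98329  # early January
APHELION_AU = 1.01671    # early July

def toa_flux(r_au):
    """Top-of-atmosphere solar flux at distance r (in AU), via 1/r^2."""
    return S0 / r_au ** 2

swing = toa_flux(PERIHELION_AU) - toa_flux(APHELION_AU)
print(round(swing, 1))  # → 91.0
```

A ~3.3% spread in distance yields a ~6.7% spread in flux, dwarfing any plausible change in the Sun’s own output over a year.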
Note well that if you were to open up a general article on land surface temperature reconstruction, nearly all of the “adjustments” made to the data are not of this sort — simple geometry and the application of laws of physics that nobody doubts or can misunderstand or misapply. They are an attempt to reconstruct an accurate and precise record of temperature from a mere smattering of indifferently made surface measurements, and often the adjustments are made in the complete absence of information as to whether or not they are truly justified, simply because we lack critical information about local state at the time the measurements were made, and those assumptions are bound to be often wrong!
One doesn’t have to go into the distant past to observe this — many of the “official” thermometers contributing to the contemporary measurements of land surface temperatures are utterly and obviously corrupted by siting issues, as has been pointed out repeatedly. My favorite one locally is RDU airport, which tied an all time minimum temperature (for the date) last week at 7 F. The odd thing is that in my backyard 15 miles away, the minimum temperature was 5 F. In the surrounding countryside on all sides, the field of minimum temperatures was 4F to 5F. RDU is in the middle of a web of expressway and increasing urbanization, and the weather station is sitting a few meters away from a huge, heat retaining runway that is doused every few minutes with superheated jet exhaust consisting of CO_2 and H_2O vapor. It always reads hotter than the surrounding countryside.
But how much hotter? How can one “correct” for this obvious UHI effect? That’s very difficult to say indeed, and that is right now, with a huge network of personal (but unofficial) weather stations to observe. Two are within a mile of my house. One (at a local school) is well-sited out in a field, with trees nearby but no concrete particularly close to it. I tend to trust it, as it produces readings within 1F or so of what I read in my back yard. The other is at a house like mine in a neighborhood right next to mine, but they mounted it as far as I can tell right over their driveway facing south. During the afternoons it spikes up to highs that are absurdly higher than any of the surrounding weather stations.
Then there are the problems with the actual inhomogeneity of local temperatures and with just when the high or low temperature of a day occurs. Move a weather station five meters and one can often observe a 1-2 C variation in what it reads across a day — if the house I just referred to put the station elsewhere in their back yard I’m sure their spurious readings would disappear, because I don’t think the house is in its own tiny tropical climate zone compared to mine. Then, while the low temperature of the day is frequently but not always at or just after dawn, the high is not so cooperative — it can happen anytime during the day or even the night and indeed is distributed with nonzero weight across all or most of the 24 hour interval, peaked sometime in mid-afternoon. There are problems with the normalization and accuracy of measuring devices. There are problems with record keeping. And we’re still working on contemporary measurements, often at sites set up by professionals using high quality instruments (but in the case of an airfield, for a completely different purpose than recording global surface temperatures — on an airfield they care about conditions at the tarmac, as it is these that affect landing planes).
It is in some sense fundamentally abusive to repurpose these readings as part of a surface temperature reconstruction in the first place — they are broken and one cannot really fix them. How can one compare the 7 F read at RDU last week to the record it tied back in 1977? In 1977 RDU had a single terminal, and was literally situated in the middle of surrounding woods and cow pastures with only two lightly travelled two lane highways providing access. Today they are rebuilding the second of three terminals for the second time, where the original terminal (still there, although renovated somewhere along the way) is so tiny, stuck out on one end of the airport loop, that people have a hard time finding it or the airlines it services. There are eight lane expressways on two of three sides, the original four lane route 70 is now lined with shopping centers, cow pastures and forests have been replaced by giant housing developments — there is even an “Airport Mall” with shopping, outlet stores, dining, hotels, and a square kilometer or so of tarmac. Perhaps when the wind blows in just the right direction, it doesn’t pull warm air in from one of these urban heat sinks. But alas, when it does it pulls it right over the neighboring runway instead.
I would say, if we were to “adjust” its record, that there is absolutely no doubt that we set a low temperature record (for the date) last week for the entire triangle area if not the whole state, but officially no such thing occurred, we merely tied a record set (paradoxically enough) in 1977, not in 1910 or 1870 or whatever date in the remote past one might expect given the 0.5 to 1.0 C global warming we supposedly experienced in the intervening century (depending on whether one takes one’s GISS with one’s coffee, or prefers the more moderate HADCRUT4 on a bagel).
In the end, if I were to try to “adjust” this record, renormalize, account for UHI, I would have to take the field of surrounding personal weather station reports, come up with some sort of criterion for eliminating the obviously broken ones like the one that reads 2 C too high where I can — note well that this is only possible because I have three very nearby readings (one of which I trust a great deal as it is my own and I know right where it is and how it varies over the day and when it reads incorrectly and when it reads correctly) it is inconsistent with, but what would I do if there were two incorrectly sited stations and one correctly sited? What would I do if one were not my own? Even in the best sited station in the area, how do I cope with the spurious variations due to wind direction, due to variation in the instrumental readings themselves as the electronics experiences some occult instability? One thing we teach in physics lab is that one never measures the true length or true time, that one always has to deal with ignored causes such as friction and drag forces and non-inertial reference frames, so that even in the humblest of experiments measuring the local surface gravitational acceleration constant g, one is never going to get the “accepted value” and besides, g isn’t really a constant. Nor is it easy to “correct” for all this, not without knowing the answer ahead of time. In fact, it is difficult and very, very expensive and cannot be done with ordinary undergrad lab equipment.
Then we could talk about GAST vs GASTA. There is a persistent myth that we can know and compare GASTA over a century and a half (or more!) of this incredibly messy, poorly documented data acquired from indifferent instrumentation with uncertain siting and irregular sampling and get results that are precise enough to be of use in estimating how much it has warmed (GASTA) while still leaving us in a state of profound ignorance as to what the mean temperature itself is right now (GAST). GASTA, we are told, is precise to within 0.15 C in HADCRUT4 (and note well, this is with the best of modern instrumentation!) GAST, NASA tells us, isn’t known to within 1C.
Of course, the 0.15C only refers to the last 30 or so years at best. GAST, GASTA, the average surface temperature in 1900 is pretty much a mystery. Maybe we know it within a half a degree. Maybe it is within a degree. Land measurements are sparse and it is absolutely impossible to assess errors like the ones I can directly observe within a mile of my own house, all equipped with fully automated modern instrumentation that never sleeps, gets drunk, takes a holiday, and that records the full 24 hour interval at a granularity of a minute or so, so that high and low temperatures really are high and low temperatures even if the one occurs at midnight and the other in mid-afternoon (and with a precision of at least a full 0.1C). And then, in 1900 we knew nothing at all about polar temperatures (especially south polar temperatures) and knew virtually nothing about sea surface temperatures. Perhaps 70% of the Earth’s surface was completely unsampled (or sampled by methodology so crude that it might well produce measurements that are worse than no measurement at all, just as undergrad lab measurements of g would be, by being systematically biased too low but with no way to ever correct for that if g were not a constant!)
If you insist on using the price of your house as a metaphor, there is one important way in which that metaphor is apropos. The price of a house is not a fixed reflection of its absolute value according to some objective measure, it is a reflection of what people are willing to pay to someone that derives advantage from maximizing that number. When your house was purchased originally, that value was, indeed, what people were willing to pay in dollars. Dollars, of course, are not like degrees, are they? Economics and physics are very similar, no doubt, but degrees F, C, or K are extremely precisely defined in a time-independent way. Instrumentation can be variable, measurement technique can be variable, siting and methodology can be variable, but 1 K then is 1 K now.
Then, house value in dollars is not measured, it is voted on. In the hands of a skilled realtor and with a coat of cheap whitewash, its “value” (as measured in sale price) can vary tremendously from one week to the next, from one potential buyer to the next. Even “house appraisal” is highly variable and subject to collusion and whim — the appraisal of a seller and buyer often not agreeing to within 10% for obvious reasons — both parties have an interest, one in maximizing and the other in minimizing.
The great tragedy of modern climate science is that it has, indeed, become a lot more like selling a house than like making an objective scientific measurement with honestly stated error bars and other limitations in the result. Every single person participating makes a living from the public’s willingness to “buy” the argument that we are on the brink of a climate disaster. The results of every measurement are literally brokered to grant agencies, to the governments of every nation on Earth, and to the public underlying those governments, with a derived value that is 90% due to the perception of disaster, 10% to actual value.
Suppose you wish to convince me that we are in the grip of runaway inflation so that I will invest a tremendous amount of my personal resources in buying contemporary real property, so you try to convince me that housing prices are rising through the roof compared to what they have ever been before. You point to all sorts of new construction (all of which you manage and sell) as evidence of this, and make me sit through a long, dreary presentation on the causes of inflation and how, even though it appears that there are many houses on the market that haven’t increased in price, you can prove that after adjusting for this and that they really have.
I, of course, doubt this. I know that you make money — a lot of money — selling houses, and you make less money selling them at a fair market price than you make if you can get people to pay a bit extra with a glib argument. I look into things on my own. I discover a lot of 50 houses that have been on the market since maybe 1870 — they are all nice, old houses, well-maintained, and they sell pretty much every year. They constitute a record of inflation that isn’t subject to your complex system of “average house value” adjustments for this and for that, because people always pay for these houses in pure gold, not “adjustable” dollars.
It turns out that 25 of the fifty houses sold for a peak price not in the last decade, but in the 1930s. The other 25 sold for a peak price not heavily weighted to the last couple of decades, but spread out pretty uniformly over all the years in between, with about as many selling in the early 1900s as in the late 1900s. At this point I might — with some reason — doubt your system of adjustments.
As a simple matter of fact, roughly half of the state record high temperatures were set in the 1930s. The other half were set in a fairly balanced way over all of the other decades. A perfectly reasonable person might conclude that the continental United States was, as a whole, hotter in the 1930s than it is today, and indeed that it isn’t particularly hotter today than it was anytime in the last 140-150 years. Maybe a bit, but not a whole lot. If one looks at the distribution of continental high temperature records, the same thing is observable.
The thing about being a salesman is that a good one actually believes in what they sell. They have to! Humans are very good at picking up on insincerity. When they try to sell you a house, or a car, for exactly what is on the sticker, they manage to convince themselves that this house really is special and worth that extra 10%, or they manage to convince themselves that the overall market really is higher than what people are paying. They are always completely honest, but honest in a way that is constantly trying to get a potential customer to pay the asking price. And every sale they make reinforces their belief — the true value of what they sell is precisely what people are willing to pay. The narrative is the value.
A good recipe, perhaps, for economics in the free market. A terrible recipe for science. We do not vote on the value of g, we measure it. We do not average over measurements of g conducted by undergrads taking intro physics and use them to correct precise measurements; if anything we go the other way. We do not — or should not — make support of research contingent on a perception that the science must work out any particular way, not unless we want science to devolve into confirmation bias tainted car sales, where a good story in consumer reports or a sexy model in car ads set the price. If a scientific narrative fails various simple sanity checks, it is the responsibility of science to be cautious and avoid overreaching its own conclusions, especially when one is confronted with the dread career-ending null result, with a result that continues an essentially boring narrative rather than an exciting, let’s save the world narrative.
I’m a believer not in adjustments per se, especially of the records in the remote past where we cannot possibly know how to correctly adjust them, but in consistency, in justified adjustments in instrumentation that can be directly defended and cross checked. At this point, we have had good instrumentation for ten years (post-ARGO, perhaps), adequate instrumentation for 34 or 35 years, and have sparse records based on good instrumentation for around 60 years. Some of that data is still subject to sanity check correction. At this point the records based on different instrumentation are pretty well mutually constrained — nobody will believe a steadily ramping GISS if HADCRUT4, UAH, and RSS all remain flat. This will become even more so over time — ARGO may or may not be precisely normalized (and is still sparse, dammit) but over time it will only get better as it is forced to be in agreement with multiple independent checks.
In thirty to fifty years, we might actually know how to adjust instrumental records. We will get there by finally having sufficiently good and complete instrumentation that we can get the actual answer without adjustment, as usual. You cannot squeeze blood from a turnip, or information from corrupt antique data. You cannot build a working antiballistic missile system from even a precise set of measurements of g from the early part of the 20th century. We are only now getting to where we can measure local gravity (via e.g. GRACE) well enough to correct for gravitational variation in measured sea level, for the variations an antiballistic missile has to deal with as it follows a long trajectory over a VARIABLE gravitational field that does not, in fact, follow any particularly simple “just physics” formula. Our ability to measure things like surface albedo in real time, global variations in measured surface radiation at the TOA, and more are still next to nonexistent — we can sample a bit, no more. Countless experiments remain to be done, some of them bone simple, such as directly measuring the full upwelling spectrum at the top of the troposphere at one specific location over e.g. the Sahara Desert, where one should be able to directly measure the CO_2 linked GHE with a minimum of confounding influences over timescales as short as a decade.
In the meantime, let’s not get crazy, OK?
rgb

Tim Clark
February 4, 2014 7:57 am

[ lsvalgaard says:
February 4, 2014 at 7:26 am ]
Leif………
I think you missed the allusion Pamela was referring to, or perhaps I did too.
Hint: It was not about the sun’s figures.