As many readers know, I follow the Average Magnetic Planetary Index (Ap) fairly closely as it is a proxy indicator of the magnetic activity of our sun. Here is the latest Ap Graph:
I’ve pointed out several times the abrupt and sustained drop in the Ap Index that occurred in October 2005.
David Archibald thinks it may not yet have hit bottom. Here is his most recent take on it.

The low in the Ap Index has come as much as a year after the month of solar cycle minimum, as shown in the graph above of 37-month windows of the Ap Index aligned on the month of solar minimum. For the Solar Cycle 23 to 24 transition, the month of minimum is assumed to be October 2008. The minimum of the Ap Index can be a year later than the month of solar cycle minimum, and the period of weakness can last eighteen months after solar cycle minimum.
The graph also shows how weak this minimum is relative to all the minima since the Ap Index started being measured in 1932. For the last year, the Ap Index has been tracking parallel to the Solar Cycle 16 – 17 minimum, but about four points weaker. Assuming it has a character similar to the 16 – 17 minimum, the month of minimum for the Ap Index is likely to be October 2009, with a value of 3.
The shape of the Ap Index minima is similar to, but inverted from, the peaks in neutron flux, which usually come one year after the month of solar minimum.
David Archibald
January 2009

if [ "$topic" != "solar" ]
then
    discard "$topic"
else
    accept "$topic"
fi
🙂
Ah, the things you think about while fishing. The current state of the sun, whether it leads to a grand minimum or not, should lead to a more stimulating and informed debate on climate. Mathematically, the variation in TSI is not sufficient to cause a little ice age on its own. However, reduced TSI with a cold PDO and potentially a cold AMO could produce significant cooling.
The current La Nina event appears to be driven by the PDO reversal. This can lead to 25 to 30 years of stable or slightly cooling global temperature averages. If the reduced TSI continues during the cool PDO, there is a greater possibility of cooling. (My opinion, not Leif’s.) Once a true trend can be identified, it will force quite a few well educated fellows to rethink their positions.
The one person I feel has the best grasp of the situation is A.A. Tsonis. He has one particularly interesting paper dealing with the synchronization of weather oscillations. His math is well over my head, but produced some interesting and very logical conclusions on natural climate variability. A paper I wish more people on both sides of the debate would read. Then maybe we can get to the real question that needs to be answered. What is the Earth’s climate sensitivity?
When scientists took two estimates of climate sensitivity and divided by two to get an agreeable number, I felt politics instead of science was driving climate science. CO2 does behave as a greenhouse gas. By itself, a doubling of CO2 can increase retained radiation by 1 W/m^2, plus or minus a quarter watt roughly (about 0.75 degrees C of warming). That is a lot less than the Hansen compromise of 3 degrees C. The question is the ratio of water vapor feedback to forcing. According to Tsonis’ results, it is much less than the Hansen et al. estimates so brilliantly deduced from outdated but well cited papers, such as Lean 2000 for example.
Tamino, a well educated and normally rational thinker, finds Lean 2000 “plausible” despite numerous other papers that greatly reduce the TSI impact suggested in her original paper. No offense to Dr. Lean; hers was a good first attempt, but science does occasionally make progress. Climate science will not make progress until well educated and normally rational thinkers regain their natural curiosity and skepticism.
The next decade or so should force them to regain their natural curiosity.
Then what do I know? I am just a fisherman.
braddles (03:32:15) :
This is only the second thread “hijacked” by the programmers, and it’s a good thread for it since there doesn’t seem to be much follow-up on Archibald’s prediction. I recommend you skip following this thread; there are plenty of others.
As for better moderation, you can bet the moderators at RealClimate would have shut this discussion down by now. (Some of them even work on ModelE.) Be careful of what you wish for.
I am rather bemused by some of the suggestions (object-oriented Perl; that FORTRAN is not a controlled language), the lack of appreciation for the amount of number crunching to be done, the focus on programming language rather than understanding the physics, etc.
I did write a longer post last night, but lost it and don’t want to take the time to rewrite it. I’ll just note that C++ fans might want to check out the Blitz++ library, see http://ubiety.uwaterloo.ca/~tveldhui/papers/iscope97/index.html
John Finn (03:56:57) :
Was there cooling during SC20? The mid-20th century cooling began in the 1940s. SC20 didn’t start until 1964 – around 20 years later. The end of SC20 (in 1976) actually signalled the start of the modern warming era.
Like a lot of the solar theory stuff – things just don’t add up.
The Sun is but one driver; we also had the PDO shift to negative around 1940, and there is some talk about the early nuclear tests also having some impact. I am not sure if atomic detonations affect climate; maybe someone has some data on this?
But you can’t deny SC20 was a cool period, and once it ended the temperatures moved up immediately, as you noted.
Here is Dr. Judith Lean talking about the solar cycle influence and global warming theory in 8 minutes (she is a very fast talker).
A couple of interesting points – she pulls the ENSO, volcanic, GHG, and aerosol influences out to arrive at the solar cycle residual. And then she splits the atmosphere into the surface, middle troposphere, and stratosphere, showing a much bigger solar cycle influence in the stratosphere (+/-0.3C) versus the surface (+/-0.1C); the lower troposphere is also higher (+/-0.2C). Something we will have to take into account when we talk about the solar cycle in the future.
Per my earlier posting, section labeled: NOAA & GISS:
A bit of errata:
At this point though, my ‘first blush’ is that NOAA has the false precision problem. They hand over ‘monthly mean’ data in 1/100 degree C precision. I don’t see how that is even remotely possible.
That ought to have been 1/100 degree F precision. Sorry… Staring at code too long can make you a bit fuzzy…
Speaking of which, I’ve done a detail pass of step0 and a moderate pass of step1, along with a cursory pass of steps 2, 3, 4, & 5.
The code style varies dramatically from step to step. My impression is that this is either tied to the ‘era’ when it was originally written or, more likely, to the individual who had that bit to write. Some of the style is a bit, er, puzzling (for example, every time a FORTRAN program is used, which is often, the source is recompiled inline to a binary, run, and then the binary is deleted…)
The overall flow is that step0 is a pre-process glued on after steps 1–n had been running for a while. It takes in the raw data from several sources and does a basic ‘stick the datasets together and toss clearly-trash and duplicate records’. It also converts the US data to C to better merge with the world data.
The only questionable thing I’ve found so far is a conversion from F to C. It takes F monthly means carried to 0.01 precision (from data originally read at 0.1 resolution and converted to whole degrees for reporting, meaning everything after the decimal point is two digits of ‘false precision’) and converts them to C in tenths of a degree.
The part I’m not sure about is the precedence order of evaluation and the overall impact on final validity of the temp in C.tenths. The code is:
if(temp.gt.-99.00) itemp(m)=nint( 50.*(temp-32.)/9 ) ! F->.1C
Which looks for a ‘data missing flag’ of -99 and as long as valid data are in the field, converts the temp (a real data type in -xxx.x F format) into itemp (an integer data type in XXXX C format, with the last digit being tenths, for each month m [do loop counter is m 1,12] )
I learned FORTRAN under F77 rules long ago and don’t remember a ‘nint’ function (this code is F90), but I do remember that 9 is an int while 9. is a float. So we have -32.(float) and 50.*(float) but /9 int …
This all leaves me a bit unsure what the actual computation would be (with type conversions) and what the actual outcome would be. When I learned it, the type conversions were automatic (and often unplanned…) while ‘nint’ looks like a type conversion function wrapper. Any ideas welcome…
I’m not too worried, though, since the fractional part of the temperature in F is fictional anyway… (Well, I’m worried, just not about this particular bit of computation putting excessive error into the fractional digits, since they are already in doubt…)
After that, steps 1 and 2 do the basic shake & bake of tossing old or odd records and ‘adjusting’ urban UHI to rural neighbors. (Step 2 is the only Python step – maybe a rewrite?) Then steps 3, 4, and 5 do the mapping to ‘zones’ and ‘grid boxes’ and the conversion from temperatures to anomalies, and then the averaging of averages begins in earnest. The anomaly data are once again changed to match what they ought to be, based on what their neighbors are doing. Then the grids, zones, hemispheres, etc. get repetitively averaged together to reach The One True Mean …
While it is too early to speak definitively (I’ve not yet done a detailed pass of the anomaly, grids, zones, hemispheres, etc. code), my overall reaction is: by the time you have this many math steps with massaging, averaging, averaging of averages, type conversions, precision conversions, neighbor adjustments, neighboring-region adjustments, etc., how can you have any idea that the final number is representative of anything in the real world? Especially in the ‘tenths place’.
It takes me about 2 days to get through a section, and the later ones look harder. Since I can only put a day or two per week on this, it will likely be several weeks to months before I reach an end.
To all the folks who suggested an open source rewrite: I think that would be a fine idea. A large quantity of this code could be dumped just by getting rid of all the recompile steps and all the data file shuffling that constantly happens (input file -> copy -> process -> new file -> copy …) for several files at each step.
Though the first thing, and I think the more productive one, would be to simply pick up the NOAA already-UHI-adjusted data (avoiding the Hansen-method UHI shake & bake), merge it with clean world data, and find a trend “unadulterated by the reference station method”.
Snippets from the gistemp.txt ‘readme’ type file:
Step 2 : Splitting into zonal sections and homogeneization […]
To speed up processing, Ts.txt is converted to a binary file and split
into 6 files, each covering a latitudinal zone of a width of 30 degrees.
The goal of the homogeneization effort is to avoid any impact (warming
or cooling) of the changing environment that some stations experienced
by changing the long term trend of any non-rural station to match the
long term trend of their rural neighbors, while retaining the short term
monthly and annual variations. If no such neighbors exist, the station is
completely dropped, if the rural records are shorter, part of the
non-rural record is dropped.
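To make the readme’s description concrete, here is one way that adjustment could look in Python. This is my reading of the paragraph above, a sketch only, not GISS’s actual algorithm; the function names and data shapes are my own:

# A sketch of the homogenization idea as the readme describes it (not
# GISS's actual code): replace a non-rural station's long-term linear
# trend with the mean trend of its rural neighbors, keeping the
# station's own short-term variation around that trend.
def linear_fit(years, values):
    """Least-squares slope and intercept."""
    n = len(years)
    mx = sum(years) / n
    my = sum(values) / n
    sxx = sum((y - mx) ** 2 for y in years)
    sxy = sum((y - mx) * (v - my) for y, v in zip(years, values))
    slope = sxy / sxx
    return slope, my - slope * mx

def homogenize(years, urban, rural_series):
    """urban: list of values; rural_series: list of (years, values)
    for neighbors covering the same span. Returns the adjusted series."""
    u_slope, _ = linear_fit(years, urban)
    r_slope = sum(linear_fit(y, v)[0] for y, v in rural_series) / len(rural_series)
    mx = sum(years) / len(years)
    # remove the urban trend, impose the mean rural trend; the
    # monthly/annual wiggles around the trend line are untouched
    return [u + (r_slope - u_slope) * (y - mx) for y, u in zip(years, urban)]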
Look at the recent jet stream, with air plunging down from Alaska and warm air running up from the Gulf, and tell me that ‘latitudinal’ makes sense. I think there is a basic flaw here in assuming that air masses are uniform along a latitude band on a long term basis (enough so to allow ‘adjustment’…)
Step 3 : Gridding and computation of zonal means (do_comb_step3.sh)
————————————————
A grid of 8000 grid boxes of equal area is used. Time series are changed
to series of anomalies. For each grid box, the stations within that grid
box and also any station within 1200km of the center of that box are
combined using the reference station method.
And after the stations have been history corrected to what’s ‘right’ for their latitude, their anomalies are further corrected based on their grid box neighbors, and whatever a ‘reference station method’ is… (I looked at that bit of code and it will take a bit more time to unravel…)
And then there are two more steps left to go after this…
So again, I’d use the NOAA data to ‘do the right thing’ for an audit check before I’d rewrite this into another language …
Harold Ambler (07:09:52) :
if you could leave a comment (“Hello” would be enough) on my weather and climate blog (http://www.talkingabouttheweather.com).
Done, under Enhanced Gore Effect…
squidly (10:15:43) :
sdk (09:31:49) : my vote would be to use C, and a procedural approach […]
If you are going down the C road, I would recommend C++ and writing it in a modular / OOP architecture.
Before everyone gets all worked up about a million-line project… The actual code involved is fairly small and rather direct. Mostly, so far, it’s been taking table data and doing minor edits or simple math. Frankly, it’s the sort of thing best done with a database and a simple report language. (Pick up set a, load fields; pick up set b, load fields in a different order, etc. Dump a file in yet another order with type conversion.)
While this may change in later steps, it’s mostly the fragmented nature of it that is confusing. (4 scripts in a mix of sh and ksh feeding a half-dozen FORTRAN programs of about 1/4 to 1/2 page each, with 4 input files and 3 output files plus a dozen gratuitous intermediate files… just to join 3 similar data sets and turn F to C in one of them…) It’s all the format statements, variable declarations, file reads/writes, etc. that accomplish nothing that make it slow slogging. If it were rewritten with efficiency in mind, I think you are looking at about one programmer-week, or less, whatever language they used (as long as they were familiar with it…)
Conceptually the process is trivial. Pick up NOAA data. Pick up Antarctic data. Pick up GHCN data. Convert to consistent degrees C or F. Toss broken data (missing or interpolated [flagged with M] and very old, pre-1880; remove dupes and overlaps), then produce a merged data set. Plot trends.
Now Hansen has a bunch of boxing, gridding, etc. steps with averaging of averages of averages that you could fairly easily duplicate, but frankly I’d just compute the trend for each station and average those trends. If that trend is up over ‘a very long time’ we have warming. If not, we don’t. Further, I wouldn’t use the monthly mean; I’d do one trend with the monthly MAX data and another with the monthly MIN data. Then you can see whether we have a warming trend, a trend to wider ranges, whatever.
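For the curious, here is a minimal sketch of that ‘average the per-station trends’ idea in Python. The (station, year, value) record format is my own assumption for illustration, not GHCN’s actual layout:

# A sketch, not GISTEMP: fit a least-squares slope to each station's
# series, then average the slopes across stations.
from collections import defaultdict

def station_trends(records):
    """records: iterable of (station_id, year, temp) tuples.
    Returns {station_id: slope in degrees per year}."""
    by_station = defaultdict(list)
    for sid, year, temp in records:
        by_station[sid].append((year, temp))
    trends = {}
    for sid, pts in by_station.items():
        n = len(pts)
        if n < 2:
            continue                      # cannot fit a trend to one point
        sx = sum(y for y, _ in pts)
        sy = sum(t for _, t in pts)
        sxx = sum(y * y for y, _ in pts)
        sxy = sum(y * t for y, t in pts)
        denom = n * sxx - sx * sx
        if denom:
            trends[sid] = (n * sxy - sx * sy) / denom
    return trends

def mean_trend(records):
    """Average of the per-station slopes; None if no station has a trend."""
    trends = station_trends(records)
    return sum(trends.values()) / len(trends) if trends else None

# e.g. mean_trend([("A", 1950, 10.0), ("A", 1960, 10.5),
#                  ("B", 1950, 8.0), ("B", 1960, 8.1)]) -> 0.03 deg/year

The same two functions run unchanged on monthly MAX or MIN series; just feed them the other column.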
My ‘next step’ is going to be a catalog of the number of pieces and the lines of code in each, just to make it clear that this isn’t some incredibly complex thing… Frankly, I was surprised at just how small (& limited) it is. It’s not a GCM, it’s a data massage on some single-digit-MB files…
Syl (11:01:17) :
E.M.Smith
Good work!
Thanks!
Probably not, unfortunately. NOAA does not actually make UHI adjustments to the data, but makes an allowance in the uncertainty instead, from what SteveM has been able to find.
I looked at the site you posted. Interesting but a lot of it is speculative.
At this point all I can say is that the NOAA download site claims they have a UHI adjusted data set. Investigating exactly what that means will have to fall to someone else, as I’m up to my eyeballs in code right now…
What I can say with certainty is that I’m not impressed with the ‘reference station method’ in GISS, and in fact I suspect you would be better off doing no UHI adjustment and just looking at the raw result, THEN deciding if you needed to do something to fix it. (And I’d even prefer a ‘fudge number’ assigned to each station ID based on the site evals being done, to the semi-random shake & bake in GISS …)
For an open source re-coding of the various models to have true value, it should be in an “open language”.
By that I mean a computer language that is so ubiquitous that it is available to anyone of average means and can compile and run on just about any hardware. It should also be a language that has robust well understood libraries and functions.
To serve the purpose of being an “open source” independent review, you should ensure that a couple of college students could run the code on a Linux Beowulf system built from stray lab computers, or on a multimillion-dollar mainframe.
Even if you only re-code “black box” modules into open source code, you have accomplished some useful work. To re-code it, you must understand the black box, which would at least lead to a paper annotating the original code and what it was doing, and perhaps pointing out some boundary conditions where the code block does silly things, such as producing behavior that is undefined according to the language specification, or using questionable mathematical and statistical procedures.
In that regard, I would be inclined to stay away from all proprietary languages that require high license costs or run on limited numbers of platforms, but to use languages like Perl, C, C++, Java, SQL, etc., which have a large base of experienced coders and compile on just about any hardware, as appropriate for the specific task at hand in that module. Build each black box using the code that is most appropriate for the primary task of the module, and then make sure its input and output have good error checking routines so it provides the proper input/output to the next black box.
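As an illustration of that ‘black box with checked edges’ idea, here is a hypothetical module in Python. The 12-values-per-record layout and the -9999 sentinel are assumptions invented for this example, not GISS’s actual conventions:

# Illustrative only: a self-contained "black box" that converts monthly
# mean temperatures in degrees F to tenths of a degree C, validating
# its input and output at the edges.
MISSING = -9999

def f_to_tenths_c(temps_f):
    """temps_f: 12 monthly means in degrees F (MISSING = no data).
    Returns 12 integers in tenths of a degree C."""
    if len(temps_f) != 12:
        raise ValueError("expected 12 monthly values, got %d" % len(temps_f))
    out = []
    for t in temps_f:
        if t == MISSING:
            out.append(MISSING)            # pass the sentinel through
        elif -130.0 <= t <= 140.0:         # sanity bounds on the input
            x = 50.0 * (t - 32.0) / 9.0    # tenths of a degree C, as a float
            # round half away from zero, like Fortran's NINT
            out.append(int(x + 0.5) if x >= 0 else int(x - 0.5))
        else:
            raise ValueError("temperature %s F out of plausible range" % t)
    return out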
Regarding the discussion of ice age triggers, there have also been suggestions that extraterrestrial dust is a possible trigger: periodically the Earth passes through a “dusty region of space”, which might have climate impacts.
http://www.astrobiology.com/news/viewpr.html?pid=20481
Although there is debate on this issue as well.
http://www.sciencemag.org/cgi/content/summary/280/5365/828b
The book Earth Under Fire discusses the possibility of cosmic dust falls being associated with climate and, interestingly enough, having a cycle that matches the solar cycle.
http://books.google.com/books?id=FRr231sanc4C&pg=PA125&lpg=PA125&dq=ice+age+interplantary+dust&source=web&ots=Gxh0rWkzih&sig=ixlwYxpzMpOtCzh0OHV1KRDUp7g&hl=en&sa=X&oi=book_result&resnum=5&ct=result#PPA121,M1
Larry
Carsten Arnholm, Norway (17:06:51) :
Squidly (16:25:40) :
@ Richard M (14:55:24) :
If the existing code is non-trivial and not well understood, the second option will be the fastest.
Before this turns into a 100-person, PM/MS-Project-controlled pyramid-building effort 😉 perhaps an example would be helpful…
Step0 has 7 FORTRAN programs and 4 wrapper scripts. The scripts vary in size, but many are about 1 page. Below I’m going to paste the entirety of one of the FORTRAN programs. It’s one of the larger ones, and without the context it’s not very understandable, but just look at the size. It isn’t like we’re simulating Alaska here… It’s just a fancy cut/paste data filter…
      integer itmp(12),itmp0(12)
      real dif(12)
C**** replace GHCN (unit 2) by USHCN (unit 1) data; new file: unit 12
C**** assumption is that IDs are ordered numerically low->high in both
C**** input files
      open(3,file='ushcn-ghcn_offset_noFIL',form='formatted')
      open(1,file='USHCN.v2.mean_noFIL',form='formatted')
      open(2,file='v2.meany',form='formatted')
      open(12,file='v2.meanz',form='formatted') ! output
      open(20,form='formatted',file='GHCN.last_year')
!     CountryCode,ID,year,T-data
      iyrmax=0
      read(3,*) iddiff,dif
   10 read(1,'(i3,i9,i4,12i5)',end=100) icc0,id0,iyr0,itmp0 ! read USHCN
      if(iddiff.lt.id0) read(3,*) iddiff,dif
   20 read(2,'(i3,i9,i4,12i5)',end=200) icc,id,iyr,itmp ! read GHCN
      if(iyr.gt.iyrmax) iyrmax=iyr
      if(id.lt.id0.or.icc.ne.425) then
C**** just copy non-USHCN station (incl. non-US stations)
        write (12,'(i3,i9.9,i4,12i5)') icc,id,iyr,itmp !
        go to 20
      end if
!!!   if(id.gt.id0.or.iyr.gt.iyr0) stop 'should not happen'
      if(id.gt.id0.or.iyr.gt.iyr0) then
C**** id-GHCN>id-USHCN or same ID but extra years: merge in USHCN data
   30   write (12,'(i3,i9.9,i4,12i5)') icc0,id0,iyr0,itmp0
        write(*,'(a6,i9,i5,12i5)') 'ushcn ',id0,iyr0,itmp0
        read(1,'(i3,i9,i4,12i5)',end=100) icc0,id0,iyr0,itmp0 ! USHCN
        if(iddiff.lt.id0) read(3,*) iddiff,dif
        if(id.lt.id0) then ! until USHCN overtakes
          write (12,'(i3,i9.9,i4,12i5)') icc,id,iyr,itmp
          go to 20
        end if
        if(id.gt.id0.or.iyr.gt.iyr0) go to 30
      end if
! or catches up: id=id0
C*** skip early years not present in USHCN data
      if(iyr.lt.iyr0) then
!       write (12,'(i3,i9.9,i4,12i5)') icc,id,iyr,itmp
        write(*,'(a11,i9.9,i5,12i5)') 'skip v2.mn ',id ,iyr ,itmp
        go to 20
      end if
C*** replace GHCN by USHCN data if present
      if(iddiff.ne.id0) stop 'wrong iddiff'
      do m=1,12
        if(itmp0(m).gt.-9000) itmp0(m)=itmp0(m)-dif(m)
      end do
      write (12,'(i3,i9.9,i4,12i5)') icc0,id0,iyr0,itmp0
      go to 10
c**** No more USHCN data - copy data for remaining non-USHCN stations
  100 if(id.gt.id0.or.icc.ne.425)
     *  write(12,'(i3,i9.9,i4,12i5)') icc,id,iyr,itmp
  110 read(2,'(i3,i9,i4,12i5)',end=200) icc,id,iyr,itmp
      if(iyr.gt.iyrmax) iyrmax=iyr
      write(12,'(i3,i9.9,i4,12i5)') icc,id,iyr,itmp
      go to 110
  200 write(20,*) iyrmax
      stop
      end
But you can’t deny SC20 was a cool period, and once it ended the temperatures moved up immediately, as you noted.
… and you can’t deny that SC19 (the strongest cycle ever recorded) was also during a cool period.
Ron de Haan (01:55:25) :
Looking for the switch that caused Glaciation within a period of 1 year?
http://1965gunner.blogspot.com/2008/08/last-ice-age-happened-in-less-than-year.html
When it’s 40 below outside, your heater is on the opposite side of the room, and you open the door to get some firewood, the temperature inside your house by the door drops precipitously.
Just pop a hole in the R-value of the atmosphere somewhere and plenty of heat will go rushing out. Like, say, at the pole.
To get back on topic:
http://www.leif.org/research/Ap%201844-2009.pdf
Ap may go down to 4 in the next few months before heading back up. Note that Ap for Odd-Even cycle transitions [like right now] is typically 25% lower than for Even-Odd cycle transitions. This is a consequence of the geometry of the interaction between the solar wind and the Earth’s magnetosphere.
There are but a few sources of heat on Earth.
One is the leftover heat of the gravitational collapse that formed the body.
Another would be tidal forces acting on the planet.
And there is the one that actually maintains the heat input to keep up the current equilibrium: the Sun.
What else is there?
Mess with the input, tinker with the R-value of the atmosphere, shade the place with volcanism, and you get a new net value that has to find equilibrium.
My point is that the oscillations may drive the place to keep the equilibrium that it seeks, but they cannot change the net heat value of the planet.
Radioactive decay in the core is the major one you have missed.
Heat input to the atmosphere from meteorite burn-up (has anyone quantified this with even a back-of-the-envelope estimate?).
Estimates for the mass of material that falls on Earth range from 100 to 1000 metric tons of meteorites per day. Most of this mass comes from dust-sized particles. If all that mass strikes the atmosphere at typical meteorite velocities of 20 km/sec, that is an awful lot of kinetic energy to be dissipated as heat!
Electrical currents induced in the core, mantle, and atmosphere by the effects of the solar wind and solar storms buffeting the magnetosphere.
Larry
Thanks for your feedback. In explanation, originally I had considered including phrases such as “… by qualified scientists” or “… by qualified individuals”, but felt that – if disclosure were ever mandated – it would give an out to those not wishing to disclose by claiming that any potential reviewer was “not qualified”.
To answer my own question, I took a shot at a ballpark calculation. I hope I did not make any silly error here:
=====================
Meteorite kinetic energy at 20 km/sec ~= 200,000,000 joules/kg
100 – 1000 metric tons/day is the estimated meteorite mass striking the Earth
100 metric tons/day = 100,000 kg/day; 1000 metric tons/day = 1,000,000 kg/day
At 100,000 kg/day x 200,000,000 joules/kg = 2×10^13 joules/day ~= 2.31×10^8 watts of average power
The Earth’s cross-sectional area = π×radius^2 ~= 49.3 million square miles (1.28×10^14 m^2)
Solar insolation on the atmosphere ~= 1,366 watts per square meter, or about 1.75×10^17 W intercepted
2.31×10^8 W / 1.75×10^17 W ~= 1.3×10^-9 ~= 0.00000013% of solar energy input
At 1,000,000 kg/day the average power is ~= 2.31×10^9 watts
2.31×10^9 W / 1.75×10^17 W ~= 1.3×10^-8 ~= 0.0000013% of solar energy input.
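For anyone who wants to check or re-run the arithmetic, here is a small Python script using the same rough assumptions quoted above (20 km/sec entry velocity, 100 to 1000 metric tons/day mass flux, 1,366 W/m^2 insolation):

# Back-of-envelope check: meteoric kinetic energy input vs. solar input.
import math

V = 20e3                                  # entry velocity, m/s
KE_PER_KG = 0.5 * V ** 2                  # ~2e8 J/kg
R_EARTH = 6.371e6                         # Earth radius, m
S0 = 1366.0                               # solar constant, W/m^2
SECONDS_PER_DAY = 86400.0

solar_power = S0 * math.pi * R_EARTH ** 2  # ~1.75e17 W intercepted

for tons_per_day in (100.0, 1000.0):
    meteor_power = tons_per_day * 1000.0 * KE_PER_KG / SECONDS_PER_DAY
    ratio = meteor_power / solar_power
    print("%6.0f t/day -> %.2e W average, %.1e (%.7f%%) of solar input"
          % (tons_per_day, meteor_power, ratio, ratio * 100.0))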
Larry
hotrod (11:14:43) :
Lots of water every year falls on us from “out there”.
Also-
Plasma. Bubbles, wind, stream, what you will;
http://mcf.gsfc.nasa.gov/Fok/PUA1911.pdf
http://fenyi.sci.klte.hu/publ/Praga2002.pdf
http://www.agu.org/pubs/crossref/2003/2002JA009690.shtml
http://www.souledout.org/magfieldsaudio/magfields.html#plasma
E.M.Smith (07:37:18) :
The part I’m not sure about is the precedence order of evaluation and the overall impact on final validity of the temp in C.tenths. The code is:
if(temp.gt.-99.00) itemp(m)=nint( 50.*(temp-32.)/9 ) ! F->.1C
Which looks for a ‘data missing flag’ of -99 and as long as valid data are in the field, converts the temp (a real data type in -xxx.x F format) into itemp (an integer data type in XXXX C format, with the last digit being tenths, for each month m [do loop counter is m 1,12] )
I learned FORTRAN under F77 rules long ago and don’t remember a ‘nint’ function (this code is F90), but I do remember that 9 is an int while 9. is a float. So we have -32.(float) and 50.*(float) but /9 int …
This all leaves me a bit unsure what the actual computation would be (with type conversions) and what the actual outcome would be. When I learned it, the type conversions were automatic (and often unplanned…) while ‘nint’ looks like a type conversion function wrapper. Any ideas welcome…
NINT is a standard Fortran (also F77) intrinsic function meaning ‘nearest integer’. It is used for conversion between floating point numbers and integers. It rounds up or down depending on the value, with halves rounded away from zero. A similar intrinsic function, INT, simply truncates the decimals. So for positive values NINT(value) = INT(value + 0.5), and for negative values NINT(value) = INT(value - 0.5).
Using an integer in the denominator is sloppy practice and may have led to different results in different implementations, but I guess it is safe to assume today that, since the numerator is a floating point number, the result will be as if the integer constant 9 were the floating point constant 9. So the only rounding is in the NINT.
So the result is a Celsius × 10 integer, meaning the value is rounded to the nearest 0.1C.
This tiny detail illustrates why and how it can be hard to read other people’s code, and why an open source project might be a good idea.
Referring to a previous comment someone made, I think the idea would not be to recreate GISS, but to make something that people think makes sense and that is well documented, transparent, and easily available for inspection by interested parties. Then if the outcome of such algorithms doesn’t agree with GISS etc., you have an opportunity to find out why.
E.M.Smith (07:37:18) :
http://www.nsc.liu.se/~boein/f77to90/a5.html says nint is in F77. It rounds to the nearest integer. (int() rounds toward 0: down for positive numbers, up for negative. Yuck.)
Most languages will take “real op int” and convert the int to a real.
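For anyone without a Fortran compiler handy, the two intrinsics are easy to emulate and test in Python. The nint() and itrunc() below are emulations of NINT and INT, not the intrinsics themselves:

# Emulations of the Fortran intrinsics under discussion. NINT rounds to
# the nearest integer, halves away from zero; INT truncates toward zero.
import math

def nint(x):
    return math.floor(x + 0.5) if x >= 0 else math.ceil(x - 0.5)

def itrunc(x):                             # Fortran INT
    return math.trunc(x)

# The GISTEMP line: itemp = nint(50.*(temp-32.)/9). The integer 9 is
# promoted to a real because the numerator is real, so the division is
# floating point and the only rounding happens in NINT.
for temp_f in (32.0, 98.6, -40.0, 50.5):
    print(temp_f, "F ->", nint(50.0 * (temp_f - 32.0) / 9.0), "tenths of a degree C")

print(nint(2.5), itrunc(2.5))              # 3 2
print(nint(-2.5), itrunc(-2.5))            # -3 -2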
Then the meteoric mass falling on Earth is along the lines of AGW CO2… minuscule by comparison.
The radioactive decay should be rather constant (and falling slowly over time).
Electrical currents induced in the core, mantle and atmosphere by the effects of solar wind and solar storms buffeting the magnetosphere.
This one is clearly solar-forced, or driven by the lack of solar forcing.
Just seems to me that Solar is our big gun in this process…so far.
I’m positive that if we keep digging at it and exploring how that process works, we shall be able to define it.
Once that is done, we will then know how much of whatever discrepancy remains is still missing, like the way the missing mass has driven more science to find dark energy and dark matter.
Although it is obviously derived from the sun, it is a “hidden input”, as most people focus on the direct radiant energy from the sun (light and IR) when they are talking about “solar input”. Recently we have been talking about indirect effects like changes in cloud development due to the solar effects on the magnetosphere, but I have not heard anyone try to tally up a value for how much energy is coupled directly into the atmosphere itself by magnetic/electrical effects when they are talking about solar input.
I have no clue how big the actual energy inputs might be, but induction heating is used to melt scrap iron; if strong coupling exists, it is conceivable that large amounts of energy could be transferred in a non-obvious way. It is possible that the electrical heating induced by solar storms, both in the body of the earth and in the atmosphere, might be significant compared to the direct radiant heating from the solar irradiance.
I have never seen any sort of limits placed on this mechanism, or any measurements of it made.
The ground currents developed by lightning strikes are significant and obviously produce local heating. Likewise, I am not aware of anyone trying to put a value on the direct heating of the atmosphere by lightning. This might, in the grander scheme of things, be a gnat’s sneeze in a hurricane, or the worldwide power dissipation of thunderstorms might be pretty impressive. If nothing else, it is a way of moving energy from high altitudes to low altitudes, as static electrical energy harvested from rising water crystals eventually gets released as lightning inside the cloud and to ground as these space charges try to equalize.
I know that the ground currents induced by solar storms can also be significant! I would not want to pay the power bill for the northern lights if they were being powered off the grid. That power eventually degrades to heat! How much heating is generated in the atmosphere by the kinetic energy of ions and electrical currents in the auroral displays?
Could those sudden stratospheric heating episodes be due to electrical energy being degraded to heat?
People forget how little we know of the electromagnetic activity in the atmosphere. It was just a few years ago that “sprites” and “elves” were discounted as unproven or as optical illusions. They were found to be impressively powerful events.
http://elf.gi.alaska.edu/
I submit that it is at least possible that a considerable amount of energy is coupled directly into the atmosphere and surface of the earth by magnetic effects. Perhaps they are not included in the energy accounting because no one has thought to quantify them and add them to the Earth’s energy budget. If they are trivial, then, like Edison learning from tests that showed what did not work, we have eliminated one other possible mode of power transfer to the Earth’s weather systems.
Larry
The USA now has a President that made the following statement:
“Because the truth is that promoting science isn’t just about providing resources — it’s about protecting free and open inquiry. It’s about ensuring that facts and evidence are never twisted or obscured by politics or ideology.” Barack Obama
And recently he announced that he will communicate with all Americans directly via e-mail.
What could go wrong?
If you want to know whether Obama is really listening to the American public, or if he is listening to Al Gore, follow what happens to NAIS (animal ID). The USDA wants to regulate farming with fines of up to $500,000 and 10 years in jail, so the issue is almost as critical as the carbon dioxide tax. Out of the top ten or so Ag issues, three were “Stop NAIS” and a couple more were support for small farms / farm freedom. “Protect Our Food Supply – Stop NAIS!” actually made it into the top 25 despite a complete media blackout and the lies spread by the USDA.
http://www.change.org/ideas
http://libertyark.net/
This gives me hope that the internet can, with the help of the sun, counter the “Global Warming Hoax” too. Al Gore is also anti-American farming.
This comes from the Ag Journal, Billings, Montana: “At a recent ceremony at the White House, Vice President and presidential candidate Al Gore let slip what many have long believed was his real intention as regards to U.S. agriculture.
“While presenting a national award to a Colorado FFA member, Gore asked the student what his/her life plans were. Upon hearing that the FFA member wanted to continue on in production agriculture, Gore reportedly replied that the young person should develop other plans because our production agriculture is being shifted out of the U.S. to the Third World.”
http://showcase.netins.net/web/sarahb/farm/
I wonder what Gore thinks American citizens are supposed to do for a living once he shuts down manufacturing and farming and outsources computer programming?
To Robert Bateman (10:14)
One potentially significant source of heat (which is almost never talked about and could be an alternative theory to the greenhouse effect itself) is “gravitational compression.”
What makes a star heat up and compress so that nuclear fusion is possible?
What makes Venus so hot? Why is Jupiter 20,000K at its core? Why is Mars colder even though it has so much CO2 in its atmosphere?
The density of the atmosphere itself produces gravitational compression, and this produces heat (beyond the compression heating that exists in the mantle and in the core). The weight of the atmosphere itself produces warming and acts as a non-greenhouse blanket keeping the heat in.
The density of the atmosphere itself is thus a heating and heat-trapping mechanism (disregarding any impact from the Sun).
This theory has more explanatory power for the various temperatures seen around the solar system and in the stars. Of course, EM radiation from the Sun comes in and leaves the planet, and the rates at which this happens are modulated by the absorption frequencies of the different molecules, so the greenhouse effect also has to play a role.