This comment is from rgbatduke, who is Robert G. Brown of the Duke University Physics Department, on the "No significant warming for 17 years 4 months" thread. It has gained quite a bit of attention because it speaks clearly to the truth. So that all readers can benefit, I'm elevating it to a full post.
Saying that we need to wait for a certain interval in order to conclude that “the models are wrong” is dangerous and incorrect for two reasons. First — and this is a point that is stunningly ignored — there are a lot of different models out there, all supposedly built on top of physics, and yet no two of them give anywhere near the same results!
This is reflected in the graphs Monckton publishes above, where the AR5 trend line is the average over all of these models and in spite of the number of contributors the variance of the models is huge. It is also clearly evident if one publishes a “spaghetti graph” of the individual model projections (as Roy Spencer recently did in another thread) — it looks like the frayed end of a rope, not like a coherent spread around some physics supported result.
Note the implicit swindle in this graph — by forming a mean and standard deviation over model projections and then using the mean as a “most likely” projection and the variance as representative of the range of the error, one is treating the differences between the models as if they are uncorrelated random variates causing deviation around a true mean!
Say what?
This is such a horrendous abuse of statistics that it is difficult to know how to begin to address it. One simply wishes to bitch-slap whoever it was that assembled the graph and ensure that they never work or publish in the field of science or statistics ever again. One cannot generate an ensemble of independent and identically distributed models that have different code. One might, possibly, generate a single model that generates an ensemble of predictions by using uniform deviates (random numbers) to seed “noise” (representing uncertainty) in the inputs.
What I’m trying to say is that the variance and mean of the “ensemble” of models are completely meaningless, statistically, because the inputs do not possess the most basic properties required for a meaningful interpretation. They are not independent, their differences are not based on a random distribution of errors, and there is no reason whatsoever to believe that the errors or differences are unbiased (given that the only way humans can generate unbiased anything is through the use of e.g. dice or other objectively random instruments).
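To see why, here is a minimal sketch, with invented numbers rather than any actual model output, of how a mean and standard deviation taken over mutually correlated, systematically biased "models" can confidently exclude the true value:

```python
import numpy as np

rng = np.random.default_rng(0)

true_value = 1.0  # the (hypothetical) quantity every "model" is trying to predict

# Ten "models" that share a common systematic bias plus small individual quirks.
# They are neither independent nor unbiased, so averaging them is not sampling
# a distribution centered on the truth.
common_bias = 1.5
model_predictions = true_value + common_bias + 0.3 * rng.standard_normal(10)

mean = model_predictions.mean()
spread = model_predictions.std(ddof=1)

print(f"ensemble mean = {mean:.2f} +/- {spread:.2f}")
print(f"true value    = {true_value:.2f}")
# The "uncertainty band" mean +/- spread comfortably excludes the truth, because
# the spread measures disagreement between models, not error relative to reality.
```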
So why buy into this nonsense by doing linear fits to a function — global temperature — that has never in its entire history been linear, although of course it has always been approximately smooth so one can always do a Taylor series expansion in some sufficiently small interval and get a linear term that — by the nature of Taylor series fits to nonlinear functions — is guaranteed to fail if extrapolated as higher order nonlinear terms kick in and ultimately dominate? Why even pay lip service to the notion that R² or p for a linear fit, or for a Kolmogorov-Smirnov comparison of the real temperature record and the extrapolated model prediction, has some meaning? It has none.
Let me repeat this. It has no meaning! It is indefensible within the theory and practice of statistical analysis. You might as well use a ouija board as the basis of claims about the future climate history as the ensemble average of different computational physical models that do not differ by truly random variations and are subject to all sorts of omitted variable, selected variable, implementation, and initialization bias. The board might give you the right answer, might not, but good luck justifying the answer it gives on some sort of rational basis.
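To make the Taylor-series point concrete, here is a toy example (illustrative code only, nothing to do with the actual temperature record): fit a straight line to a short window of a smooth nonlinear signal, then extrapolate it.

```python
import numpy as np

# The linear (first-order Taylor) term fits well inside a small window
# and fails badly when extrapolated beyond it.
t = np.linspace(0, 10, 500)
signal = np.sin(t)                      # stand-in for any smooth nonlinear record

window = t < 1.0                        # the "sufficiently small interval"
slope, intercept = np.polyfit(t[window], signal[window], 1)

extrapolated = slope * t + intercept
error_inside = np.abs(extrapolated[window] - signal[window]).max()
error_at_end = abs(extrapolated[-1] - signal[-1])

print(f"max error inside the fit window: {error_inside:.3f}")
print(f"error when extrapolated to t=10: {error_at_end:.3f}")
```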
Let’s invert this process and actually apply statistical analysis to the distribution of model results Re: the claim that they all correctly implement well-known physics. For example, if I attempt to do an a priori computation of the quantum structure of, say, a carbon atom, I might begin by solving a single electron model, treating the electron-electron interaction using the probability distribution from the single electron model to generate a spherically symmetric “density” of electrons around the nucleus, and then performing a self-consistent field theory iteration (resolving the single electron model for the new potential) until it converges. (This is known as the Hartree approximation.)
Somebody else could say “Wait, this ignores the Pauli exclusion principle and the requirement that the electron wavefunction be fully antisymmetric.” One could then make the (still single electron) model more complicated and construct a Slater determinant to use as a fully antisymmetric representation of the electron wavefunctions, generate the density, and perform the self-consistent field computation to convergence. (This is Hartree-Fock.)
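For readers who want the flavor of the self-consistent field idea, here is a deliberately crude caricature of the iterate-until-nothing-changes loop; the update rules below are made up for illustration, not actual Hartree or Hartree-Fock equations.

```python
# Caricature of a self-consistent field (SCF) loop: guess a "density",
# compute the "potential" it implies, re-solve for a new density, and
# repeat until the answer stops changing.

def implied_density(potential):
    # placeholder for "solve the single-electron problem in this potential"
    return 1.0 / (1.0 + potential)

def implied_potential(density):
    # placeholder for "compute the mean field generated by this density"
    return 0.5 * density

density = 1.0  # initial guess
for iteration in range(100):
    potential = implied_potential(density)
    new_density = implied_density(potential)
    if abs(new_density - density) < 1e-10:
        print(f"converged after {iteration + 1} iterations: density = {new_density:.6f}")
        break
    density = new_density
```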
A third party could then note that this still underestimates what is called the “correlation energy” of the system, because treating the electron cloud as a continuous distribution through which electrons move ignores the fact that individual electrons strongly repel and hence do not like to get near one another. Both of the former approaches underestimate the size of the electron hole, and hence they make the atom “too small” and “too tightly bound”. A variety of schemes are proposed to overcome this problem — using a semi-empirical local density functional being probably the most successful.
A fourth party might then observe that the Universe is really relativistic, and that by ignoring relativity theory and doing a classical computation we introduce an error into all of the above (although it might be included in the semi-empirical LDF approach heuristically).
In the end, one might well have an “ensemble” of models, all of which are based on physics. In fact, the differences are also based on physics — the physics omitted from one try to another, or the means used to approximate and try to include physics we cannot include in a first-principles computation (note how I sneaked a semi-empirical note in with the LDF; although one can derive some density functionals from first principles (e.g. the Thomas-Fermi approximation), they usually don’t do particularly well because they aren’t valid across the full range of densities observed in actual atoms). Note well, doing the precise computation is not an option. We cannot solve the many body atomic state problem in quantum theory exactly any more than we can solve the many body problem exactly in classical theory or the set of open, nonlinear, coupled, damped, driven, chaotic Navier-Stokes equations in a non-inertial reference frame that represent the climate system.
Note well that solving for the exact, fully correlated nonlinear many electron wavefunction of the humble carbon atom — or the far more complex Uranium atom — is trivially simple (in computational terms) compared to the climate problem. We can’t compute either one, but we can come a damn sight closer to consistently approximating the solution to the former compared to the latter.
So, should we take the mean of the ensemble of “physics based” models for the quantum electronic structure of atomic carbon and treat it as the best prediction of carbon’s quantum structure? Only if we are very stupid or insane or want to sell something. If you read what I said carefully (and you may not have — eyes tend to glaze over when one reviews a year or so of graduate quantum theory applied to electronic structure in a few paragraphs, even though I left out perturbation theory, Feynman diagrams, and ever so much more :-) you will note that I cheated — I slipped in a semi-empirical method.
Which of these is going to be the winner? LDF, of course. Why? Because the parameters are adjusted to give the best fit to the actual empirical spectrum of Carbon. All of the others are going to underestimate the correlation hole, and their errors will be systematically deviant from the correct spectrum. Their mean will be systematically deviant, and by weighting Hartree (the dumbest reasonable “physics based approach”) the same as LDF in the “ensemble” average, you guarantee that the error in this “mean” will be significant.
Suppose one did not know (as, at one time, we did not know) which of the models gave the best result. Suppose that nobody had actually measured the spectrum of Carbon, so its empirical quantum structure was unknown. Would the ensemble mean be reasonable then? Of course not. I presented the models in the way physics itself predicts improvement — adding back details that ought to be important that are omitted in Hartree. One cannot be certain that adding back these details will actually improve things, by the way, because it is always possible that the corrections are not monotonic (and eventually, at higher orders in perturbation theory, they most certainly are not!). Still, nobody would pretend that the average of a theory with an improved theory is “likely” to be better than the improved theory itself, because that would make no sense. Nor would anyone claim that diagrammatic perturbation theory results (for which there is a clear a priori derived justification) are necessarily going to beat semi-heuristic methods like LDF, because in fact they often do not.
What one would do in the real world is measure the spectrum of Carbon, compare it to the predictions of the models, and then hand out the ribbons to the winners! Not the other way around. And since none of the winners is going to be exact — indeed, for decades and decades of work, none of the winners was even particularly close to observed/measured spectra in spite of using supercomputers (admittedly, supercomputers that were slower than your cell phone is today) to do the computations — one would then return to the drawing board and code entry console to try to do better.
Can we apply this sort of thoughtful reasoning to the spaghetti snarl of GCMs and their highly divergent results? You bet we can! First of all, we could stop pretending that “ensemble” mean and variance have any meaning whatsoever by not computing them. Why compute a number that has no meaning? Second, we could take the actual climate record from some “epoch starting point” — one that does not matter in the long run, and we’ll have to continue the comparison for the long run because in any short run from any starting point noise of a variety of sorts will obscure systematic errors — and we can just compare reality to the models. We can then sort out the models by putting (say) all but the top five or so into a “failed” bin and stop including them in any sort of analysis or policy decisioning whatsoever unless or until they start to actually agree with reality.
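As a sketch of what that sorting could look like mechanically, with made-up arrays standing in for the observed record and the model runs (in practice one would use the real, baseline-aligned data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins: a 30-year observed anomaly record and 20 hypothetical model runs
# over the same period.
years = 30
observed = 0.01 * np.arange(years) + 0.1 * rng.standard_normal(years)
models = {
    f"model_{i:02d}": (
        0.01 * np.arange(years)
        + 0.02 * i * np.arange(years) / years   # progressively warmer-running models
        + 0.1 * rng.standard_normal(years)
    )
    for i in range(20)
}

# Score each model by root-mean-square error against observations,
# keep only the best five, and bin the rest.
def rmse(prediction, truth):
    return float(np.sqrt(np.mean((prediction - truth) ** 2)))

ranked = sorted(models, key=lambda name: rmse(models[name], observed))
keepers, failed = ranked[:5], ranked[5:]

print("keep for further work:", keepers)
print("mothball until they agree with reality:", failed)
```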
Then real scientists might contemplate sitting down with those five winners and meditate upon what makes them winners — what makes them come out the closest to reality — and see if they could figure out ways of making them work even better. For example, if they are egregiously high and diverging from the empirical data, one might consider adding previously omitted physics, semi-empirical or heuristic corrections, or adjusting input parameters to improve the fit.
Then comes the hard part. Waiting. The climate is not as simple as a Carbon atom. The latter’s spectrum never changes, it is a fixed target. The former is never the same. Either one’s dynamical model is never the same and mirrors the variation of reality or one has to conclude that the problem is unsolved and the implementation of the physics is wrong, however “well-known” that physics is. So one has to wait and see if one’s model, adjusted and improved to better fit the past up to the present, actually has any predictive value.
Worst of all, one cannot easily use statistics to determine when or if one’s predictions are failing, because damn, climate is nonlinear, non-Markovian, chaotic, and is apparently influenced in nontrivial ways by a world-sized bucket of competing, occasionally cancelling, poorly understood factors. Soot. Aerosols. GHGs. Clouds. Ice. Decadal oscillations. Defects spun off from the chaotic process that cause global, persistent changes in atmospheric circulation on a local basis (e.g. blocking highs that sit out on the Atlantic for half a year) that have a huge impact on annual or monthly temperatures and rainfall and so on. Orbital factors. Solar factors. Changes in the composition of the troposphere, the stratosphere, the thermosphere. Volcanoes. Land use changes. Algae blooms.
And somewhere, that damn butterfly. Somebody needs to squash the damn thing, because trying to ensemble average a small sample from a chaotic system is so stupid that I cannot begin to describe it. Everything works just fine as long as you average over an interval short enough that you are bound to a given attractor, oscillating away; things look predictable and then — damn, you change attractors. Everything changes! All the precious parameters you empirically tuned to balance out this and that for the old attractor suddenly require new values to work.
This is why it is actually wrong-headed to acquiesce in the notion that any sort of p-value or R-squared derived from an AR5 mean has any meaning. It gives up the high ground (even though one is using it for a good purpose, trying to argue that this “ensemble” fails elementary statistical tests). But statistical testing is a shaky enough theory as it is, open to data dredging and horrendous error alike, and that’s when it really is governed by underlying IID processes (see “Green Jelly Beans Cause Acne”). One cannot naively apply a criterion like rejection if p < 0.05, and all that means under the best of circumstances is that the current observations are improbable given the null hypothesis at 19 to 1. People win and lose bets at this level all the time. One time in 20, in fact. We make a lot of bets!
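A quick simulation of that "one time in 20" point, using a garden-variety t-test on two samples that really do come from the same distribution (illustrative only):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Under a true null hypothesis (both samples drawn from the same distribution),
# a p < 0.05 "rejection" still turns up roughly one time in twenty.
trials = 10_000
false_rejections = 0
for _ in range(trials):
    a = rng.standard_normal(30)
    b = rng.standard_normal(30)
    _, p = stats.ttest_ind(a, b)
    if p < 0.05:
        false_rejections += 1

print(f"fraction of trials with p < 0.05: {false_rejections / trials:.3f}")
# Expected output: about 0.05, i.e. people "win" this bet one time in 20.
```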
So I would recommend — modestly — that skeptics try very hard not to buy into this and redirect all such discussions to questions such as why the models are in such terrible disagreement with each other, even when applied to identical toy problems that are far simpler than the actual Earth, and why we aren’t using empirical evidence (as it accumulates) to reject failing models and concentrate on the ones that come closest to working, while also not using the models that are obviously not working in any sort of “average” claim for future warming. Maybe they could hire themselves a Bayesian or two and get them to recompute the AR curves, I dunno.
It would take me, in my comparative ignorance, around five minutes to throw out all but the best 10% of the GCMs (which are still diverging from the empirical data, but arguably are well within the expected fluctuation range on the DATA side), sort the remainder into top-half models that should probably be kept around and possibly improved, and bottom half models whose continued use I would defund as a waste of time. That wouldn’t make them actually disappear, of course, only mothball them. If the future climate ever magically popped back up to agree with them, it is a matter of a few seconds to retrieve them from the archives and put them back into use.
Of course if one does this, the GCM predicted climate sensitivity plunges from the totally statistically fraudulent 2.5 C/century to a far more plausible and still possibly wrong ~1 C/century, which — surprise — more or less continues the post-LIA warming trend with a small possible anthropogenic contribution. This large a change would bring out pitchforks and torches as people realize just how badly they’ve been used by a small group of scientists and politicians, how much they are the victims of indefensible abuse of statistics to average in the terrible with the merely poor as if they are all equally likely to be true with randomly distributed differences.
rgb
Steve Oregon says:
June 19, 2013 at 10:36 am
Nick Stokes,….
Because at some point even you must acknowledge the pig is damn ugly.
>>>>>>>>>>>>>>>>>>
Don’t insult the pig.
angech says:
June 19, 2013 at 7:21 am
…
The chance of the earth warming up from year to year is 50.05 percent. Why? Because the current long range view shows that we are still very slowly warming over the last 20,000 years. Climate models should basically be random walk generators reverting to the mean…
This is one of those factoids that gets one accused of cherry picking. Twenty thousand years ago is the Last Glacial Maximum, well more like 19,000, but near enough. In fact the planet has been “cooling” for roughly the last 100 ky or so, warming for the last 20 ky, cooling for the last 8 ky, etc. It illustrates the reason that “trends” are not merely controversial but very likely meaningless in climate discussions, and completely irrelevant when attempting to tease out minor (global) anthropic influences.
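A toy numerical illustration of why window choice alone can flip the sign of a "trend" (a made-up sinusoidal record, nothing to do with actual paleoclimate data):

```python
import numpy as np

# Toy record: one slow full cycle (think glacial/interglacial swings) over
# 100 "kiloyears".  The sign of a fitted linear trend depends entirely on
# which window you choose, which is the cherry-picking problem in a nutshell.
t = np.linspace(0, 100, 1001)
record = np.sin(2 * np.pi * t / 100)

for start, stop in [(0, 25), (25, 75), (0, 100)]:
    window = (t >= start) & (t <= stop)
    slope, _ = np.polyfit(t[window], record[window], 1)
    print(f"window {start:>3}-{stop:<3} ky: fitted trend = {slope:+.4f} per ky")
```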
Tim Ball says (June 19, 2013 at 9:54 am): “O’Keefe and Kueter explained how a model works:”
Thanks for the reference, Tim. It looks like a good place to start.
I like this paragraph early on:
“A model is considered validated if it is developed using one set of data and its output is tested using another set of data. For example, if a climate model was developed using observations from 1901 to 1950, it could be validated by testing its predictions against observations from 1951 to 2000. At this time [2004], no climate model has been validated.” (emphasis mine)
It seems our political masters have been making policy based on chicken entrails. Again.
Let’s say you went to a garage sale and bought a box of rusted and broken toys for a penny. No child can play with them. They have zero play value. But one of them is (or was) a Buddy L. Another a Lionel 700E. Etc. Not a one of them works, but they may have some value to a collector.
It would seem these models have no value in predicting climate but they may hold value for those out to collect funding and/or power and influence.
Duster says: June 19, 2013 at 2:15 pm
>Nick Stokes says:
>But are they means of a controlled collection of runs from the same program? That’s different. I’m just asking for something really basic here. What are we talking about? Context? Where? Who? What did they say?
“Nick, this response is so disingenuous it should be embarrassing.”
How? RGB made a whole lot of specific allegations about statistical malpractice. We never heard the answers to those basic questions I asked in your quote. After a long rant about assuming iid variables, variances etc, all we get in this thread is reference to a few people talking about ensemble means. Which you’ll get in all sorts of fields – and as I’ve said here, a multimodel mean is even prominent in a featured WUWT post just two weeks ago.
I make this post as a regular reader of this blog but rare commentator, as my expertise in life has little to offer to the debate (other than an ability, as a fraud investigator, to recognise bluster and obfuscation as a defence mechanism by those who are the subject of a justified query into their past activities).
First, I must agree with Mr Courtney when he gives credit to Nick Stokes for his courteous attempts to answer the fusillade of well argued criticisms of the AGW/CAGW/Climate Change mantra. I do therefore wonder why, if the science is so “settled”, he is so alone in his mission. Surely, if those others who so resolutely believe that they have got it right are so confident, not only in their case but in the errors of the sceptics, they would enter the fray and offer argument to prove their detractors wrong. We have seen here in this post a refutation that is (whilst way outside my expertise) carefully argued by someone who is clearly knowledgeable in the field, yet (apart from Nick’s valiant but unsuccessful attempts) not one supporter of the allegedly overwhelming consensus group is willing to point out where its author goes wrong. Instead, we are offered weasel comments (I paraphrase) like “we don’t engage with trolls, it only encourages them”.
Message to the non-sceptics: Answer the criticisms and you might just gain some headway with the growing band of sceptics. Carry on like you are and your cause is doomed.
Nick,
Just because it’s a prominent WUWT post doesn’t mean it’s correct. Never knew you were such a fanboi.
It’s actually quite funny watching these modelling guys falling over each other.
HadCrud and GISS have been “adjusted” to create artificial warming trends.
The modellers calibrate to these 2 temperature series… (which probably means using high feedback factors)
Then they wonder why their projections are all way too high!!
Sorry guys, but ………. DOH !!!
And of course, if they want their models to start producing reasonable results, they have to first admit to themselves that HadCrud and GISS are severely tainted, and that there hasn’t really been much warming since about 1900.
Don’t see that happening somehow. 😉
Dr. Brown (and Nick and GregL): In AR4 WG1 section 10.1, the IPCC admits some of the problems you discuss, calling the collection of models they use an “ensemble of opportunity” and warning against statistical interpretation of the spread. The real problem is that these caveats are found only in the “fine print” of the report, instead of being placed in the caption of each relevant graph.
“Many of the figures in Chapter 10 are based on the mean and spread of the multi-model ensemble of comprehensive AOGCMs. The reason to focus on the multi-model mean is that averages across structurally different models empirically show better large-scale agreement with observations, because individual model biases tend to cancel (see Chapter 8). The expanded use of multi-model ensembles of projections of future climate change therefore provides higher quality and more quantitative climate change information compared to the TAR. Even though the ability to simulate present-day mean climate and variability, as well as observed trends, differs across models, no weighting of individual models is applied in calculating the mean. Since the ensemble is strictly an ‘ensemble of opportunity’, without sampling protocol, the spread of models does not necessarily span the full possible range of uncertainty, and a statistical interpretation of the model spread is therefore problematic. However, attempts are made to quantify uncertainty throughout the chapter based on various other lines of evidence, including perturbed physics ensembles specifically designed to study uncertainty within one model framework, and Bayesian methods using observational constraints.”
One of the first papers to properly explore ensembles of models was Stainforth et al, Nature 433, 403-406 (2005)
http://media.cigionline.org/geoeng/2005%20-%20Stainforth%20et%20al%20-%20Uncertainty%20in%20predictions%20of%20the%20climate%20response%20to%20rising%20GHGs.pdf
“Don’t insult the pig.” [Gail Combs at 2:23PM today]
Aww. That was sooo cute. No lipstick needed there, that’s for sure!
And who could insult this little fella?
So no chimpanzee can predict the weather.
But the average of ten chimpanzees gets it pretty much right!
In fact – the more chimpanzees, the more right they will be!
Yeah right..
Unlike Babe, however… the Fantasy Science Club’s models will NEVER get the sheep herded into the pen (i.e., a real problem solved or even plausibly defined!).
There will be no Farmer Hoggett smiling down on them saying, “That’ll do, pig. That’ll do.”
“Pig slop!” is all the Fantasy Science Club can ever expect or deserve to hear.
No matter how sincere and earnest they are.
The point is that nobody cares. The writers of these models care only about showing warming if people do not stop using energy, or use renewable energy, or pay more to their government, etc. They do not care if they are correct, or if anyone knows whether they are correct or incorrect. The average person on the street does not care because they could not understand the statistics, and they already know that humans are causing the planet to warm up or cause extreme weather or whatever they are calling it today. As I go through my everyday life, I notice that I am the only one that cares about the cost of electricity, gas, heating oil, etc. I am pretty sure I have more money than my friend, but she has her air conditioning on in 70 F temperatures while my furnace is still kicking in at night. My boss, who pays for carbon credits whenever he sends anything by UPS, has his air on 24/7 all spring, summer and fall in Philly. My sister from the UK, who believes this climate disruption stuff, uses air conditioning in the car at 72 F. I know she knows how to roll down a window.
There are times I think I am living in a nightmare because nobody could think up stuff this strange.
My take away is that we should examine the most important ‘piece’ and add on pieces to refine the models. Obviously it is no real scientist’s idea that we should assume CO2 dunnit and build all the models with that as the most important piece, which then has to be rationalized by exaggerating “lesser” cooling pieces.
An engineering approach: I think it would be worthwhile to design a model of an earth-size sphere of water with a spin, a distance from the sun, tilt, etc., and an undefined atmosphere that allows gradational latitudinal heating to occur, and see what happens to create currents and heat flow. After you have the general things that happen just because it is water being heated, you add some solid blocks that divide the water into oceans: a zig-zag, equal-width ocean oriented N-S (Atlantic), and a very large wedge-shaped ocean with the apex at the north end broadening out to the south (Pacific); leave a circular ocean space at the north pole end with a small connection to the Pacific and a wide connection to the Atlantic. Put a large circular block centered on the south pole and trim the blocks on the southern end to permit complete connection of the southern ocean to both the Pacific and Atlantic with a restriction…. Now “reheat” the model and see what the difference is. Then characterize the atmosphere, couple it with the ocean, and let’s see what we get, permitting evaporation and precipitation, winds and storms. We will better understand the magnitudes of the different influences of the major pieces. Finally we can play with this model: reduce the sun’s output, add orbital changes, magnetics, etc., etc. If we need a bit more warming, we can then perhaps add an increment from CO2 or some other likely parameter.
Frank says: June 19, 2013 at 3:49 pm
“Dr. Brown (and Nick and GregL): In AR4 WG1 section 10.1, the IPCC admits some of the problems you discuss, calling the collection of models they use an “ensemble of opportunity” and warning against statistical interpretation of the spread. The real problem is that these caveats are found only in the “fine print” of the report, instead of being placed in the caption of each relevant graph.”
It’s hardly in the fine print – it’s prominent in the introduction. It seems like a very sensible discussion.
People do use ensemble averages. That’s basically what the word ensemble means. What we still haven’t found is anything that remotely matches the rhetoric of this post.
Nick, how many of the climate models failed regarding their temperature projections? How many of the climate models succeeded regarding their temperature projections? Please don’t try to distract, answer the question.
Alan D McIntire says:
June 19, 2013 at 5:25 am
===============
Alan, there is a wide range of viewers at WUWT and quite a bit more than a few have a very good grasp of physics. Those that don’t probably never read past the very first part of Dr. Brown’s post. Some of us have read and reread what he wrote so as to not miss anything. Brown’s post is for those of us that can grasp the context or those that wish to. Kind of like WUWT University.
I believe that the level of knowledge of the majority of readers here at WUWT would be astounding by any measure. To be quite honest, it would be a boring site for one not so knowledgeable.
Nick, If you are an Aussie, you know the expression..
You can’t polish a t***, but you can sprinkle it with glitter!
I see unemployment rearing its ugly head for lazy Nintendo players. If only they went out more often and reduced pressing the ENTER button, writing up crap and getting generous funding. This whole fraud has been driven by money and the ‘hidden’ agenda of activists. The time for being nice is over. Should I be nice to con artists who defraud little old women? Of course not.
Frank says:
June 19, 2013 at 3:49 pm
because individual model biases tend to cancel (see Chapter 8).
===========
Sorry, but that is nonsense. If the model biases showed a random distribution around the true (observed) mean, then one could argue that the ensemble mean might have some value. However, that is not the case.
Look at the model predictions. They are all higher than observed temperatures. The odds of this being due to chance are so fantastic as to be impossible.
What the divergence between the ensemble mean and observation is showing you is that the models have a systematic warm bias, in no uncertain terms, and the odds that this is accidental are as close to zero as makes no difference.
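As a back-of-envelope check of that claim, treat it as a sign test and assume, purely hypothetically, an ensemble of 30 independent models with unbiased errors:

```python
# Sign-test arithmetic under a hypothetical ensemble of 30 models.
# If each model's bias were equally likely to fall above or below the observed
# trend (and the models were independent), the chance that ALL of them land on
# the warm side is 0.5 ** 30.
n_models = 30
p_all_warm = 0.5 ** n_models
print(f"P(all {n_models} models run warm by chance) = {p_all_warm:.2e}")
# Roughly 1e-9: for practical purposes, evidence of a shared systematic bias
# (or of non-independence), not of bad luck.
```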
As an old reader of climate audit I must admit I simply ignore Nick Stokes. 0 credibility. Always wrong, always missing the big picture.
“An engineering approach: I think it would be worthwhile to design a model of an earth-size sphere of water with a spin, a distance from the sun, tilt, etc., … and see what happens to create currents and heat flow.” [Gary Pearse at 4:22PM today]
A real model! Of course, only a genuine gearhead (that is a compliment, by the way), would come up with that. GREAT IDEA!
There’s an old globe-shaped, revolving sign that the Seattle P-I used (until it — thank You, Lord — went out of business a few years ago (2009?)). That might be used… . Have NO idea where it is, now. Size? Not sure, but, from what I recall when driving by it, it was likely about 10 feet in diameter. Yeah, I realize an old Union 76 ball or any ball (not have to have been an actual Earth globe) would do, but, it would LOOK cool. Maybe you could make use of the neon tubing used for the lettering, too. And the motor.
Well, sorry WUWT scholars for all that very basic level brainstorming about Pearse’s model. Kind of fun to think about how it could be implemented (at the most basic level, I mean — I have NO IDEA how to design the details!).
Gary Pearse says:
June 19, 2013 at 4:22 pm
My take away is that we should examine the most important ‘piece’ and add on pieces to refine the models. Obviously it is no real scientists idea that we should assume CO2 dunnit and build all the models with that as the most important piece that then that has to be rationalized by exaggerating “lesser” cooling pieces…..
>>>>>>>>>>>>>>>>>>>>>>>
HUH?
The whole point of IPCC was to show ‘CO2 dunnit’.
The IPCC mandate states:
The World Bank had put in an employee, Robert Watson, as the IPCC chair. Recently the World Bank produced a report designed to scare the feces out of people: WUWT discussion and link to The World Bank 4 degree report.
The World Bank went even further and was whispering in the US treasury department’s ear on how to SELL CAGW so the US government can collect more taxes (to pay the bankers interest on their funny money debt)
Now the TRUE face of the World Bank, and the elite
This graph shows World Bank funding for COAL fired plants in China, India and elsewhere went from $936 billion in 2009 to $4,270 billion in 2010. (20% of that money came from US tax payers.) What we tax payers ‘bought’ with those tax dollars was up to a 40% loss in wages (men with a high school education) and an approx. 23% unemployment rate.
Meanwhile, according to the IMF, “…the top earners’ share of income in particular has risen dramatically. In the United States the share of the top 1 percent has close to tripled over the past three decades, now accounting for about 20 percent of total U.S. income…” Seems all those solar farms and windmills produce lots of $$$ for the top 1%, because the USA sure isn’t producing much of anything else. U.S.-based industry’s share of total nonfarm employment has now sunk to 8.82 percent: Shadow Government Statistics link 1 (explanation) and link 2.
President Obama will face a continuing manufacturing recession…. The brightest face that can reasonably be put on this news is that U.S.-based manufacturing is starting to look like the rest of the American economy – barely slogging along. If you look at the Alternate Gross Domestic Product Chart, that is not cheery news.
However it does make the financiers very happy.
Smart Meters are needed so power companies can shut down the power to your house when the wind stops blowing or a cloud passes the sun.
Carbon Trading is a fraud that produces nothing but poverty. It does not produce a single penny of wealth and instead acts as a short circuit across the advancement and wealth of an entire civilization.
Dr. Brown’s argument is very elegant, but there is nothing like telling a citizen that the wealthy elite wants to pick his pocket to get his attention, no matter what his political point of view. (Unless of course he is on the receiving end of the suction hose.)
Gary Hladik says:
June 19, 2013 at 2:39 pm
“A model is considered validated if it is developed using one set of data and its output is tested using another set of data.”
==========
The “hidden data” approach is routinely used in computer testing. Divide the data in half, train the model on one half, and see if it can predict the other, missing half better than chance. This is so fundamental to computer science that it is a given.
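A minimal sketch of that split-sample idea, with a made-up temperature record and a trivial straight-line "model" standing in for a real GCM:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy stand-in for the 1901-1950 / 1951-2000 split described above:
# calibrate on the first half of a record, then check whether the model
# predicts the held-out second half better than a trivial baseline.
years = np.arange(1901, 2001)
record = 0.005 * (years - 1901) + 0.15 * rng.standard_normal(years.size)

train = years <= 1950
test = ~train

# "Model": a straight line fitted only to the training half.
coeffs = np.polyfit(years[train], record[train], 1)
prediction = np.polyval(coeffs, years[test])

# Baseline: just predict the training-period mean.
baseline = np.full(test.sum(), record[train].mean())

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

print(f"model RMSE on held-out half   : {rmse(prediction, record[test]):.3f}")
print(f"baseline RMSE on held-out half: {rmse(baseline, record[test]):.3f}")
# A model only counts as validated if it beats the trivial baseline on data
# it never saw during calibration.
```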
It is surprising this has never been done with climate models. Except of course that it is highly likely that it has been done. And the results will have shown that the models have no predictive skill. Which is why these sorts of results have never been published and why instead we have ensemble means instead of validated models.
The simple fact is that there is no known computer algorithm that can compute the hidden data, except in the case of trivial problems, in anything less than the lifetime of the universe.