Guest essay: Energy Matters
In geology we use computer models to simulate complex processes. A good example is 4D simulation of fluid flow in oil and gas reservoirs. These reservoir models are likely every bit as complex as computer simulations of Earth’s atmosphere. An important part of the modelling process is to compare model realisations with what actually comes to pass after oil or gas production has begun. This is called history matching. At the outset the models are always wrong, but as more data is gathered they are updated and refined to the point that they have skill in hindcasting what just happened and forecasting what the future holds. This informs the commercial decision-making process.
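As a minimal sketch of the history-matching idea, here is a toy Python example with invented production figures; it illustrates the principle of ranking model realisations against observed history, not any particular reservoir simulator.

```python
import numpy as np

def misfit(simulated, observed):
    """Root-mean-square mismatch between a model realisation and the observed history."""
    return np.sqrt(np.mean((simulated - observed) ** 2))

# Invented observed production history (e.g. annual oil rate) and three model realisations.
observed = np.array([102.0, 98.5, 95.0, 91.2, 88.0])
realisations = {
    "low":  np.array([101.0, 99.0, 96.5, 93.0, 90.0]),
    "best": np.array([103.0, 97.0, 94.0, 90.5, 87.5]),
    "high": np.array([108.0, 104.0, 101.0, 98.0, 95.0]),
}

# History matching: rank realisations by how well they reproduce what happened,
# then carry the best-fitting ones forward for forecasting.
ranked = sorted(realisations, key=lambda name: misfit(realisations[name], observed))
for name in ranked:
    print(f"{name:5s} RMS misfit = {misfit(realisations[name], observed):.2f}")
```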
The IPCC (Intergovernmental Panel on Climate Change) has now published five major assessment reports, beginning with the First Assessment Report (FAR) in 1990. This provides an opportunity to compare what was forecast with what has come to pass. Examining past reports is quite enlightening since it reveals what the IPCC has learned in the last 24 years.
I conclude that nothing has been learned other than how to obfuscate, mislead and deceive.
Figure 1 Temperature forecasts from the FAR (1990). Is this the best forecast the IPCC has ever made? It is clearly stated in the caption that each model uses the same emissions scenario. Hence the differences between the Low, Best and High estimates are down to different physical assumptions such as climate sensitivity to CO2. Holding the key variable (the CO2 emissions trajectory) constant allows the reader to see how different scientific judgements play out. This is the correct way to do it. All models are initiated in 1850 and by the year 2000 they already display significant divergence. This is what should happen. So how does this compare with what came to pass and with subsequent IPCC practice?
I am aware that many others will have carried out this exercise before, and in a much more sophisticated way than I do here. The best example I know of is by Roy Spencer [1], who produced this splendid chart, which also drew some criticism.
Figure 2 Comparison of multiple IPCC models with reality compiled by Roy Spencer. The point that reality tracks along the low boundary of the models has been made many times by IPCC sceptics. The only scientists that this reality appears to have escaped are those attached to the IPCC.
My approach is much simpler and cruder. I have simply cut and pasted IPCC graphics into Excel charts where I compare the IPCC forecasts with the HadCRUT4 temperature reconstruction. As we shall see, the IPCC have an extraordinarily lax approach to temperature datums, and in each example a different adjustment has to be made to HadCRUT4 to make it comparable with the IPCC framework.
Figure 3 Comparison of the FAR (1990) temperature forecasts with HadCRUT4. HadCRUT4 data was downloaded from WoodForTrees [2] and annual averages calculated.
Figure 3 shows how the temperature forecasts from the FAR (1990) [3] compare with reality. It should be quite clear that the Low model is the one that lies closest to the reality of HadCRUT4, while the High model is already running about 1.2˚C too warm in 2013. I cannot easily find the parameters used to define the Low, Best and High models, but the report states that a range of climate sensitivities from 1.5 to 4.5˚C is used.
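For anyone who wants to repeat the exercise, the processing amounts to little more than the Python sketch below. The file name is a placeholder for a locally saved two-column (decimal year, monthly anomaly) export of HadCRUT4, and the offset shown is simply the value used for the FAR comparison; as discussed below, a different offset is needed for each IPCC chart.

```python
import numpy as np

# Assumed local file: two columns (decimal year, monthly anomaly in deg C),
# e.g. as saved from the WoodForTrees raw-data link.
year, anomaly = np.loadtxt("hadcrut4_monthly.txt", unpack=True)

# Collapse monthly values to calendar-year means.
years = np.unique(np.floor(year).astype(int))
annual = np.array([anomaly[np.floor(year).astype(int) == y].mean() for y in years])

# Shift onto the datum of the IPCC chart being compared against.
# The offset differs from report to report (+0.5, -0.6, +0.6, -0.3 in the text).
OFFSET = 0.5  # placeholder: the value used for the FAR (1990) comparison
aligned = annual + OFFSET

for y, t in zip(years[-5:], aligned[-5:]):
    print(y, round(t, 2))
```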
Figure 4 The TAR (2001) introduced the hockey stick. The observed temperature record is spliced onto the proxy record and the model record is spliced onto the observed record, and no opportunity to examine the veracity of the models is offered. But 13 years have since passed and we can see how reality compares with the models in that very short time period.
I could not find a summary of the Second Assessment Report (SAR) from 1995 and so jump to the Third Assessment Report (TAR) from 2001 [4]. This was the year (I believe) that the hockey stick was born (Figure 4). In the imaginary world of the IPCC, Northern Hemisphere temperatures were constant from 1000 to 1900 AD with not the faintest trace of the Medieval Warm Period or the Little Ice Age, periods in which real people either prospered or died by the million. The actual temperature record is spliced onto the proxy record and the model world is spliced onto that to create a picture of future temperature catastrophe. So how does this compare with reality?
Figure 5 From 1850 to 2001 the IPCC background image is plotting observations (not model output) that agree with the HadCRUT4 observations. Well done IPCC! The detail of what has happened since 2001 is shown in Figure 6. To have any value or meaning all of the models should have been initiated in 1850. We would then see that the majority are running far too hot by 2001.
Figure 5 shows how HadCRUT4 compares with the model world. The fit from 1850 to 2001 is excellent. That is because the background image is simply plotting observations in this period. I have nevertheless had to subtract 0.6˚C from HadCRUT4 to get it to match the observations, while a decade earlier I had to add 0.5˚C. The 250 year x-axis scale makes it difficult to see how models initiated in 2001 now compare with the 13 years of observations since. Figure 6 shows a blow-up of the detail.
Figure 6 The single vertical grid line is the year 2000. The blue line is HadCRUT4 (reality) moving sideways while all of the models are moving up.
The detailed excerpt illustrates the nature of the problem in evaluating IPCC models. While real world temperatures have moved sideways since about 1997 and all the model trends are clearly going up, there is really not enough time to evaluate the models properly. To be scientifically valid the models should have been run from 1850, as before (Figure 1), but they have not been. Had that been done, by 2001 they would have been widely divergent (as in 1990) and it would be easy to pick the winners. But they are conveniently brought together by initiating the models at around the year 2000. Scientifically this is bad practice.
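To see why models that share one emissions trajectory but differ in climate sensitivity must diverge when run from a common start date, here is a deliberately crude zero-dimensional sketch. It is not any IPCC model: it uses the standard logarithmic forcing approximation, an invented CO2 path and no ocean lag, so the numbers are illustrative only.

```python
import numpy as np

F2X = 5.35 * np.log(2.0)  # forcing per CO2 doubling, ~3.7 W/m^2

def warming(co2_ppm, sensitivity, co2_0=285.0):
    """Equilibrium warming for a CO2 path, ignoring ocean lag (toy model only)."""
    forcing = 5.35 * np.log(co2_ppm / co2_0)  # logarithmic forcing approximation
    return sensitivity * forcing / F2X

# One shared, invented CO2 trajectory from 1850 to 2100.
years = np.arange(1850, 2101)
co2 = 285.0 * np.exp(0.002 * (years - 1850))

# Same emissions path, three climate sensitivities -> diverging temperature paths.
for s in (1.5, 2.5, 4.5):
    t = warming(co2, s)
    print(f"S = {s} degC per doubling: warming by 2000 = {t[years == 2000][0]:.2f} degC, "
          f"by 2100 = {t[-1]:.2f} degC")
```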
Figure 7 IPCC future temperature scenarios from AR4, published in 2007. It seems that the IPCC had taken on board the need to initiate models in the past; in this case the initiation date stays at 2000, offering 14 years over which to compare the models with what came to pass.
For the Fourth Assessment Report (AR4) [5] we move on to 2007 and the summary shown in Figure 7. By this stage I am unsure what the B1 to A1FI scenarios mean. The caption to this figure in the report says this:
Figure SPM.5. Solid lines are multi-model global averages of surface warming (relative to 1980–1999) for the scenarios A2, A1B and B1, shown as continuations of the 20th century simulations. Shading denotes the ±1 standard deviation range of individual model annual averages. The orange line is for the experiment where concentrations were held constant at year 2000 values. The grey bars at right indicate the best estimate (solid line within each bar) and the likely range assessed for the six SRES marker scenarios. The assessment of the best estimate and likely ranges in the grey bars includes the AOGCMs in the left part of the figure, as well as results from a hierarchy of independent models and observational constraints. {Figures 10.4 and 10.29}
Implicit in this caption is the assertion that the pre-year-2000 black line is a simulation produced by the post-2000 models (my bold). The orange line denotes constant CO2, and the fact that it is a virtually flat line shows that the IPCC at that time believed that variance in CO2 was the only process capable of producing temperature change on Earth. I don’t know whether the B1 to A1FI scenarios all use the same or different CO2 increase trajectories. What I do know for sure is that it is physically impossible for models that incorporate a range of physical input variables, initiated in the year 1900, to be closely aligned and to converge on the year 2000 as shown here, as the IPCC’s own models published in 1990 demonstrate (Figure 1).
So how do the 2007 simulations stack up against reality?
Figure 7 Comparison of AR4 models with reality. Since 2000, reality is tracking along the lower bound of the models as observed by Roy Spencer and many others. If anything, reality is aligned with the zero anthropogenic forcing model shown in orange.
Last time out I had to subtract 0.6˚C to align reality with the IPCC models. Now I have to add 0.6˚C to HadCRUT4 to achieve alignment. And the luxury of tracking history from 1850 has now been curtailed to 1900. The pre-2000 simulations align pretty well with observed temperatures from 1940, even though we already know that it is impossible for the pre-2000 simulations to have been produced by a large number of different computer models programmed to do different things – how can this be? Post-2000, reality seems to be aligned best with the orange no-CO2-rise / no-anthropogenic-forcing model.
From 1900 to 1950 the alleged simulations do not in fact reproduce reality at all well (Figure 8). The actual temperature record rises at a steeper gradient than the model record. And reality has much greater variability due to natural processes that the IPCC by and large ignore.
Figure 8 From 1900 to 1950 the alleged AR4 simulations actually do a very poor job of simulating reality, HadCRUT4 in blue.
Figure 9 The IPCC view from AR5 (2014). The inconvenient 1900 to 1950 mismatch observed in AR4 is dealt with by simply starting the chart at 1950. The flat blue line is essentially equivalent to the flat orange line shown in AR4.
The Fifth Assessment Report (AR5) was published this year and the IPCC’s current view on future temperatures is shown in Figure 9 [6]. The inconvenient mismatch of alleged model data with reality in the period 1900 to 1950 is dealt with by chopping that time interval off the chart. A very simple simulation picture is presented. Future temperature trajectories are shown for a range of Representative Concentration Pathways (RCPs). This is completely the wrong approach, since the IPCC is no longer modelling climate but different human, societal and political choices that result in different CO2 trajectories. Skepticalscience provides these descriptions [7]:
RCP2.6 was developed by the IMAGE modeling team of the PBL Netherlands Environmental Assessment Agency. The emission pathway is representative of scenarios in the literature that lead to very low greenhouse gas concentration levels. It is a “peak-and-decline” scenario; its radiative forcing level first reaches a value of around 3.1 W/m2 by mid-century, and returns to 2.6 W/m2 by 2100. In order to reach such radiative forcing levels, greenhouse gas emissions (and indirectly emissions of air pollutants) are reduced substantially, over time (Van Vuuren et al. 2007a). (Characteristics quoted from van Vuuren et.al. 2011)
AND
RCP 8.5 was developed using the MESSAGE model and the IIASA Integrated Assessment Framework by the International Institute for Applied Systems Analysis (IIASA), Austria. This RCP is characterized by increasing greenhouse gas emissions over time, representative of scenarios in the literature that lead to high greenhouse gas concentration levels (Riahi et al. 2007).
This is Mickey Mouse science speak. In essence they show that 32 models programmed with a low future emissions scenario have lower temperature trajectories than 39 models programmed with high future emissions trajectories.
The models are initiated in 2005 (the better practice of using a year 2000 datum, as employed in AR4, is ditched) and from 1950 to 2005 it is alleged that 42 models provide a reasonable version of reality (see below). We do not know which, if any, of the 71 post-2005 models are included in the pre-2005 group. We do know that pre-2005 each of the models should be using actual CO2 and other greenhouse gas concentrations, and since they are all closely aligned we must assume they all use similar climate sensitivities. What the reader really wants to see is how varying climate sensitivity influences different models using fixed CO2 trajectories, and this is clearly not done. The modelling work shown in Figure 9 is effectively worthless. Nevertheless, let us see how it compares with reality.
Figure 10 Comparison of reality with the AR5 model scenarios.
With models initiated in 2005 we have only 8 years over which to compare models with reality. This time I have to subtract 0.3˚C from HadCRUT4 to get alignment with the models. Pre-2005 the models allegedly reproduce reality from 1950; pre-1950 we are denied a view of how the models performed. Post-2005 it is clear that reality is tracking along the lower limit of the two uncertainty envelopes that are plotted, an observation made by many others [e.g. 1].
Concluding comments
- To achieve alignment of the HadCRUT4 reality with the IPCC models the following temperature corrections need to be applied: 1990 +0.5˚C; 2001 -0.6˚C; 2007 +0.6˚C; 2014 -0.3˚C. I cannot think of any good reason to continuously change the temperature datum other than to create a barrier to auditing the model results.
- Comparing models with reality is severely hampered by the poor practice adopted by the IPCC in data presentation. Back in 1990 it was done the correct way: all models were initiated in 1850 and used the same CO2 emissions trajectories. The variations in model output are consequently controlled by physical parameters like climate sensitivity, and with the 164 years that have passed since 1850 it is straightforward to select the models that provide the best match with reality. In 1990 it was quite clear that the “Low Model” was best, almost certainly pointing to a low climate sensitivity.
- There is no good scientific reason for the IPCC not to adopt today the correct approach it took in 1990, other than to obscure the fact that the sensitivity of the climate to CO2 is likely much less than 1.5˚C, based on my and others’ assertion that a component of the twentieth-century warming is natural.
- Back in 1990, the IPCC view on climate sensitivity was a range from 1.5 to 4.5˚C. In 2014 the IPCC view on climate sensitivity is a range from 1.5 to 4.5˚C. 24 years have passed and billions of dollars have been spent, and absolutely nothing has been learned! The wool has been pulled over the eyes of policy makers, governments and the public to the extent of total brainwashing. Trillions of dollars have been misallocated on energy infrastructure that will ultimately lead to widespread misery among millions.
- In the UK, if a commercial research organisation were found cooking research results in order to make money with no regard for public safety they would find the authorities knocking at their door.
References
[1] Roy Spencer: 95% of Climate Models Agree: The Observations Must be Wrong
[2] Wood For Trees
[3] IPCC: First Assessment Report – FAR
[4] IPCC: Third Assessment Report – TAR
[5] IPCC: Fourth Assessment Report – AR4
[6] IPCC: Fifth Assessment Report – AR5
[7] Skepticalscience: The Beginner’s Guide to Representative Concentration Pathways
rgbatduke says:
June 12, 2014 at 9:16 am Aww, c’mon, RGB, tell us what you really think! Guffaw. And then Mosh steps into the pit with some drivel, and gets about 52 broadsides. Someone ought to figure out how to make this “climate science” a Punch and Judy show, charging admission at the door for the next breathless revelations, which of course will change by the next show. Couldn’t be more fun, except that Punch (warmists, communists, hate humans, etc.) actually wants to kill Judy (everybody else).
It has been obvious since the 1980s that the “scientists” cannot predict the future climate, and yet billions (trillions perhaps?) have been squandered on the results of these worthless computer models. It would be far better to get some pagans to examine the entrails of a chicken. Far better.
Can we find a Roman haruspex to practice climate divination via inspection of entrails? Any experts in divination out there?
Thanks, Euan Mearns. Very good work.
“In the UK, if a commercial research organisation were found cooking research results in order to make money with no regard for public safety they would find the authorities knocking at their door.”
…But of course the IPCC is under no jurisdiction that can even poke a stick at them, never mind prosecute.
Niff,
This is a very good point. Maybe we should stop banging our heads against a brick wall, and instead simply try to bring about some reform that makes the UN subject to legal challenge through an international court. Much of the UN’s stupidity comes about because of its lawlessness.
In particular, such a reform might make it possible to constrain UN assaults on national sovereignty via treaties, by allowing persons disadvantaged by those treaties to sue the UN for damages.
Mearns says: Comparing models with reality is severely hampered by the poor practice adopted by the IPCC … . … The variations in model output are consequently controlled by physical parameters like climate sensitivity … .There is no good scientific reason for the IPCC not adopting today the correct approach … that the sensitivity of the climate TO CO2 is likely much less than 1.5˚C … . … [T]he IPCC view on climate sensitivity was a range from 1.5 to 4.5˚C. … The wool has been pulled over the eyes of policy makers, governments and the public to the extent of total brain washing. Bold, caps added.
IPCC says: Climate sensitivity refers to the … change in the annual mean global surface temperature [MGST] following a doubling of the atmospheric equivalent carbon dioxide concentration. Bold added.
A skein is in the author’s eyes. Reality is that no one measures, and no climatologist is known who tries to measure, the rise in temperature FOLLOWING a rise in atmospheric CO2 concentration (C). A principle of science is that a cause must precede its effects, and to its credit, IPCC properly defines climate sensitivity as an effect, i.e., an event following, the rise in CO2, and models it accordingly. However, once defined, twice forgotten, apparently. No one in this field even attempts to assess the lead/lag relationship between CO2 and MGST. The definition is a cover for what climatologists actually do.
Reality is that what they measure is the rise in temperature DURING a rise in CO2, and they rationalize that T must be the effect of C past. It’s a bootstrap: AGW is evidence of AGW. But the Law of Solubility teaches that a rise in temperature causes water to emit CO2, and the temperature-dependent flux from the ocean, being 15 times the greater (in GtC/yr), swamps man’s feeble emissions.
AGW has the Cause & Effect relationship exactly back-to-front. Global warming is the cause of the rise in atmospheric CO2. Nevertheless, one can always measure the relative rates of Delta-T to 2 x C, and some use the mere existence of that ratio as implicit evidence that C causes T. It doesn’t. T causes C, and reality is that climate sensitivity as defined is zero.
Yet the wrong number is still calculable from measurements to be inserted into models.
How fortuitous for everyone that the ratio is smaller than IPCC’s lower limit. The toast fell jelly side up.
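Whatever one makes of that argument, the lead/lag question itself is easy to probe. Below is a minimal sketch of a lagged-correlation test, with invented monthly series standing in for real CO2 and temperature data; in this toy the temperature series is built to lead CO2 by six months, so the correlation peaks at a negative lag.

```python
import numpy as np

def lagged_corr(x, y, max_lag=24):
    """Correlate x[t] with y[t + k]; a peak at positive k means x leads y by k steps."""
    out = {}
    for k in range(-max_lag, max_lag + 1):
        if k > 0:
            a, b = x[:-k], y[k:]
        elif k < 0:
            a, b = x[-k:], y[:k]
        else:
            a, b = x, y
        out[k] = np.corrcoef(a, b)[0, 1]
    return out

# Invented monthly anomalies: temperature is a slow random walk, and the "CO2"
# series is simply temperature delayed by six months plus noise.
rng = np.random.default_rng(0)
temp = np.cumsum(rng.normal(size=600)) * 0.01
co2 = np.roll(temp, 6) + rng.normal(scale=0.02, size=600)

corrs = lagged_corr(co2, temp)
best = max(corrs, key=corrs.get)
# A negative best lag here means temperature leads CO2 in this invented example.
print(f"strongest correlation at lag {best:+d} months (r = {corrs[best]:.2f})")
```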
Just take any model, any one of them, divide by 3 (my estimate of the CO2 fudge factor they use) and bingo, the model matches reality.
My thesis… take out the value for CO2 forcing and the model will work… there, done. Now you can send the Nobel Prize to me at….
Mosher writes “Basically, Pauli postulates a unicorn.”
And he was right. The difference between AGW theory and conservation of energy is that conservation of energy is a strong principle, whereas sensitivity predictions of 3˚C are weak.
If Pauli’s unicorn didn’t exist then our understanding of nature would have been turned on its head. If sensitivity turned out to be 0.5˚C it would be no big deal, and we would work out what feedbacks make that so. It wouldn’t be earth-shattering.
rgbatduke says:
June 12, 2014 at 9:16 am
Excellent.
And yet, that is precisely what they do not do in computing the MME mean as explicitly described and stated in AR5, chapter 9, section 9.2.2.3. Nor do they account for the fact that some model results included as “one model” represent (themselves) the mean results of hundreds of runs, while others only managed to complete a run or three, so that for some models 100 votes turns out to have the same statistical weight as one vote for some others. Also explicitly acknowledged in AR5, 9.2.2.3. Nor do they account for the fact that e.g. GISS has some six or seven models that are all closely related variants of the same basic program that are all counted as “independent” samples in this pseudo-“ensemble” which both strongly biases the mean of the entire collection towards whatever that variant produces and makes the “error estimate” even more meaningless than it already was. Oh, wait, no it doesn’t. That’s impossible, isn’t it, just like a negative pressure. And yes, AR5 explicitly acknowledges that error in 9.2.2.3. It even acknowledges that this creates “challenges” in interpreting any MME results as having any predictive or other value whatsoever.
What it doesn’t do is present the fact that all of this averaging and averaging of averages carefully conceals the fact that the actual model runs, one at a time, don’t generally look anything like the real climate. And that’s the rub. There is no such thing as an “average result” for the climate produced by any model. If a model produces daytime temperatures of 400 C and nighttime temperatures of 200 C, we cannot average this and assert “Look, our model produces average temperatures of 300 C, and that’s pretty close, so our model might be right.” If we have two models and one produces averages of 400 C and the other produces averages of 200 C, we cannot average them to 300 C and then crow that we’ve produced successful models. If we produce a model that shows it getting hot at night and cool during the day, but otherwise gets an average temperature that is in perfect agreement with measurements — sorry Charlie, still a failed model, and no, we cannot average it with other failed models in any way justified by the laws of statistics and claim that doing the averaging makes it any more likely to be correct.
If it did (or rather, if it appeared to), it would be pure luck, as there is no theory of statistics that proves or asserts that human errors or biases are uniformly and symmetrically distributed in such a way that they generally cancel. Indeed, we have many centuries of history that prove the exact opposite.
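A toy numerical version of that averaging point, with entirely invented numbers, just to show the arithmetic: two badly wrong runs can produce an ensemble mean that sits exactly on the observations.

```python
import numpy as np

observed = np.full(10, 300.0)                  # a flat "reality" at 300 units

# Two made-up model runs, both badly wrong in opposite directions.
run_hot = observed + 100.0 + np.linspace(0, 5, 10)
run_cold = observed - 100.0 - np.linspace(0, 5, 10)

ensemble_mean = (run_hot + run_cold) / 2.0

def rms_error(x):
    """RMS distance of a series from the observations."""
    return np.sqrt(np.mean((x - observed) ** 2))

print("hot run error:      ", round(rms_error(run_hot), 1))        # ~102.5
print("cold run error:     ", round(rms_error(run_cold), 1))       # ~102.5
print("ensemble mean error:", round(rms_error(ensemble_mean), 1))  # 0.0
```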
It is remarkable that otherwise intelligent people will go to such great lengths to defend what is fairly obviously a horrendous abuse of statistics undertaken for the sole purpose of defending a point they already believe to be true, independent of any possible evidence or argument that might contradict it. And to be fair, this happens with monotonous frequency on both sides of this particular debate. Warmists are “certain” that they are right and a catastrophe is inevitable, they are equally zealous in their self-righteous condemnation of anybody that disagrees, and in their willingness to spend any amount of other people’s wealth, health, happiness and life to take measures that even the proponents acknowledge will have no meaningful effect. Deniers are equally “certain” that there is no such thing as a greenhouse effect, that more CO_2 is if anything desirable, that everybody who disagrees with them in this is a commie pinko liberal, and that it is all part of a giant conspiracy.
I personally have no idea if the hypothesis is right or wrong. I don’t think anybody does. I think it is about as well-founded in actual evidence and argument as belief in Santa Claus or the Greek Pantheon, I think that the climate models that predict it are way beyond merely “dubious”, but just because climate models are broken and generally suck doesn’t mean that the basic assertion is wrong, only that we cannot trust the means most often used to try to prove it right. I do think that looking over the correspondence between CO_2 and temperature over the Phanerozoic Era (the last 600 million years, where we have decent data on both via a variety of proxies) that — well, there isn’t any. Correspondence, that is. CO_2 levels have generally but irregularly descended from 7000 ppm to the recent low water mark of 190 ppm (which really was nearly catastrophic). Temperatures have fluctuated over a much smaller relative range and are flat to rising slightly over the exact same interval. There is simply no visible first order correlation between atmospheric CO_2 and global average temperature visible anywhere in the geological record. Nobody who looked at the data, or a scatter plot of the data, would conclude “Gee, CO_2 is correlated with temperature.” They’d conclude the opposite.
rgb
Euan, I find it funny how in the private marketplace you get paid for your results: good results mean you can continue working. Yet in the public marketplace the reverse is true; bad results mean more money to study the problem or fix it. You can see that in climate science, the post office, and the VA. Yet somehow we should be happy that government so lavishly rewards incompetence.
Euan Mearns
“…makes it difficult to see how models initiated in 2001 now compare with 13 years of observations since.”
You can find it here:
http://www.ipcc.ch/publications_and_data/ar4/wg1/en/figure-ts-26.html
Please make a comparison with observations and update your post so that it tells the story you want to tell in a single graph.
This figure of the IPCC’s is probably the most important figure for the near-term projection and it IS WRONG.
Thank you Euan for your work.
I’m always wondering how there can be a sensible notion of ‘global’ temperature. We know climate zones and seasons of the year. From CET it is well known that the temperature in the summer season has increased much less (0.3°C) than in the winter season (about 1.3°C) over 350 years. Thus the Atlantic climate seems to reduce the stress of cold winters, while summer temperatures increase only mildly. Is there anybody in the UK/Scotland/Ireland complaining about this fact? The IPCC should inform their clients (I mean the taxpayers) whether they can really expect a CAGW, or only a comfortable local warming.
rgbatduke:
Thank you for your trenchant posts throughout this thread.
I write to support your post at June 12, 2014 at 10:29 pm.
I always summarise the issue of why averaging GCM results is an error by saying
Average wrong is wrong.
This is comprehensible to people with no knowledge of statistical concepts such as independent data sets.
Richard
The problem here is that the FAR 1990 “business as usual” scenario assumes that we would have seen more than 1.2 W/m² of additional forcing since 1990 (see FAR Annex, Figure A.6). But the actual emissions since 1990 have been lower than FAR’s BAU scenario for CO2, lower for methane, and lower for CFCs (see FAR Annex, Figure A.3, and compare to NOAA’s Aggregate Greenhouse Gas Index at http://www.esrl.noaa.gov/gmd/aggi/aggi.html). Which means the actual forcing we have seen since 1990 is about half that of FAR’s BAU scenario, and is in fact very close to FAR’s Scenario D.
And the predicted temperature rise under Scenario D is pretty close to what we have actually experienced (See FAR Chapter 6, Figure 6.11). Which means that the FAR models pretty much get it right, if we give them the right inputs.
Reblogged this on The GOLDEN RULE and commented:
A scientific look at the IPCC’s failure to produce realistic computer models, the basis for alarmist action against our society’s welfare.
“Examining past reports is quite enlightening since it reveals what the IPCC has learned in the last 24 years.
I conclude that nothing has been learned other than how to obfuscate, mislead and deceive.”
We can’t really take Mosher seriously in this thread since he is active in adjusting the temperature trend upwards so that it starts to get closer to the climate models’ predictions.
He shouldn’t be talking about the scientific method when he is violating its basic premise about objective observations.
rgbatduke said: there is no theory of statistics that proves or asserts that human errors or biases are uniformly and symmetrically distributed in such a way that they generally cancel. Indeed, we have many centuries of history that prove the exact opposite.
———————
This witticism deserves amplification!! Thanks for that.
good thing I had already swallowed my coffee, otherwise it would have been all over my screen & keyboard.
All “anomalies”. What was the “average temperature” they predicted? Or did they even predict one? Since we know “adjustments” are ongoing in the past “data”, I think this is of critical importance. Why rely on an “anomaly” which allows fiddling with past data to achieve? Pin them down to an actual “average temperature”!
So, the idea that 24 years is a long time for zero progress is unscientific.
That’s great, except for the part where warmists decided over a decade ago that the science was settled and we had to start wrecking the economy RIGHT NOW or we’re all doomed. Oh yeah, and spend tens of billions annually on the research that’s either totally unnecessary (settled!) or totally unproductive (reality).
Neutrino theory proponents… not so much.
The divergence between the IPCC and reality did not start in 2000; it started in 1930. If one accepts the HadCRU/GISS/NOAA reconstructions of global temperature then one has to accept that all the meteorologists of the world misread their thermometers, i.e. from 1930 to 1940 they progressively read the thermometers higher than they should have, and from 1940 to 1975 they went from reading them too high to reading them too low. From 1975 to the present they have progressively under-read them.
HadCRU and co came to the salvation of these errant meteorologists by correcting their collective errors. These are the adjustments they make to the historic record.
Not only that, but the corrections for station deterioration and UHI are all but non-existent.
The latest corrections to the Iceland record illustrate the point: compare the GISS 2011 temperature reconstruction for Reykjavik with their 2013 reconstruction. As in many places in the world, including the US, the earlier graphs show the 1940 period was as warm as or warmer than today, so essentially the trend (crest to crest of the 60-year warming/cooling cycle) is flat.
Ahh, but then GISS, HadCRU and co do not want to recognise that such a natural cycle exists.
The question is: why are their clearly faulty reconstructions continually used?
Rather than doing their thing of trying to manufacture a global temperature from the paucity of data they have, would it not be better to take a number of well-distributed stations with good siting history etc. and simply average the annual mean temperatures from these stations?
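A sketch of that simpler approach, assuming each station’s annual mean temperatures are already to hand as one series per station; the station names and values below are placeholders, not real records.

```python
import numpy as np

# Placeholder annual mean temperatures (deg C) for a handful of long, well-sited
# stations over the same span of years; real use would read these from station records.
stations = {
    "station_A": np.array([8.1, 8.3, 8.0, 8.4, 8.6]),
    "station_B": np.array([11.2, 11.4, 11.1, 11.5, 11.6]),
    "station_C": np.array([4.9, 5.2, 5.0, 5.3, 5.4]),
}

# Simple unweighted average of the station series, year by year.
composite = np.mean(np.vstack(list(stations.values())), axis=0)
print("composite annual means:", np.round(composite, 2))

# A linear trend through the composite gives a crude warming rate (deg C per year).
years = np.arange(len(composite))
slope = np.polyfit(years, composite, 1)[0]
print(f"linear trend: {slope:.3f} deg C per year")
```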
Two points with respect to Steven Mosher‘s attempted comparison with high-energy physics.
One, relatively trivial: I do not know where this author got the idea that Hahn & Meitner discovered that the beta decay spectrum was continuous in 1911. Meitner & Hahn wrote to Rutherford in 1911, but Hahn thought a secondary process was interfering with the spectrum; he did not think that the spectrum was continuous. That discovery was made by James Chadwick, later discoverer of the neutron, just before World War 1 (1914) (see Chadwick’s Nobel Prize biography, http://chadwick.nobmer.com/1.htm ).
Second, the reason that the neutrino postulate was not immediately laughed out of existence was that the continuous spectrum observed in beta decay was a genuine crisis for physics. Conservation of energy is one of the most fundamental principles of physics, not just modern physics but going all the way back to Newton. Yet here, in this one area, it looked as though energy was not conserved. Despite Mosh’s derogatory “unicorn” comment, Pauli’s postulate of a light, uncharged particle that interacted weakly and thus was very hard to detect was in fact the most conservative possible explanation put forward for this crisis (conservative in the sense of not requiring any fundamentally new physics). Cowan & Reines did, in fact, directly demonstrate the existence of the neutrino in 1956.
Now, note the differences.
(1) The principles involved for high-energy physics were simple. Particles interacting with each other according to relatively well-understood laws behaved pretty much as expected, except in the domain of beta decay. High-energy physics was an emerging field, but the principles being challenged were the absolute BEDROCK of physics. Conservation of energy had been observed in macroscopic form for three centuries. The most conservative, cautious interpretation was that a new particle must exist. This new particle allowed the laws of physics that had been observed at the macroscopic scale, and in other respects at the microscopic scale, to remain intact.
(2) By contrast, climate modeling is using computer codes to solve physics problems that are, on the scale of the entire Earth, not well understood. Yes, fluid flow, Navier-Stokes equations, yes, yes – but here there are a number of competing and confounding factors (clouds, actions at different scales, and don’t forget potential numerical analysis inaccuracies) which simply do not exist in the beta decay problem.
Conclusion: Mosher’s analogy and plea for patience is quite a bit of a reach.
Well, in any case, Climate is and Climate does its climate thing, and we have thermometers to measure the facts – and as Alexius Meinong reminds us: “Truth is a purely human construct, but facts are eternal.” So I thought, hmmm… temperature arises from the zeroth law of thermodynamics, and all the rest is nothing but energy shovelling around. What happens, therefore, if I try measuring ‘Climate’ in energy terms, like kWh or joules, even treating the IPCC forecast of a 4°C temperature rise by 2100 as ‘gospel’? I tried, with the result at http://cleanenergypundit.blogspot.co.uk/2014/06/eating-sun-fourth-estatelondon-2009.html .
I have published my whole spreadsheet with its calculation methods shown, so that anyone can replicate it in about 10 minutes flat, I reckon. It is then also possible to change any input value and see what happens to the end result in a split second. For example, I have chosen 80 years as the time taken to reach the 4°C temperature rise ‘consensed’ by the politicians of the IPCC. If anyone thinks that should be some other period, just put it into your spreadsheet, again for an instant result. The same applies to any other input value anyone wishes to explore.
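As a much cruder back-of-the-envelope companion to that spreadsheet (my own round figures, not the linked workbook): warming the atmosphere alone by 4°C takes on the order of 10^22 joules, as the snippet below shows.

```python
# Back-of-the-envelope: energy needed to warm the whole atmosphere by 4 deg C.
# (Atmosphere only; the ocean heat capacity would dwarf this.)
M_ATMOSPHERE = 5.1e18   # kg, approximate mass of Earth's atmosphere
CP_AIR = 1004.0         # J/(kg K), specific heat of air at constant pressure
DELTA_T = 4.0           # deg C rise, taken from the IPCC end-of-century figure

energy_joules = M_ATMOSPHERE * CP_AIR * DELTA_T
energy_kwh = energy_joules / 3.6e6

print(f"{energy_joules:.2e} J  (~{energy_kwh:.2e} kWh)")
```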
Computer models are like fashion models. Seductive, unreliable, easily corrupted and they make ordinarily sensible people make fools of themselves.
“These reservoir models are likely every bit as complex as computer simulations of Earth’s atmosphere.”
Now that sentence made me choke on my coffee! Is it worth reading any further?