Guest Essay by: Ken Gregory
The Canadian Centre for Climate Modelling and Analysis, located at the University of Victoria in British Columbia, submitted five runs of its climate model CanESM2 for use in the fifth assessment report of the Intergovernmental Panel on Climate Change (IPCC). The climate model produces one of the most extreme warming projections of all the 30 models evaluated by the IPCC. (See Note 1.) The model badly fails to match the surface and atmosphere temperature observations, both globally and regionally, as presented in six graphs.
Global Trends
The graph below compares the near-surface global temperatures to the model runs.
Figure 1. Canadian climate model simulations of near-surface global temperatures and three datasets of observations.
The five thin lines are the climate model runs. The thick black line is the average (mean) of the five runs. The satellite data in red is the average of two analyses of lower-troposphere temperature. The radiosonde weather balloon data is from the NOAA Earth System Research Laboratory. The surface data is the HadCRUT4 dataset. (See Note 2 for the data sources.) The best fit linear trend lines (not shown) of the model mean and all datasets are set to zero at 1979, which is the first year of the satellite data. We believe this is the best method for comparing model results to observations.
Any computer model that is used to forecast future conditions should reproduce known historical data. A computer climate model that fails to produce a good match to historical temperature data cannot be expected to give useful projections.
Figure 1 shows that the computer model simulation after 1983 produces too much warming compared to the observations. The discrepancy between the model and the observations increases dramatically after 1998, as there has been no global near-surface warming during the last 16 years. With the model and observation trends set to zero in 1979, the discrepancy between the model mean of the near-surface global temperatures and the surface observations by 2012 was 0.73 °C. This discrepancy is almost as large as the 0.8 °C estimated global warming during the 20th century. The model temperature warming trend as determined by the best fit linear line from 1979 to 2012 through the model mean is 0.337 °C/decade, and the average trend of the three observational datasets is 0.149 °C/decade. Therefore, the model temperature warming rate is 226% of the observations (0.337/0.149 = 226%).
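A minimal sketch of this comparison method is shown below; the two series are placeholders standing in for the actual model mean and observation data, but the steps are the ones described above: fit an ordinary least-squares line to each series, shift each series so its fitted line passes through zero in 1979, and take the ratio of the slopes.

```python
# Sketch of the trend comparison described above. The two series here are
# placeholders, not the actual CanESM2 or HadCRUT4 data.
import numpy as np

years = np.arange(1979, 2013)  # 1979..2012
rng = np.random.default_rng(0)
model_mean = 0.0337 * (years - 1979) + rng.normal(0, 0.1, years.size)  # placeholder series
observed   = 0.0149 * (years - 1979) + rng.normal(0, 0.1, years.size)  # placeholder series

def trend_per_decade(y):
    """Ordinary least-squares slope, converted to degC per decade."""
    slope, _ = np.polyfit(years, y, 1)
    return slope * 10.0

def zero_at_1979(y):
    """Shift a series so that its best-fit line equals zero in 1979."""
    slope, intercept = np.polyfit(years, y, 1)
    return y - (slope * 1979 + intercept)

model_trend = trend_per_decade(model_mean)  # essay value: 0.337 degC/decade
obs_trend   = trend_per_decade(observed)    # essay value: 0.149 degC/decade
print(f"model/observation trend ratio: {model_trend / obs_trend:.0%}")  # essay: 226%

aligned_model = zero_at_1979(model_mean)
aligned_obs   = zero_at_1979(observed)
print(f"2012 discrepancy after alignment: {aligned_model[-1] - aligned_obs[-1]:.2f} degC")  # essay: 0.73 degC
```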
The large errors are primarily due to incorrect assumptions about water vapor and cloud changes. The climate model assumes that water vapor, the most important greenhouse gas, would increase in the upper atmosphere in response to the small warming effect from CO2 emissions. A percentage change in water vapor has over five times the effect on temperatures as the same percentage change of CO2. Contrary to the model assumptions, radiosonde humidity data show declining water vapor in the upper atmosphere as shown in this graph.
The individual runs of the model are produced by slightly changing the initial conditions of several parameters. These tiny changes cause large changes of the annual temperatures between runs due to the chaotic weather processes simulated in the model. The mean of the five runs cancels out most of the weather noise because these short term temperature changes are random.
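As a rough illustration of the averaging argument (the numbers below are invented, not taken from the model runs), independent noise in an N-member ensemble mean shrinks by roughly the square root of N:

```python
# Toy demonstration of why the 5-run mean suppresses simulated weather noise.
import numpy as np

rng = np.random.default_rng(1)
n_runs, n_years = 5, 34
noise = rng.normal(0.0, 0.15, size=(n_runs, n_years))  # invented "weather" noise, degC

print(f"typical single-run noise (std): {noise.std():.3f} degC")
print(f"noise of the 5-run mean  (std): {noise.mean(axis=0).std():.3f} degC")
# roughly 0.15 / sqrt(5), i.e. about 0.07 degC
```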
The climate model assumes that almost all of the temperature changes are due to anthropogenic greenhouse gas emissions and includes only insignificant natural causes of climate change. The simulated weather noise includes the effects of ENSO (El Niño and La Niña), but does not include multi-decadal temperature changes due to natural ocean oscillations or solar-induced natural climate change other than the small changes in the total solar irradiance. The historical record includes the effects of large volcanoes that can last for three years. The projections assume no large future volcanoes. There have been no significant volcanic eruptions since 1992 that could have affected temperatures, as shown in this graph of volcanic aerosols.
The model fails to match the temperature record because it overestimates the effects of increasing greenhouse gases and does not include most natural causes of climate change.
Figure 2 compares the mid-troposphere global temperatures at the 400 millibar pressure level, about 7 km altitude, to the model. (millibar = mbar = mb. 1 mbar = 1 hPa.)
Figure 2. Canadian climate model simulations of mid-troposphere global temperatures and two datasets of observations. The balloon data is at the 400 mbar pressure level, about 7 km altitude.
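For readers wondering where the altitude figures come from, a standard-atmosphere estimate (not taken from the essay) puts the 400 hPa and 300 hPa levels near 7 km and 9 km:

```python
# Rough standard-atmosphere check of the pressure-to-altitude conversions used
# in the text: z ~ -H * ln(p / p0), with a scale height H of about 7.6 km.
import math

p0_hpa, scale_height_km = 1013.25, 7.6
for p_hpa in (400.0, 300.0):
    z_km = -scale_height_km * math.log(p_hpa / p0_hpa)
    print(f"{p_hpa:.0f} hPa is roughly {z_km:.1f} km")  # ~7.1 km and ~9.2 km
```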
The discrepancies in 2012 between the model mean of the global mid-troposphere temperatures and the satellite and balloon observations were 1.26 °C and 1.04 °C, respectively. The model temperature trend is 650% of the satellite trend.
The satellites measure the temperature of a thick layer of the atmosphere. The satellite temperature weighting function describes the relative contribution that each atmospheric layer makes to the total satellite signal. We compared the balloon trends weighted by the lower troposphere satellite temperature weighting functions to the near-surface observed trends for global and tropical data. Similar comparisons were made for the mid-troposphere. The weighted thick layer balloon trends for the lower and mid-troposphere were similar to the near-surface and 400 mbar balloon trends, respectively. We conclude that the satellite lower-troposphere and mid-troposphere temperature trends are approximately representative of the near-surface and 400 mbar temperature trends, respectively. (See Note 3 for further information.)
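The weighting itself is just a weighted average of the per-level balloon trends. The sketch below uses invented trend values and invented weights; the actual RSS weighting-function values are not reproduced here.

```python
# Sketch of weighting per-pressure-level balloon trends by a satellite
# temperature weighting function. All numbers are placeholders.
import numpy as np

levels_hpa     = np.array([1000, 850, 700, 500, 400, 300])
balloon_trends = np.array([0.14, 0.15, 0.15, 0.16, 0.16, 0.17])  # degC/decade, invented
lt_weights     = np.array([0.30, 0.28, 0.20, 0.12, 0.07, 0.03])  # invented LT weighting function

lt_weights = lt_weights / lt_weights.sum()            # normalize weights to 1
thick_layer_trend = float(np.sum(lt_weights * balloon_trends))
print(f"weighted thick-layer trend: {thick_layer_trend:.3f} degC/decade")
```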
Tropical Trends
Figure 3 compares the near-surface temperatures in the tropics, from 20 degrees North to 20 degrees South latitude, to the climate model simulations. The model temperature trend is 300% of the average of the three observational trends. The discrepancy of model to near-surface observation trends in the tropics is much greater than for the global average.
Figure 3. Canadian climate model simulations of near-surface tropical temperatures and three datasets of observations.
Figure 4 compares the warming trend in the tropical mid-troposphere to the observations. The increasing tropical mid-troposphere water vapor in the model makes the warming trend at the 400 mbar pressure level 55% greater than the surface trend.
The discrepancy in 2012 between the model mean of the mid-troposphere tropical temperatures and the balloon observations was 1.33 °C. The model temperature trend is 560% of the average of the two observational trends.
Figure 4. Canadian climate model simulations of mid-troposphere 400 mbar tropical temperatures and two datasets of observations.
The discrepancy is even greater at the 300 mbar pressure level, which is at about 9 km altitude.
Figure 5 compares the model to the balloon temperatures at the 300 mbar pressure level in the tropics. The tropical warming trend at 300 mbar in the model is exactly twice the model surface trend. In contrast, the temperature trend of the balloon data at 300 mbar is an insignificant 3% greater than the surface station trend. The discrepancy in 2012 between the model mean of the mid-troposphere tropical temperatures at the 300 mbar level and the balloon observations was 1.69 °C.
The modeled temperature trends (1979 to 2012) of the tropical atmosphere at 7 km and 9 km altitude are an astonishing 470% and 490% of the radiosonde balloon trends. These are huge errors in the history match!
Figure 5. Canadian climate model simulations of mid-troposphere 300 mbar tropical temperatures and weather balloon observations.
South Trends
In the far south, the near-surface modeled temperature trend is in the opposite direction from the observations. Figure 6 compares the model near-surface temperatures from 50 degrees South to 75 degrees South latitude to the observations. The modeled temperatures from 1979 are increasing at 0.35 °C/decade while the surface temperatures are decreasing at 0.07 °C/decade.
Temperatures over most of Antarctica (except the Antarctic Peninsula) have been falling over the last 30 years. The Antarctic sea ice extent is currently 1 million square km greater than the 1979 to 2000 mean. The sea ice extent has been greater than the 1979 to 2000 mean for all of 2012 and 2013 despite rising CO2 levels in the atmosphere.
Figure 6. Canadian climate model simulations of near-surface southern temperatures (50 S to 75 S latitude) and two datasets of observations.
Summary of Trend Errors
The table below summarizes the model trend to observation trend ratios.
Model Trend to Observation Ratios (1979 to 2012)

|                 | Model/Satellite | Model/Balloon | Model/Surface |
|-----------------|-----------------|---------------|---------------|
| Global Surface  | 254%            | 209%          | 220%          |
| Global 400 mb   | 650%            | 315%          |               |
| Tropics Surface | 304%            | 364%          | 249%          |
| Tropics 400 mb  | 690%            | 467%          |               |
| Tropics 300 mb  |                 | 486%          |               |
| South Surface   | 3550%           |               | -474%         |
The table shows that the largest discrepancies between the model and observations are in the tropical mid-troposphere. The model error increases with altitude and is greatest at the 300 mbar pressure level at about 9 km altitude. The ratio of the modeled tropical mid-troposphere 300 mbar warming rate to the surface warming rate is 200%, and is a fingerprint of the theoretical water vapor feedback. This enhanced warming rate over the tropics, named the “hot spot”, is responsible for 2/3 of the warming in the models. The fact that the observations show no tropical mid-troposphere hot spot means that there is no positive water vapor feedback, so the projected warming rates are grossly exaggerated.
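The amplification comparison behind that claim reduces to two ratios. The trend values below are illustrative only, since the essay quotes the ratios rather than the underlying trends:

```python
# Quick check of the "hot spot" amplification ratios quoted above. The trend
# values are illustrative placeholders; the essay reports only the ratios.
model_surface_trend, model_300mb_trend = 0.25, 0.50     # degC/decade, illustrative
obs_surface_trend, balloon_300mb_trend = 0.120, 0.124   # degC/decade, illustrative

print(f"model amplification:    {model_300mb_trend / model_surface_trend:.0%}")  # ~200%
print(f"observed amplification: {balloon_300mb_trend / obs_surface_trend:.0%}")  # ~103%
```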
Model results with large history match errors should not be used for formulating public policy. A model without a good history match is useless and there can be no confidence in its projections. The lack of a history match in the Canadian model output shows that the modeling team has ignored a fundamental requirement of computer modeling.
A global map of the near-surface air temperature from the model for April 2013 is here.
Anti-Information
Patrick Michaels and Paul “Chip” Knappenberger compared the model output to actual 20th century temperature data to determine what portion of the actual data can be explained by the model. They wrote, “One standard method used to determine the utility of a model is to compare the “residuals”, or the differences between what is predicted and what is observed, to the original data. Specifically, if the variability of the residuals is less than that of the raw data, then the model has explained a portion of the behavior of the raw data.”
In an article here, they wrote, “The differences between the predictions and the observed temperatures were significantly greater (by a factor of two) than what one would get just applying random numbers.” They explained that a series of random numbers contains no information. The Canadian climate model produces results that are much worse than no information, which the authors call “anti-information”.
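A sketch of that residual test, with invented series standing in for the actual model output and temperature record, might look like the following. The point is simply that a useful model leaves residuals that vary less than the raw data, while residuals worse than those from a random "prediction" are anti-information.

```python
# Sketch of the residual test Michaels and Knappenberger describe. All series
# below are placeholders, not the actual CanESM2 or observed data.
import numpy as np

rng = np.random.default_rng(2)
obs         = np.cumsum(rng.normal(0.005, 0.1, 100))     # placeholder observed anomalies
model_pred  = obs + rng.normal(0.02, 0.2, 100)            # placeholder model prediction
random_pred = rng.normal(obs.mean(), obs.std(), 100)      # "prediction" of pure random numbers

print(f"variance of raw data:           {obs.var():.3f}")
print(f"variance of model residuals:    {(obs - model_pred).var():.3f}")
print(f"variance of random residuals:   {(obs - random_pred).var():.3f}")
# a useful model needs residual variance below the raw-data variance;
# residuals larger than the random case are "anti-information"
```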
The Canadian Model Contributes to Fuel Poverty
The results from the Canadian climate model were used in a U.S. Global Change Research Program report provided to the US Environmental Protection Agency to justify regulating CO2. The authors of that report were told that the Canadian climate model produces only anti-information. They confirmed this fact, but published their report unchanged. The Canadian government has indicated it will follow the lead of the U.S.A. in regulating CO2 emissions.
The Canadian climate model is also used by the IPCC to justify predictions of extreme anthropogenic warming despite the fact that the model bears no resemblance to reality. As does the climate model, the IPCC ignores most natural causes of climate change and misattributes natural climate change to greenhouse gas emissions. Here is a list of 123 peer-reviewed papers published from 2008 to 2012 on the solar influence on climate that were ignored by the IPCC in the fifth assessment report.
Climate alarmism based on climate models that don’t work has so far cost the world $1.6 trillion in a misguided and ineffective effort to reduce greenhouse gas emissions. These efforts have caused electricity prices to increase dramatically in Europe, causing fuel poverty and putting poor people at risk. High fuel costs and cold winter weather are blamed for 30,000 excess deaths in Britain last year. Europe’s energy costs have increased by 17% for consumers and 21% for industry in the last four years. The Canadian climate model’s failures have contributed to this misery.
Canadian politicians and taxpayers need to ask why we continue to fund climate models that can’t replicate the historical record and produce no useful information.
Ken Gregory, P.Eng.
Ten years of providing independent
climate science information
Notes:
1. The Canadian Earth System Model CanESM2 combines the CanCM4 model and the Canadian Terrestrial Ecosystem Model, which models the land-atmosphere carbon exchange. Table 9.5 of the IPCC Fifth Assessment Report Climate Change 2013 shows that the CanESM2 transient climate sensitivity is 2.4 °C (for doubled CO2). The 90% certainty range of the transient climate sensitivity across the 30 models is 1.2 °C to 2.4 °C.
2. The Canadian climate model CanESM monthly data was obtained from the KNMI Climate Explorer here. The satellite data was obtained from the University of Alabama in Huntsville here (LT) and here (MT), and from Remote Sensing Systems here (LT) and here (MT). The radiosonde weather balloon data was obtained from the NOAA Earth System Research Laboratory here. The global surface data is from the HadCRUT4 dataset prepared by the U.K. Met Office Hadley Centre and the Climate Research Unit of the University of East Anglia, here. The HadCRUT4 tropical data (20 N to 20 S) was obtained from KNMI Climate Explorer. An Excel spreadsheet containing all the data, calculations and graphs is here.
3. The average global weather balloon trend over all pressure levels from 1000 mbar to 300 mbar, weighted by the LT satellite temperature weighting functions, is only 3% greater than the balloon trend at 1000 mbar, confirming that the LT satellite trend is representative of the near-surface temperature trend. The weighted average tropical weather balloon trend is 3% greater than the average of the 1000 mbar balloon trend and the surface station trend. The temperature weighting functions for land and oceans were obtained from the Remote Sensing Systems website. The average global and tropical balloon trends over all pressure levels, weighted by the mid-troposphere (MT) satellite temperature weighting functions, are 13% and 4% greater than the balloon trends at 400 mbar. If the MT satellite trends were adjusted by these factors to correspond to the 400 mbar pressure level, the global and tropical satellite trend adjustments would be -0.017 °C/decade and -0.004 °C/decade, respectively. Since the MT satellite trends are already less than the 400 mbar balloon trends, no adjustments were applied.
=============================
PDF version of this article is here
If Weaver isn’t visible or audible in Lotusville ( Victoria ), find out where the most fashionable activists are being arrested for anti-pipeline hijinx and I think you will find Dr. Weaver.
This is a good start. Now repeat this analysis for ALL of the models that contribute to CMIP5, one at a time. Note well that the treatment of the mean in the graph above IS a legitimate usage of statistics, as the five model runs are drawn from an actual statistical ensemble of outcomes of THIS model given a Monte Carlo (random) perturbation of the initial conditions within the general range of the expected errors in the inputs.
One thing that they (sadly) did not do is a formal computation of sigma (from the data) relative to the mean, which would permit us to apply an actual hypothesis test to the model result with the null hypothesis “this model is correct”. If the model is correct, then the probability of getting “reality” given the model is, eyeballing the data only, easily less than 0.01, and I wouldn’t be surprised if the failure is 3σ by the end, less than 0.001. We would be justified in rejecting this model as it isn’t even close to reality; it fails well beyond the usual 95/5% confidence level (which I don’t think much of anyway), out there at the 99% or 99.9% level. If the model is correct, we are forced to conclude that the current neutral temperature behavior that has persisted for the last 15+ years is literally a freak of nature, so unlikely that it would have occurred in only 1 future time evolution in 1,000 started from the approximately correct initial conditions.
Of course this is almost certainly untrue. A much more sensible thing to do is invert the order of importance: assume that nature did what nature is most likely to do, so that the systematic deviation from this in even a small ensemble of samples is rather unlikely given a correct model.
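For what it is worth, the hypothesis test being described can be sketched in a few lines. The run values and the observation below are invented, since the actual 2012 anomalies are not tabulated in the post.

```python
# Sketch of the test described above: estimate sigma from the spread of the
# five runs, then ask how improbable the observed value is if the model were
# correct. All numbers are invented placeholders.
import numpy as np
from math import erf, sqrt

runs_2012 = np.array([0.9, 1.3, 1.0, 1.2, 1.1])  # placeholder model anomalies, degC
observed_2012 = 0.45                              # placeholder observed anomaly, degC

mu, sigma = runs_2012.mean(), runs_2012.std(ddof=1)
z = (observed_2012 - mu) / sigma
p_one_sided = 0.5 * (1.0 + erf(z / sqrt(2.0)))    # P(value this low | model correct)
print(f"z = {z:.1f}, one-sided p = {p_one_sided:.2g}")
```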
The big question then is, why are the results from this model included in the spaghetti graph of figure 1.4 in the SPM or anywhere else in the report unless it were to point out its failure? All it does is shift the “mean model performance” up (as if this has some predictive value when it is in fact meaningless) and define an improbably high upper bound for the range of possible future temperatures.
So the correct thing to do is not only to repeat the analysis above for every model in CMIP5 or used in AR5, but then to reject the models that fail an elementary hypothesis test, allowing for the fact that in 30 or so tries the criterion for failure needs to be a lot more stringent, and re-publish all of the figures and numbers with the failed models removed.
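One standard way to make the criterion more stringent is a Bonferroni-style correction, offered here as an illustration rather than as the commenter's prescription: divide the per-test significance level by the number of models tested.

```python
# Bonferroni-style adjustment for testing ~30 models at once: to keep the
# family-wise false-rejection rate near 5%, each individual test must pass a
# much stricter threshold. Illustrative only.
alpha_overall, n_models = 0.05, 30
alpha_per_model = alpha_overall / n_models
print(f"per-model significance threshold: {alpha_per_model:.4f}")  # ~0.0017
```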
This would almost instantly remove all cause for alarm — completely eliminate any possibility of a “catastrophe”, for starters, for example — and replace it with a healthy “concern” that global warming is at least moderately likely to continue erratically for the rest of the century, if the climate does not perversely decide to cool instead in spite of increased CO2. It would also force climate scientists to look carefully at the features of the most successful models (which are obviously going to be the ones predicting the least warming) to begin to get an empirical idea of how much climate sensitivity has been overestimated. Yes, even AR5 is dropping climate sensitivity, but not nearly enough. Indeed, it is hard to see what would be enough. It’s currently difficult to positively ascertain whether or not net feedback from increased CO2 is zero or even negative, so that total climate sensitivity could be close to zero and still be consistent with the data.
rgb
Well said Robert. It is not as if there is only one model to choose from. The problem has been there was only one alarming ECS value. Now that it is going the way of the dodo, we are left with the un-alarming magnitude of lesser fowls.
And to think I just re-read this from John Brignall’s “Numberwatch”:
The law of computer models
The results from computer models tend towards the desires and expectations of the modellers.
Corollary
The larger the model, the closer the convergence.
Leave it to Mosher to basically make the comment (to boil it down to brass tacks) “A crappy model gives more useful information than no model”.
As the author of this piece has shown, “anti-information” is far worse than no information, so in THIS PARTICULAR CASE, no information would, in fact, be better.
Certainly, in Mosher’s example of military planning, there are indeed cases where hindcasting the model would make no sense whatsoever, but we aren’t talking military planning and strategy here, we are talking climate, and we have a pretty good idea of what the climate has been for the past 150 years, and we know that it RELATES TO how the climate will be 150 years from now.
I am willing to buy the idea that aerial warfare 150 years ago looks nothing like aerial warfare now, which also looks nothing like aerial warfare 150 years from now. I am NOT willing to buy the idea that climate 150 years ago, climate now, and climate 150 years from now bear no relationship to each other.
– – – – – – – –
Jquip,
Hey, nice to get a comment.
Did your parody evolve from a paraphrase of a Lewandowsky & Cook pseudo-science paper? It seems similar to their intellectual toxic waste.
Actually, that was pretty well done. I enjoyed it. Thanks.
John
Guess why the Canadian Model star scientist has turned politician and op-ed writer in the Globe & Mail?
“The climate model assumes that water vapor, the most important greenhouse gas, would increase in the upper atmosphere in response to the small warming effect from CO2 emissions.”
Why is such an assumption made at all? Why isn’t the water vapor content a quantity that is implicitly solved for in the governing Navier-Stokes equations?
Do they use a constitutive relationship which ties CO2 to water vapor? That would just be a giant fudge knob!
I suggest studying how to model using the Battle Of Britain. There is plenty of historical data about the designs of British and German aircraft, the constraints, and how engagements worked out. Game makers have. And done it well.
Models of F-22 combat that had never yet happened did not show what would happen. But they were useful; they framed the arguments, quashed some speculations, refuted some scenarios, and guided analysts.
The reader has already spotted one big difference between modeling aircraft and modeling climate. The numbers.
The speeds of the planes, their range, rate of climb, etc. are known. But we don’t know or agree about the numbers for climate; even the temperatures for 1935 in Oklahoma seem to be subject to change.
KTWO
How does one model in the human element of aerial combat? Factors like eyesight and reaction times?
Just b/c a plane can do x – mechanically does not mean that x gets done b/c of human factors.
Every time a scientist quotes equations in response to doubts, as if a mechanical, isolated reaction represents the entire atmosphere with all the thousands of other influences, they are closing their eyes. You don’t need equations, theories, absorption spectra, etc. to read a damned thermometer. If it is not rising, then adjust your bloody theories; don’t claim the heat is hiding somewhere but will turn up if we wait long enough, and don’t keep spending even more to stop something which. has. already. stopped. (if it ever really even started, that is).
Dr. Weaver, along with his computer models, consulted with the provincial and municipal governments and advised them that sea level rise (SLR) will be around 1 m.
The engineering study based on the computer models recommends that the Richmond (greater Vancouver) dykes be upgraded to the tune of $100,000,000.00.
Real dollars for a problem that may not exist.
The UK Prime Minister has just announced that the government will rein back on green energy taxes. He should use the conclusions from this paper, to counter the shrill arguments of his Liberal Democrat partners. But I doubt that he will.
This model does particularly badly over the last few decades. Others do better. You can see a comparison of all CMIP5 runs to global land/ocean temps (NCDC, GISTemp, HadCRUT4) here: http://i81.photobucket.com/albums/j237/hausfath/globalmodelobscomps1880-2100_zps38674af4.png
One thing to note is that surface temperatures are not the only metric of interest; precipitation is also useful to evaluate. That said, there does need to be more critical evaluation of models based on the physics they include and the accuracy of their hindcast/forecast. Simply lumping all models together into a multi-model mean in the name of “model democracy” is shortsighted.
What’s all the fuss about?
It is just another climate computer model with pre-determined results, as required by the Canadian climate establishment for the continuation of funding.
The real question is: Are there any climate models without this bias?
RokShox says: October 24, 2013 at 12:31 pm
The amount of water vapor in the lower atmosphere is determined by precipitation systems and the Clausius–Clapeyron relation. Air in clouds and immediately next to the ocean surface is at or near 100% relative humidity, so as temperatures increase, the absolute humidity there also increases. The average absolute humidity between the clouds and the ocean surface also increases with increasing temperatures.
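As a back-of-the-envelope illustration of the Clausius–Clapeyron behaviour (using the common Magnus approximation, which is not necessarily the formula CanESM2 uses), saturation vapour pressure rises roughly 6-7% per degree at typical surface temperatures:

```python
# Magnus approximation to saturation vapour pressure over water. This is a
# textbook formula used for illustration, not the parameterization in CanESM2.
import math

def saturation_vapour_pressure_hpa(t_celsius):
    return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

e15 = saturation_vapour_pressure_hpa(15.0)
e16 = saturation_vapour_pressure_hpa(16.0)
print(f"15 degC: {e15:.2f} hPa, 16 degC: {e16:.2f} hPa, ratio {e16 / e15:.3f}")
# ratio ~1.066: air at 100% relative humidity holds ~6-7% more water per degC of warming
```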
The humidity in the upper atmosphere, above most clouds, is controlled by other processes that are not resolved by climate models. Susan Solomon wrote about the lower stratosphere, and her point also applies to the upper troposphere.
Solomon, a lead IPCC author, is saying the models are crap. The model assumes various parameters to control upper atmosphere water vapor, rather than simulating the water vapor response there directly.
Upper atmosphere water vapor is important because as reported in a previous guest post
http://wattsupwiththat.com/2013/03/06/nasa-satellite-data-shows-a-decline-in-water-vapor/
“A water vapor change in the 300-200 mb layer has 81 times the effect on OLR than the same change in the 1013-850 mb near-surface layer.” as displayed in my graph here;
http://www.friendsofscience.org/assets/documents/FOS%20Essay/OLR_PWV_bar.jpg
Unprecedented cherry picking from GRL…
http://onlinelibrary.wiley.com/doi/10.1002/2013GL057188/abstract
Abstract
[1] Arctic air temperatures have increased in recent decades, along with documented reductions in sea ice, glacier size, and snowcover. However, the extent to which recent Arctic warming has been anomalous with respect to long-term natural climate variability remains uncertain. Here we use 145 radiocarbon dates on rooted tundra plants revealed by receding cold-based ice caps in the Eastern Canadian Arctic to show that 5000 years of regional summertime cooling has been reversed, with average summer temperatures of the last ~100 years now higher than during any century in more than 44,000 years, including peak warmth of the early Holocene when high latitude summer insolation was 9% greater than present. Reconstructed changes in snow line elevation suggest that summers cooled ~2.7 °C over the past 5000 years, approximately twice the response predicted by CMIP5 climate models. Our results indicate that anthropogenic increases in greenhouse gases have led to unprecedented regional warmth.
==
Another hockey stick cut and paste?
Go Canucks!! says: October 24, 2013 at 12:57 pm
The staff of the Canadian Centre for Climate Modelling and Analysis is listed here;
http://www.ec.gc.ca/ccmac-cccma/default.asp?lang=En&n=EF40AED9-1
Dr. Weaver is not listed. Did he recently leave the centre?
The citizens of British Columbia are subject to a high carbon tax to avoid the dangers of sea level rise. Here is my graph of sea level rise on the B.C. coast.
http://www.friendsofscience.org/assets/documents/FOS%20Essay/Sea_Level_Canada_West.jpg
The graph shows the average monthly sea level of 10 tide gauge stations on the B.C. coast. The black line is the linear best fit to the data. Over the period 1973 to 2011 the average sea level has declined at 0.5 mm/year.
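For anyone wanting to reproduce that kind of fit, the calculation is just an ordinary least-squares line through the monthly values; the file name and column layout below are hypothetical.

```python
# Sketch (hypothetical file name and columns) of fitting a linear trend to a
# monthly mean-sea-level series, as done for the B.C. tide-gauge average above.
import numpy as np

# columns assumed: decimal year, monthly mean sea level in mm
data = np.loadtxt("bc_monthly_msl.csv", delimiter=",")   # hypothetical file
year, msl_mm = data[:, 0], data[:, 1]

slope_mm_per_year, _ = np.polyfit(year, msl_mm, 1)
print(f"sea-level trend: {slope_mm_per_year:+.2f} mm/year")  # comment above reports about -0.5 mm/yr
```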
Steven Mosher – You say your car’s miles-to-empty model is always wrong but useful. That’s because the amount by which it is wrong is small compared with the total and with the accuracy that you actually need, ie, it is “information” not “anti-information”. Similarly, my watch is always wrong, but it is useful, as is our local weather forecast. In your “bomb” example, you don’t use the TacBrawler(“TB”) model, you use TB+1. In all these examples, there actually has been historical data telling us how useful they are. So, instead of the climate model in question, CanESM, should we use say CanESM-0.2pd or CanESM/3? (Doug Proctor and Solomon Green make similar suggestions). The timescales shown are probably too short to be sure, but eyeballing the charts suggests that the model will be pretty useless regardless of how it is used. For climate models, some sort of match to periods such as 1940-70 and 2000+ is needed. Because today’s pattern seems to be a repeat of warming periods in the further past such as the MWP, Roman and Minoan, the models also need to be able to reproduce something of those periods, and the cooler periods between them, too. I am pretty sure that not one single climate model in IPCC use today can do that.
RokShox says:
October 24, 2013 at 12:31 pm
“The climate model assumes that water vapor, the most important greenhouse gas, would increase in the upper atmosphere in response to the small warming effect from CO2 emissions.”
___________
Not so. According to Andrew Weaver, “Every one knows – that water vapour in the atmosphere will condense, form clouds and will make rain – thus it is treated as a CONSTANT in (our) climate models.” This statement was given following his presentation at UVic to Chemistry faculty, students and members of the Chemical Institute of Canada. It was in response to my question, “… why did you only talk about CO2 and not once mention water vapour?” I have to say that his flippant retort was unprofessional, to say the least.
Thanks Ken.
“The model assumes various parameters to control upper atmosphere water vapor”
Amazing that such a key physical effect – water vapor feedback – is not treated as an implicit consequence of the underlying physics, but is treated as a parameterization. Parameters = knobs.
Am I just dense? It appears to me that the tweaking done to the model to match/tune model output to observations clearly demonstrates the wrong dials were tweaked since the “don’t touch the dials anymore” projection phase no longer matches the current observations. This seems a simple conclusion, readily made, yet no climate researcher has come forth to state the obvious. So either I’m dense, or the researchers are. It can’t be both.
Ken Gregory says:
October 24, 2013 at 1:36 pm
The citizens of British Columbia are subject to a high carbon tax to avoid the dangers of sea level rise. Here is my graph of sea level rise on the B.C. coast.
http://www.friendsofscience.org/assets/documents/FOS%20Essay/Sea_Level_Canada_West.jpg
The graph shows the average monthly sea level of 10 tide gauge stations on the B.C. coast.
Over the period 1973 to 2011 the average sea level has declined at 0.5 mm/year.
_____________________
The high tax, she works, eh?
Ah, Mosh, reality strikes!
During the Falklands war the Brits wanted to close Stanley airport. If memory serves, they used five Vulcan bombers, two of them for radar suppression and three to deliver old WW2 iron bombs to crater the runway.
The bombs went off just fine. But only one hit the runway. One crater. Quickly repaired, too. All the rest were mis-aimed, digging beautiful craters off to the side.
That’s reality, Mosh. Put it in your model and stick it….well, never mind.
BTW: That was an extraordinary operation, well worth googling. It had everything going for it: imagination, daring, skill, great planning. Problem is, it failed.
A miss is as good as a mile in climate science. Billions are being spent on a 1 degree change in average global temperature. Maybe even trillions. Mosh, you must admit that at the very minimum that same money COULD have been spent on better health care and cleaner water in most if not all places on Earth that it was needed. And the improvement could have been sustained.