“We know there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns—the ones we don’t know we don’t know” — Donald Rumsfeld
Ed Zuiderwijk, PhD
An Observation
There is something strange about climate models: they don’t converge. What I mean by that I will explain on the basis of historical determinations of what we now call the ‘Equilibrium Climate Sensitivity’ (ECS), also called the ‘Charney Sensitivity’ (ref 1), defined as the increase in temperature at the bottom of the Earth’s atmosphere when the CO2 content is doubled (after all feedbacks have worked themselves through). The early models by Plass (2), Manabe & co-workers (3) and Rowntree & Walker (4) in the 1950s, 60s and 70s gave ECS values from 2 degrees Centigrade to more than 4C. Over the past decades these models have grown into a collection of more than 30 climate models brought together in the CMIP6 ensemble that forms the basis for the upcoming AR6 (‘6th Assessment Report’) of the IPCC. However, the ECS values still cover the interval 1.8C to 5.6C, a factor of 3 spread in results. So after some four decades of development, climate models have still not converged on a ‘standard climate model’ with an unambiguous ECS value; rather the opposite is the case.
What that means becomes clear when we consider what it would mean if, for instance, the astrophysicists found themselves in a similar situation with, for example, their stellar models. The analytical polytropic description of the early 20th century gave way years ago to complex numerical models that enabled the study of stellar evolution – caused by changing internal composition and the associated changes in energy generation and opacity – and which also in 1970, when I took my first steps in the subject, offered a reasonable explanation of, for example, the Hertzsprung-Russell diagram of star populations in star clusters (5). Although always subject to improvement, you can say that those models have converged to what could be called a canonical star model. The different computer codes for calculating stellar evolution, developed by groups in various countries, yield the same results for the same evolutionary phases, which also agree well with the observations. Such convergence is a hallmark of the progress of the insights on which the models are based, through advancement of understanding of the underlying physics and testing against reality, and is manifest in many of the sciences and techniques where they are used.
If the astrophysicists were in the same situation as the climate model makers, they would still be working with, for example, a series of solar models that predict a value of X for the surface temperature, give or take a few thousand degrees. Or that, in an engineering application, a new aircraft design should have a wing area of Y, but it could also be 3Y. You don’t have to be a genius to understand that such models are not credible.
A Thesis
So much for my observation. Now to what it means. I will present my analysis here in the form of a thesis and defend it with an appeal to elementary probability theory and a little story:
“The fact that the CMIP6 climate models show no signs of convergence means that, firstly, it is likely that none of those models represent reality well and, secondly, it is more likely than not that the true ECS value lies outside the interval 1.8-5.6 degrees.”
Suppose I have N models that all predict a different ECS value. Mother Nature is difficult, but she is not malicious: there is only one “true” value of ECS in the real world; if that were not the case, any attempt at a model would be pointless from the outset. Therefore, at best only one of those models can be correct. What is then the probability that none of those models is correct? We know immediately that N-1 models are not correct and that the remaining model may or may not be correct. So we can say that the a priori probability that any model is incorrect is [(N-1+0.5)/N] = 1–0.5/N. This gives a probability that none of the models is correct of (1-0.5/N)^N, about 0.6 for N>3. So that’s odds of 3 to 2 that all models are incorrect; this 0.6 is also the probability that the real ECS value falls outside the interval 1.8C-5.6C.
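As a minimal numerical check, the expression above can be evaluated directly (a Python sketch; nothing beyond the formula itself is assumed, and the same expression applies to the effective number of independent models M discussed below):

```python
# Probability that none of N equally credible models is correct, given an
# a priori probability of 1 - 0.5/N that any single model is incorrect.
def p_none_correct(n: int) -> float:
    return (1.0 - 0.5 / n) ** n

for n in [4, 8, 16, 31]:
    print(f"N = {n:2d}: P(all incorrect) = {p_none_correct(n):.3f}")
# Output climbs from ~0.59 towards exp(-0.5) ~ 0.61 as N grows:
# roughly 3-to-2 odds that every model in the ensemble is wrong.
```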
Now I already hear the objections. What, for example, does it mean that a model is ‘incorrect’? Algorithmic and coding errors aside, it means that the model may be incomplete, lacking elements that should be included, or, on the other hand, that it is over-complete, with aspects that do not belong in it (an error that is often overlooked). Furthermore, these models have an intrinsic variation in their outcome, and they often contain the same elements, so those outcomes are correlated. And indeed the ECS results completely tile the interval 1.8C-5.6C: for every value of ECS between the given limits, models can be found that produce that result. In such a case one considers the effective number of independent models M represented by CMIP6. If M = 1 it means that all models are essentially the same and the interval 1.8C-5.6C is an indication of the intrinsic error. Such a model would be useless. More realistic is an M ~ 5 to 9, and then you come back to the foregoing reasoning.
What rubs climatologists most is my claim that the true ECS is outside the 1.8C-5.6C interval. There are very good observational arguments that 5.6C is a gross overestimate, so I am actually arguing that the real ECS is probably less than 1.8C. Many climatologists are convinced that that is instead a lower limit. Such a conclusion is based on a fallacy, namely the premiss that there are no ‘known unknowns’ and especially no ‘unknown unknowns’, ergo that the underlying physics is fully understood. And, as indicated earlier, the absence of convergence of the models tells us that precisely that is not the case.
A Little Story
Imagine a parallel universe (theorists aren’t averse to that these days) with an alternate Earth. There are two continents, each with a team of climatologists and their models. The ‘A’ team on the landmass Laputo has 16 models that predict an ECS interval 3.0C to 5.6C, a result, if correct, with major consequences for the stability of the atmosphere; the ‘B’ team at Oceania has 15 models that predict an ECS interval 1.8C to 3.2C. The two teams are unaware of each other’s existence, perhaps due to political circumstances, and are each convinced that their models set hard boundaries for the true value of the ECS.
That the models of both teams give such different results is because those of the A-team have ingredients that do not appear in those of the B-team and vice versa. In fact, the climatologists on both teams are not even aware of the possible existence of such missing aspects. After thorough analysis, both teams write a paper about their findings and send them, coincidentally simultaneously, to a magazine published in Albion, a small island state renowned for its inhabitants’ strong sense of independence. The editor sees the connection between the two papers and decides to put the authors in contact with each other.
A culture shock follows. The lesser gods start a shouting match. Those in the A team call the members of the B team ‘deniers’, who in their turn shout ‘chickens’. But the more mature of both teams realize they’ve had a massive blind spot about things the other team knew but they themselves did not. That those ‘unknowns’ had firmly bitten both teams in the behind. And the smartest realize that the combined 31 models now form a new A team to which the foregoing applies a fortiori: there could arise a new B team somewhere with models that predict ECS values outside the 1.8C-5.6C range.
Forward Look
So it may well be, no, it is likely that once the underlying physics is properly understood, climate models will emerge that produce an ECS value considerably smaller than 1.8C. What could such a model look like? To find out we look at the main source of the variation between the CMIP6 models: the positive feedback on water vapour (AR4, refs 6,7). The idea goes back to Manabe & Wetherald (8) who reasoned as follows: a warming due to CO2 increase leads to an increase in the water vapour content. Water vapour is also a ‘greenhouse gas’, so there is extra warming. This mechanism is assumed to ‘amplify’ the primary effect of CO2 increase. Vary the strength of the coupling and add the influence of clouds and you have a whole series of models that all predict a different ECS.
There are three problems with the original idea. The first is conceptual: the proposed mechanism implies that the abundance of water vapour is determined by that of CO2 and that no other regulatory processes are involved. What then determined the humidity level before the CO2 content increased? The second problem is the absence of an observation: one would expect the same feedback on initial warming due to a random fluctuation of the amount of water vapour itself, and that has never been established. The third problem lies in the implicit assumption that the increased water vapour concentration significantly increases the effective IR opacity of the atmosphere in the 15 micron band. That is not the case. The IR absorption by water vapour is practically saturated, which makes the effective opacity, a harmonic mean, insensitive to such variation.
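To illustrate the third point numerically, here is a toy two-band calculation (Python; the opacity values are invented purely for illustration and are not taken from any radiative transfer code): a harmonic, Rosseland-style mean opacity is controlled by the most transparent spectral regions, so raising the opacity of an already saturated band barely moves the mean.

```python
import numpy as np

# Toy spectrum: one nearly saturated absorption band and one 'window',
# given equal spectral weight. All numbers are illustrative only.
kappa = np.array([50.0, 0.5])     # band opacity vs window opacity
weight = np.array([0.5, 0.5])

def harmonic_mean(k, w):
    return 1.0 / np.sum(w / k)

base = harmonic_mean(kappa, weight)
doubled_band = harmonic_mean(np.array([100.0, 0.5]), weight)  # more vapour in the band
print(f"{base:.3f} -> {doubled_band:.3f}")
# The effective (harmonic-mean) opacity changes by about half a percent,
# even though the saturated band's opacity was doubled.
```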
Hence, the correctness of the whole concept can be doubted, to say the least. I therefore consider models in which the feedback on water vapour is negligible (and negative if you include clouds) as much more realistic. Water vapour concentration is determined by processes independent of CO2 abundance, for instance optimal heat dissipation and entropy production. Such models give ECS values between 0.5C and 0.7C. Not something to be really concerned about.
References
- J. Charney, ‘Carbon dioxide and climate: a scientific assessment’, Washington DC: National Academy of Sciences, 1979.
- G. N. Plass, ‘Infrared radiation in the atmosphere’, American Journal of Physics, 24, 303-321, 1956.
- S. Manabe and F. Möller, ‘On the radiative equilibrium and heat balance of the atmosphere’, Monthly Weather Review, 89, 503-532, 1961.
- P. Rowntree and J. Walker, in ‘Carbon Dioxide, Climate and Society’, IIASA Proceedings 1978 (ed. J. Williams), pages 181-191, Pergamon, Oxford, 1978.
- http://community.dur.ac.uk/ian.smail/gcCm/gcCm_intro.html
- V. Eyring et al., ‘The CMIP6 landscape’, Nature Climate Change, 9, 727, 2019.
- M. Zelinka, T. Myers, D. McCoy, et al., ‘Causes of higher climate sensitivity in CMIP6 models’, Geophysical Research Letters, 47, e2019GL085782, 2020. https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2019GL085782
- S. Manabe and R. Wetherald, ‘Thermal equilibrium of the atmosphere with a given distribution of relative humidity’, J. Atmos. Sci., 24, 241-259, 1967.
After a short search on “Climate Models Clouds” this comes up:
Clouds, Arctic Crocodiles and a New Climate Model January 2020
From NASA no less and this quote:
“It stands to reason that any computer model that hopes to explain past climates or forecast how ours will change would have to take clouds into account. And that’s primarily where climate models fall short.”
Words mean things and the quote says “primarily” which means clouds aren’t the only shortcoming.
So why is so much stock put with climate models?
Above I mention the grid-scale resolution actually needed to model clouds: tens to hundreds of meters rather than hundreds of kilometers, as now.
There presently isn’t enough computing power in Christendom to achieve this, or in any other and all -doms.
Convergence is just some 40 years in the future — and always will be.
Might it be that ECS is not a constant, but instead varies with other parameters such as the global mean temperature or greenhouse gas concentration?
go here: https://climate.nasa.gov/vital-signs/carbon-dioxide/
The canard that CO2 is well-mixed globally doesn’t jibe with what NASA shows at this site. Not only is the CO2 not well-mixed, it shows the US, Siberia, and China as having very high CO2 concentrations. Yet the US and Siberia are two regions that have been seeing cooling, not warming. How can CO2 then be the control knob?
Tim
I’m afraid that “well-mixed,” like beauty, is in the eye of the beholder. I have never seen a formal definition of “well-mixed,” and probably never will. As commonly used, it is as ambiguous as “catastrophic warming” or “becoming more acidic,” which is how the game is played by alarmists.
Is the AR6 equivalent of Table 9.5 in the AR5 report publicly available yet? It has the ECS and TCR results from individual models.
C’mon man, all the talking doomsayer “climate heads” in the news all use RCP 8.5.
That’s got to be accurate, right? >sarc
And the saying that “some models are useful” does NOT apply to climate models
Their use is actually a REAL DANGER to society and the world, just like using a “wrong” engineering model would be.
“Believing” these erroneous models is doing GREAT DAMAGE to many parts of global infrastructure and badly affecting many societies, especially those in the developing world.
It is causing the delay of reliable electricity infrastructure to those countries while at the same time causing great UNRELIABILITY and INSTABILITY in the electricity supplies of developed societies
Not to mention the political, societal and civil unrest it is creating.
Any real scientist would be able to identify the flaws in the Climate Models. The fact that they aren’t fixing the obvious problems pretty much proves they benefit from the flawed output.
1) The change in W/M^2 per unit of CO2 shows a log decay, the climate models assume a linear relationship
2) CO2 isn’t related to Temp, W/M^2, the climate models model CO2 in a linear relationship and not W/M^2 in a log decay relationship
3) The only mechanism by which CO2 can affect climate change is through the effect of 15 micron LWIR. 15 Micron LWIR has a black body temp of -80C, and won’t warm water
4) The oceans are warming, and CO2 and 15 micron LWIR won’t warm water
5) The Poles, N and S Hemi. Land and Ocean, and lower 1/3rd, Equator, and upper 1/3rd all have identical CO2 yet different temp trends. CO2 of equal amounts can’t cause the temperature differentials
6) The UHI and Water Vapor corrupt the temperatures. If you control for the UHI and Water Vapor and isolate the impact of CO2 on temperature you find that CO2 doesn’t cause warming. The models focus on warming.
Simply look at a desert location to isolate the impact of CO2 on temperatures. You will see that CO2 doesn’t cause warming.
Alice Springs (23.8S, 133.88E) ID:501943260000 https://data.giss.nasa.gov/cgi-bin/gistemp/stdata_show_v3.cgi?id=501943260000&dt=1&ds=5
Warmists never want to deal with the radiation H2O absorbs directly from the sun in the near IR spectrum. It is substantial. I don’t know for sure, but a lot of them mistake this for “feedback” from CO2. One only has to look at this to see something is out of whack in the “radiation budget” because this is never accounted for.
Jim
If this is your personal graphic, there are a couple of typos that should be corrected:
Readiation and Wavelenght
Clyde, not my graphic. Found on Bing images.
I have an accurate climate model grounded in sound physics. It is light years ahead of any other climate model, all of which are founded on a fairy tale.
The average surface temperature of the Earth is set by thermostatic limits at the poles of -2C and the tropical warm pools at 30C. Hence:
Average Global Surface temperature = {30 + (-2)}/2 = 14C or 57F
Glaciation occurs when the Atlantic warm pool fails to reach the 30C limit annually, resulting in ice accumulation on adjacent land.
Something that has bugged me for a while is finding missing heat in the deep oceans. We are told that a key indicator of global warming is observed in ocean heat uptake. The top 100m of the ocean has reportedly warmed 0.57C in the past 65 years. The top 700m .22C and the top 2000m 0.11C. That means the zone 700 to 2000m has warmed 0.06C.
To conduct enough heat from the surface mixed layer through the top 700 m to cause a 0.06C rise in the 1300 m below would take about 430 years. So the heating of the deep ocean in 65 years requires a different process.
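For a rough sense of scale, a back-of-the-envelope version of that calculation (Python; the conductivity, heat capacity and assumed thermocline gradient are my own round-number assumptions, not the commenter's, and different choices move the answer around, but it stays far longer than the 65-year record):

```python
# How long would molecular conduction alone take to warm the 700-2000 m
# layer by 0.06 C? All inputs below are assumed round numbers.
rho, cp = 1025.0, 3990.0        # seawater density (kg/m3) and heat capacity (J/kg/K)
dT, thickness = 0.06, 1300.0    # target warming (K) and layer thickness (m)
k_water = 0.6                   # molecular thermal conductivity of water (W/m/K)
gradient = 15.0 / 700.0         # assumed mean temperature gradient above 700 m (K/m)

energy_needed = rho * cp * dT * thickness   # J per m^2 of ocean surface
conductive_flux = k_water * gradient        # W per m^2 carried downward by conduction
years = energy_needed / conductive_flux / (3600.0 * 24 * 365)
print(f"~{years:.0f} years")                # several hundred years with these inputs
```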
Given that the thermocline is a function of heat conduction from the top mixed layer and cooling by upward flow from the cool zone, fed from the poles, I determine that the only way to get “rapid” warming at depth is to slow the rate of evaporation so the upward rate of flow is reduced. That is detailed in the attached charts.
To get close to the temperature changes observed in the different layers requires the change in net evaporation rate to occur at least 100 years before the start of the deep ocean record in 1955.
A reduced net evaporation rate is consistent with increase in the area of warm pools, where the surface insolation is reduced and there is convergence of moist air so the net evaporation rate is negative over the warm pools.
This agrees with the increase in salinity seen over the last 400 years. Higher salinity would slow the rate of evaporation.
What about convection, turbulence and up and down-welling?
The title “The Problem with Climate Models” implies only one problem with climate models? Me thinks they have many more problems than just ECS!
An ECS around 0.7 C/doubling will work very nicely I suspect.
That is my finding from 250 years of HadCET, and the findings of Lindzen and Spencer from the AMSU data.
Of course this requires the Svensmark Mechanism to be recognized as a major driver of global temperature. Which would not be popular with lucratively paid government climate scientists, who would then lose their jobs.
(I also think TCR and ECS are fairly close to each other, otherwise the ~60 year cycle wouldn’t be so pronounced in HadCRUT 3 and the AMO.)
I did an exercise looking at Australia’s CSIRO CMIP3 and CMIP5 models over the same region. In the 4 years between the data sets, they actually COOLED the Nino34 region by 0.8C.
Of course this region has had constant temperature for the last 4 decades so the only way you get a rising trend and have the model accurate at time of generation is to cool the past.
…so the only way you get a rising trend and have the model accurate at time of generation is to cool the past.
Easy to do when alarmists are in charge of maintaining historical temperature record. Models hind-casting garbage data are sure to forecast garbage, no matter what ECS they assume.
This is a point I have made MANY times !!
FAKE trend in the hind-cast data => FAKE trend in the forecasting.
Absence of progress is mentioned as one indicative characteristic of a pseudoscience, among others that would also apply to climate change research over-reliant on computer models.
“…none of those models represent reality well …” One of the difficulties appears to be in grasping that the flow of energy gets redirected wrt wavenumber.
“ We know immediately that N-1 models are not correct and that the remaining model may or may not be correct. So we can say that the a priori probability that any model is incorrect is [(N-1+0.5)/N] = 1–0.5/N. This gives a probability that none of the models is correct from (1-0.5/N)^N, about 0.6 for N>3. So that’s odds of 3 to 2 that all models are incorrect”
This is fine and well if considering the outcome as a binary event – absolutely right or absolutely wrong. But surely we can agree that a “true” ECS of 2 degrees would not be functionally distinguishable from, say, a modeled ECS of 2.0001 degrees, even though the modeled result would be wrong, strictly speaking.
Again, you show your TOTAL LACK OF COMPREHENSION of error propagation.
As expected.
Is that your Weekly-FAIL, or do you have more to come. !
One problem with the massively wide range in models is that you can dial in positive and negative feedbacks based on assumptions that the next modeler will disagree with, dialing in their own speculative feedbacks to represent unknowns. 1.8C to 5.6C is an insanely wide range considering we’ve had over 100 years’ worth of surface measurements with CO2 going up and various other factors changing. To think that one model would have triple the warming of another model of the same planet, with what should be the same conditions, tells you something isn’t right.
In models, you have to try to capture everything known accurately and estimate the unknowns using speculation. The range of speculation is what causes the range in sensitivity and wide range in temperature projections that grow to huge values 100 years from now.
But why not give MUCH more weight to the observations on the real planet (recorded empirical data) and just project that trend out and only make slight adjustments to speculative items?
If the trend has been 1.4 deg C/century, assume the trend forward will continue to be very close to 1.4 deg C/century. The trend is your friend.
Even if there were several unknown variables/factors that had an influence on the rate of warming over the last century…………it doesn’t matter because the data is the data and it measured the REAL warming, with every one of those factors…………known and unknown in there. You don’t have to know and represent every factor.
You can be just as ignorant of those same factors going forward…..but using the trend will dial them in automatically. Yes, they could change but if you don’t even know them to begin with, how do you accurately determine how they might change?
You can’t.
The trend and change over the last 100 years has every single element…. knowns and unknowns baked in.
It doesn’t necessarily tell you the sensitivity to CO2 with accuracy because of the unknowns that you don’t know……which means your attempts to accurately project the next 100 years will be flawed if you based them on CO2 sensitivity “speculative guesses”
But using the trend, allows you to project the next 100 years based on that trend that had every single unknown in it by definition.
All these brilliant PhD scientists beating their heads against the wall because they need thousands of mathematical equations to accurately represent the physics of all sorts of dynamics to try to represent the CO2 sensitivity to project the global temperature for the next 100 years.
And the wide range of projections tells you how speculative and unknown the unknowns are.
Then, an operational meteorologist armed with only empirical data that accurately measured the last 100 years, can project it out for the next 100 years and likely get much closer to reality than the vast majority of the climate model projections.
There’s just way too much weighting on modeling equations and guessed at sensitivity and not enough weighting given to the real world/empirical data/observations.
If you look just at the model outputs, after about 20 years they all turn into basically linear trends of equation y = mx + b. Simple projections of a linear trend. And almost all of them have a higher “m” than the real world’s, your 1.4C/century.
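For what it’s worth, the ‘project the trend’ approach described above is essentially a one-liner (a Python sketch on synthetic data; the 1.4C/century figure is taken from the comments, everything else is made up for illustration):

```python
import numpy as np

# Fit y = m*x + b to an anomaly series and extrapolate it a century ahead.
rng = np.random.default_rng(0)
years = np.arange(1921, 2021)
anomalies = 0.014 * (years - 1921) + rng.normal(0.0, 0.1, years.size)  # synthetic, 1.4C/century + noise

m, b = np.polyfit(years, anomalies, 1)
print(f"fitted trend: {m * 100:.2f} C/century")
print(f"projected 2120 anomaly: {m * 2120 + b:.2f} C")
```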
Normally, science looks for convergence to verify a hypothesis. In the “settled science” of Climate Change, divergence is considered verification of the hypothesis.
How many people have taken the trouble to go back and look in detail at the original Manabe and Wetherald (M&W) model and their underlying assumptions? [M&W, 1967] They started by ASSUMING an equilibrium average climate. This idea goes back to Pouillet in 1836 and comes from a fundamental misunderstanding of climate energy transfer [Pouillet 1836]. Conservation of energy for a stable climate on planet earth requires an approximate long term planetary energy balance between the absorbed solar flux and the long wave IR flux returned to space. Using an average solar flux of 1368 W m-2, an albedo (reflectivity) of 0.3 and an illumination area ratio (sphere to disk) of 4, the average LWIR flux is about 240 W m-2. (The exact number depends on satellite calibration). Simple inspection of the CERES IR images gives a value of about 240 ±100 W m-2 [CERES, 2011]. There is NO exact short term energy balance.
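The ~240 W m-2 figure quoted above is straightforward arithmetic on the stated inputs (a quick Python check; only the numbers given in the paragraph are used):

```python
solar_constant = 1368.0  # W/m2, average solar flux quoted above
albedo = 0.3             # reflectivity
area_ratio = 4.0         # sphere-to-disk illumination area ratio

mean_olr = solar_constant * (1.0 - albedo) / area_ratio
print(f"{mean_olr:.0f} W/m2")   # ~239 W/m2 required for a long-term balance
```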
Furthermore, the spectral distribution of the outgoing longwave radiation (OLR) at the top of the atmosphere (TOA) is not that of a blackbody near 255 K. The OLR consists of the upward emission of the LWIR flux from many different levels in the atmosphere. The emission from each level is modified by the absorption and emission of the levels above. The OLR does not define an ‘effective emission temperature’. It is just a cooling flux. There is no 255 K temperature that can be subtracted from an ‘average’ surface temperature of 288 K to give a ‘greenhouse effect’ temperature of 33 K [Taylor, 2006].
Thermal equilibrium means that the rate of heating equals the rate of cooling. The lunar surface under solar illumination is in thermal equilibrium so that the absorbed solar flux is re-radiated back to space as LWIR radiation as it is received. There is almost no time delay. The earth is very different from the moon. It has an atmosphere with IR active species (‘greenhouse gases’), mainly H2O and CO2. It also has oceans that cover about three quarters of the surface. In addition, the period of rotation is also faster, 24 hours instead of 27.3 days. On the real planet earth there are significant time delays between the absorption of the solar flux and the emission of the LWIR flux. This is irrefutable evidence of non-equilibrium energy transfer. Diurnal time delays or phase shifts between the peak solar flux at local noon and the surface temperature response can easily reach 2 hours and the seasonal phase shift at mid latitudes for the ocean surface temperature may reach 8 weeks. This is not new physics. The phase shift for the subsurface ground temperature was described by Fourier in 1824 [Fourier, 1824]. It has been ignored for almost 200 years. Similar non-equilibrium phase shifts are also found in other energy storage devices such as capacitors in AC electronic circuits.
The surface temperature is determined at the surface by the interaction of four main time dependent flux terms with the surface thermal reservoir. These are the absorbed solar flux, the net LWIR emission, the moist convection and the subsurface transport. (This does not include rainfall or freeze/thaw effects). The fluxes are interactive and should not be separated and analyzed independently of each other. A change in surface temperature requires the calculation of the change in heat content or enthalpy of the surface reservoir divided by the local heat capacity [Clark, 2013]. The (time dependent) downward LWIR flux from the lower troposphere to the surface limits the surface cooling by net LWIR emission. In order to dissipate the excess solar heat, the surface warms up until the excess heat is removed by moist convection. This is the real source of the so-called greenhouse effect. The ocean-air and land-air interfaces have different energy transfer properties and have to be analyzed separately. In addition, for the oceans, long range transport by ocean currents is important.
The M&W ‘model’ has nothing to do with planet earth. It was simply a mathematical platform for the development and evaluation of atmospheric radiative transfer algorithms. M&W left physical reality behind as soon as they made their first assumption of an exact flux balance between an average absorbed solar flux and the LWIR flux returned to space. They started with a static air column divided into 9 or 18 layers. The IR species were CO2, H2O and O3 simulated using the spectroscopic constants available in 1967. The surface was a blackbody surface with zero heat capacity. This absorbed all of the incident radiation and converted it to blackbody LWIR emission. To simulate the atmospheric temperature profile they fixed the relative humidity in each air layer. The water vapor concentration therefore changed with temperature as the surface and layer temperatures changed. The model was run iteratively until the absorbed solar flux matched the outgoing LWIR flux. It took about a year of model time (step time multiplied by the number of steps) to reach equilibrium. Actual computation time was of course much less. In 1967, getting such a model to run at all and then reach equilibrium was a major achievement. However, the effects of surface heat capacity, ocean evaporation and convection were ignored. When the CO2 concentration in the M&W model was increased, there was a decrease in the LWIR flux emitted at the top of the atmosphere. In order to reach a new ‘equilibrium state’ the surface temperature and the tropospheric temperatures had to increase. As the temperature increased, the water vapor concentration also increased. This then ‘amplified’ the surface warming produced by the CO2. All of this was a mathematical artifact of the input modeling assumptions. There is no equilibrium climate on planet earth.
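To make the ‘iterate until the fluxes balance’ mechanism concrete, here is a zero-dimensional toy version (Python). It is emphatically not a reconstruction of M&W’s model: a single grey emissivity stands in for the whole radiative calculation, the 3.7 W m-2 figure is the commonly quoted value for a CO2 doubling and is assumed here only for illustration, and no water vapour feedback is included, so the result is just the no-feedback warming step of the chain described above.

```python
# Toy 'equilibrium climate': find the surface temperature at which the
# outgoing flux (emissivity * sigma * T^4) balances the absorbed solar flux.
SIGMA = 5.67e-8        # Stefan-Boltzmann constant, W/m2/K4
ABSORBED = 239.0       # absorbed solar flux, W/m2

def equilibrium_temperature(emissivity: float) -> float:
    return (ABSORBED / (emissivity * SIGMA)) ** 0.25

eps_base = ABSORBED / (SIGMA * 288.0 ** 4)    # emissivity tuned so the baseline T is 288 K
# 'Double CO2': assume the column emits 3.7 W/m2 less to space at fixed temperature,
# i.e. the effective emissivity drops slightly, then let the surface re-equilibrate.
eps_2xco2 = (ABSORBED - 3.7) / (SIGMA * 288.0 ** 4)

t0 = equilibrium_temperature(eps_base)
t1 = equilibrium_temperature(eps_2xco2)
print(f"{t0:.1f} K -> {t1:.1f} K (about +{t1 - t0:.1f} K with no feedbacks)")
```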
Unfortunately the ‘global warming apocalypse’ predicted by the M&W model became a lucrative source of research funds that was too good to give up. Two early climate ‘bandwagons’ were created. First, the radiative transfer algorithms could be improved with better spectroscopic constants and more IR species. Second, a large number of M&W ‘unit’ models could be incorporated into a global circulation model. In addition, everyone needed the biggest and fastest computer available. No one tried to calculate the change in surface temperature from first principles or otherwise independently validate the M&W model. Global warming had been created by model definition. Do not kill the goose that lays the golden eggs. By 1975, M&W had created a ‘highly simplified’ global circulation model that still produced ‘global warming’ and by 1978, eleven more (minor) IR species had been added to the M&W model [M&W, 1975; Ramanathan and Coakley, 1978].
Instead of correcting the equilibrium assumption, three additional invalid assumptions were added to the M&W model by Hansen and his group in 1981 [Hansen et al, 1981]. First, the ‘blackbody surface’ was replaced by a 2 layer ‘slab’ ocean. This was used to add heat capacity and a delayed time response but little else to the ‘model’. The ocean surface energy transfer, particularly the wind driven evaporation (latent heat flux) was ignored. Second, the effect of a ‘doubling’ of the atmospheric CO2 concentration on an ‘equilibrium average climate’ was discussed as though it applied to planet earth. The mathematical warming artifacts created by the equilibrium model were presented as though they were real. On planet earth, the changes in LWIR are far too small to have any effect on surface temperature. Third, the weather station temperature was substituted for the surface or skin temperature. The flux terms interact with the surface. The weather or meteorological surface air temperature (MSAT) is measured in a ventilated enclosure located 1.5 to 2 m above the ground. This was a fundamental ‘bait and switch’ change made to the observables that were ‘predicted’ by the ‘model’ without any change to the model calculations. How did the ‘blackbody surface’ turn into a weather station? Furthermore, one of the real causes of climate change, the Atlantic Multi-decadal Oscillation (AMO) was clearly visible in the temperature plots shown by Hansen et al, but they chose to ignore reality and called these temperature variations ‘noise’. The only change that has been made to the basic equilibrium climate ‘model’ since 1981 was the addition of ‘efficacies’ to the radiative forcing terms by Hansen et al in 2005 [Hansen et al, 2005].
Since the start of the industrial revolution around 1800, the atmospheric concentration of CO2 has increased from about 280 to 400 ppm. This has produced a decrease in the LWIR flux at TOA of approximately 2 W m-2 with a similar increase in the downward LWIR flux to the surface [Harde, 2017]. At present, the average CO2 concentration is increasing by about 2.4 ppm per year, which corresponds to a change in LWIR flux near 0.034 W m-2 per year. The effect of an increase in CO2 concentration on surface temperature has to be determined by calculating the effect of the increase in LWIR flux on the change in heat content of the surface thermal reservoir after a thermal cycle with and without the change in flux. This is simply too small to measure.
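For comparison, the widely used simplified logarithmic forcing expression, delta-F = 5.35 ln(C/C0) W m-2 (Myhre et al.), gives numbers close to those quoted above, though the paragraph’s own figures come from Harde (2017); a quick check in Python (the 410 ppm present-day baseline is my assumption):

```python
import math

def co2_forcing(c_ppm: float, c0_ppm: float) -> float:
    """Simplified logarithmic CO2 forcing, delta-F = 5.35 * ln(C/C0), in W/m2."""
    return 5.35 * math.log(c_ppm / c0_ppm)

print(f"280 -> 400 ppm: {co2_forcing(400.0, 280.0):.2f} W/m2")        # ~1.9 W/m2
print(f"one year at +2.4 ppm: {co2_forcing(412.4, 410.0):.3f} W/m2")  # ~0.03 W/m2
```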
The decrease in LWIR flux at TOA has been turned into a ‘radiative forcing’ and an elaborate climate modeling ritual has been developed to describe the effect of a hypothetical ‘CO2 doubling’ on a fictional equilibrium average climate [Ramaswamy et al, 2019; IPCC, 2013; Hansen, 2005]. In order to understand what really happens on planet earth, the ‘radiative forcing’ has to be converted back into a change in the rate of heating at different levels in atmosphere [Feldman et al. 2008]. For CO2, the ‘radiative forcing’ is a wavelength specific decrease in the LWIR flux in the P and R branches of the main CO2 emission band at TOA, produced by absorption at lower levels in the atmosphere. This results in a slight warming in the troposphere and a cooling in the stratosphere. (There is also a smaller effect for the CO2 overtone bands). For a ‘CO2 doubling’, the maximum warming rate in the troposphere is less than 0.1 K per day [Iacono et al, 2008]. This is simply dissipated by the normal convective motion in the troposphere. There is a very small increase in emission from the H2O band and a small increase in the gravitational potential energy. The lapse rate is not a mathematical function, it is a real vertical motion of the air in the troposphere – upwards and downwards. At an average lapse rate of -6.5 K km-1 a temperature increase of 0.1 K is produced by a descent of 15 m. This is equivalent to riding an elevator down about 4 floors. The dissipation of the radiative forcing is illustrated schematically in Figure 1 (attached). The slight heating effect is illustrated in Figure 2 (attached).
In addition, the LWIR flux in the atmosphere is produced by many thousands of overlapping molecular lines. In the lower troposphere, these are pressure broadened and overlap to produce a quasi-continuum within the main H2O and CO2 absorption emission bands. About half of the downward LWIR flux reaching the surface from the troposphere originates from within the first 100 m layer above the surface and almost all of the downward LWIR flux originates from within the first 2 km layer. Any ‘radiative forcing’ at TOA from a decrease in LWIR flux cannot couple to the surface and cause any kind of temperature change [Clark, 2013].
The global warming in the climate models has been created by ‘tuning’ the models to match the ‘global average temperature anomaly’ such as the HadCRUT4 temperature series from the UK Met. Office [HadCRUT4, 2019]. The climate warming has been produced by a combination of the warming phase of the Atlantic Multi-decadal Oscillation (AMO) and various ‘adjustments’ to the temperature record [Andrews, 2017a; 2017b; 2017c; D’Aleo, 2010; NOAA, AMO, 2019]. The HadCRUT4 climate series was used by Otto et al [2013] to create a pseudoscientific equilibrium climate sensitivity (ECS) and transient climate response (TCR) using the correlation between HadCRUT4 and a set of contrived ‘radiative forcings’. In reality, the downward LWIR component of the forcings from the lower troposphere to the surface cannot couple below the ocean surface. They are absorbed within the first 100 micron layer and fully mixed with the much larger and more variable wind driven evaporation. The two cannot be separated and analyzed independently of each other. Figure 3a (attached) shows the HadCRUT4 data used by Otto et al and Figure 3b shows the radiative forcings. Figure 3c shows the HadCRUT4 data set from Figure 3a overlapped with the AMO. From 1850 to 1970, there is a good match between the two, including both the nominal 60 year oscillation and the short term ‘fingerprint’ variations. After 1970 there is an offset of approximately 0.3 C. This requires further investigation, but is probably related to ‘adjustments’ during the climate averaging process. The correlation coefficient between the two data sets is 0.8. The linear slope is the temperature recovery from the Little Ice Age. Figure 3d shows the tree ring reconstruction of the AMO from 1567 by Gray et al [Gray et al, 2004; Gray.NOAA, 2021]. The instrument record from 1850 is also shown. The variations in the AMO have no relationship to changes in the atmospheric CO2 concentration.
The increase in the surface temperature of the N. Atlantic Ocean is transported over land by the prevailing weather systems and coupled to the weather station record through the diurnal convection transition temperature. The land surface temperature is reset each day by the local temperature at which the land and air temperatures equalize. Changes in this transition temperature are larger than any possible changes that can be produced by the observed increase in atmospheric CO2 concentration. Temperature changes produced by downslope winds and ‘blocking’ high pressure systems can easily reach 10 C over the course of a few days.
The forcings, feedbacks and climate sensitivities found in the climate models can be traced back to the mathematical artifacts created by the original M&W model. There is no equilibrium average climate that can be perturbed by an increase in atmospheric CO2 concentration.
References
Andrews, R., 2017a, Energy Matters Sept 14, 2017, ‘Adjusting Measurements to Match the Models – Part 3: Lower Troposphere Satellite Temperatures’. http://euanmearns.com/adjusting-measurements-to-match-the-models-part-3-lower-troposphere-satellite-temperatures/#more-19464
Andrews, R., 2017b, Energy Matters Aug 2, 2017, ‘Making the Measurements Match the Models – Part 2: Sea Surface Temperatures’. http://euanmearns.com/making-the-measurements-match-the-models-part-2-sea-surface-temperatures/
Andrews, R., 2017c, Energy Matters July 27, 2017, ‘Adjusting Measurements to Match the Models – Part 1: Surface Air Temperatures’. http://euanmearns.com/adjusting-measurements-to-match-the-models-part-1-surface-air-temperatures/
CERES 2011, CERES OLR Image, March 8 2011, Aqua Mission (EOS/PM-1), https://earth.esa.int/web/eoportal/satellite-missions/a/aqua
Clark, R., 2013, Energy and Environment 24(3, 4) 319-340 (2013), ‘A dynamic coupled thermal reservoir approach to atmospheric energy transfer Part I: Concepts’.
https://doi.org/10.1260/0958-305X.24.3-4.319
Energy and Environment 24(3, 4) 341-359 (2013), ‘A dynamic coupled thermal reservoir approach to atmospheric energy transfer Part II: Applications’.
https://doi.org/10.1260/0958-305X.24.3-4.341
D’Aleo, J. ‘Progressive Enhancement of Global Temperature Trends’, Science and Public Policy Institute, July 2010. http://scienceandpublicpolicy.org/science-papers/originals/progressive-enhancement
Feldman D.R., Liou K.N., Shia R.L. and Yung Y.L., J. Geophys. Res. 113 D1118 pp1-14 (2008), ‘On the information content of the thermal IR cooling rate profile from satellite instrument measurements’. https://doi.org/10.1029/2007JD009041
Fourier, B. J. B., Annales de Chimie et de Physique, 27, pp. 136–167 (1824), ‘Remarques générales sur les températures du globe terrestre et des espaces planétaires’. https://gallica.bnf.fr/ark:/12148/bpt6k65708960/f142.image# English translation:
http://fourier1824.geologist-1011.mobi/
Gray, S. T.; L. J. Graumlich, J. L. Betancourt and G. T. Pederson, Geophys. Res. Letts, 31 L12205, pp1-4 (2004) doi:10.1029/2004GL019932, ‘A tree-ring based reconstruction of the Atlantic Multi-decadal Oscillation since 1567 A.D.’. http://www.riversimulator.org/Resources/ClimateDocs/GrayAMO2004.pdf
Gray.NOAA, 2021, Gray, S.T., et al. 2004, Atlantic Multi-decadal Oscillation (AMO) Index Reconstruction, IGBP PAGES/World Data, Center for Paleoclimatology, Data Contribution Series #2004-062, NOAA/NGDC Paleoclimatology Program, Boulder CO, USA.
ftp://ftp.ncdc.noaa.gov/pub/data/paleo/treering/reconstructions/amo-gray2004.txt
HadCRUT4, 2019, https://www.metoffice.gov.uk/hadobs/hadcrut4/data/current/time_series/HadCRUT.4.6.0.0.annual_ns_avg.txt
Harde, H., Int. J. Atmos. Sci., 9251034 (2017), ‘Radiation Transfer Calculations and Assessment of Global Warming by CO2’. https://doi.org/10.1155/2017/9251034
Hansen, J. et al., (45 authors), J. Geophys Research 110 D18104 pp1-45 (2005), ‘Efficacy of climate forcings’. https://pubs.giss.nasa.gov/docs/2005/2005_Hansen_ha01110v.pdf
Hansen, J.; D. Johnson, A. Lacis, S. Lebedeff, P. Lee, D. Rind and G. Russell, Science 213 957-966 (1981), ‘Climate impact of increasing carbon dioxide’.
https://pubs.giss.nasa.gov/docs/1981/1981_Hansen_ha04600x.pdf
Iacono, M. J.; J. S. Delamere, E. J. Mlawer, M. W. Shephard, S. A. Clough, and W. D. Collins, J. Geophys. Res., 113, D13103, pp 1-8 (2008), ‘Radiative forcing by long-lived greenhouse gases: Calculations with the AER radiative transfer models’. https://doi.org/10.1029/2008JD009944
IPCC, 2013: Myhre, G., D. Shindell, F.-M. Bréon, W. Collins, J. Fuglestvedt, J. Huang, D. Koch, J.-F. Lamarque, D. Lee, B. Mendoza, T. Nakajima, A. Robock, G. Stephens, T. Takemura and H. Zhang, ‘Anthropogenic and Natural Radiative Forcing’. In: Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change [Stocker, T.F., D. Qin, G-K. Plattner, M. Tignor, S.K. Allen, J. Boschung, A. Nauels, Y. Xia, V. Bex and P.M. Midgley (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, Chapter 8, ‘Radiative Forcing’, 1535 pp, doi:10.1017/CBO9781107415324. http://www.climatechange2013.org/report/full-report/
Knutti, R. and G. C. Hegerl, Nature Geoscience 1 735-743 (2008), ‘The equilibrium sensitivity of the Earth’s temperature to radiation changes’. https://www.nature.com/articles/ngeo337
Manabe, S. and R. T. Wetherald, J. Atmos. Sci. 32(1) 3-15 (1975), ‘The effects of doubling the CO2 concentration in the climate of a general circulation model’. https://journals.ametsoc.org/view/journals/atsc/32/1/1520-0469_1975_032_0003_teodtc_2_0_co_2.xml?tab_body=pdf
Manabe, S. and R. T. Wetherald, J. Atmos. Sci., 24 241-249 (1967), ‘Thermal equilibrium of the atmosphere with a given distribution of relative humidity’. http://www.gfdl.noaa.gov/bibliography/related_files/sm6701.pdf
NOAA, AMO, 2019 https://www.esrl.noaa.gov/psd/data/correlation/amon.us.long.mean.data
Otto, A., F. E. L. Otto, O. Boucher, J. Church, G. Hegerl, P. M. Forster, N. P. Gillett, J. Gregory, G. C. Johnson, R Knutti, N. Lewis, U. Lohmann, J. Marotzke, G. Myhre, D. Shindell, B. Stevens and M. R. Allen, Nature Geoscience, 6 (6). 415 – 416 (2013). ISSN 1752-0894, ‘Energy budget constraints on climate response’. http://eprints.whiterose.ac.uk/76064/7/ngeo1836(1)_with_coversheet.pdf
Otto, A., F. E. L. Otto, O. Boucher, J. Church, G. Hegerl, P. M. Forster, N. P. Gillett, J. Gregory, G. C. Johnson, R Knutti, N. Lewis, U. Lohmann, J. Marotzke, G. Myhre, D. Shindell, B. Stevens and M. R. Allen, Nature Geoscience, 6 (6). 415 – 416 (2013). ISSN 1752-0894, ‘Energy budget constraints on climate response’, Supplementary Material. content.springer.com/esm/art%3A10.1038%2Fngeo1836/MediaObjects/41561_2013_BFngeo1836_MOESM299_ESM.pdf
Pouillet, M., in: Scientific Memoirs selected from the Transactions of Foreign Academies of Science and Learned Societies, edited by Richard Taylor, 4 (1837), pp. 44-90. ‘Memoir on the solar heat, on the radiating and absorbing powers of the atmospheric air and on the temperature of space’
http://nsdl.library.cornell.edu/websites/wiki/index.php/PALE_ClassicArticles/archives/classic_articles/issue1_global_warming/n2-Poulliet_1837corrected.pdf
Ramanathan, V. and J. A. Coakley, Rev. Geophysics and Space Physics 16(4)465-489 (1978), ‘Climate modeling through radiative convective models’. https://doi.org/10.1029/RG016i004p00465;
Ramaswamy, V.; W. Collins, J. Haywood, J. Lean, N. Mahowald, G. Myhre, V. Naik, K. P. Shine, B. Soden, G. Stenchikov and T. Storelvmo, Meteorological Monographs Volume 59 Chapter 14 (2019), ‘Radiative Forcing of Climate: The Historical Evolution of the Radiative Forcing Concept, the Forcing Agents and their Quantification, and Applications’. https://doi.org/10.1175/AMSMONOGRAPHS-D-19-0001.1
Taylor, F. W., Elementary Climate Physics, Oxford University Press, Oxford, 2006, Chapter 7
Roy,
Very nice treatise!
The only thing I would add is that LWIR from the atmosphere toward the Earth is merely a reflection of LWIR radiating from the Earth. The atmosphere by itself is not a heat generator. It can only reflect what it receives. When the Earth radiates LWIR away it cools. When the atmosphere reflects part of that back the Earth re-warms part way back to its starting point. And around and around it goes. The net effect is that the Earth cools because it never gets back as much as it radiates. The only question is then – how much does the Earth cool? If it doesn’t cool as much at night because the CO2 LWIR reflection slows down the loss to space then we would see Tmin go up, not Tmax. And that seems to be what is driving the so-called “global average temperature” to go up.
But Tmin going up is mostly beneficial so the CAGW alarmists attempt to sow alarm by claiming that it is Tmax going up instead. We are all going to fry under a blanket of CO2. In other words, propagation of a fraud!
Tim
Yes, Tmin is increasing the most. Yet, the climastrology alarmists, and those riding on their coattails with ecological prophecies, commonly use the global average from an RCP 8.5 scenario, rather than deal with Tmin and Tmax independently. The average looks scarier than Tmin, while it is Tmax that is most likely to be a biological stressor, and thus should be the focus.
“It does not cool as much at night”
<- Or maybe the apparent increase in minimum night temperature simply reflects higher urban heat effect from so many land surface weather stations located too close to human habitation?
Mark,
That could very well be the case. But in that case we would see Tmax going up as well and, at least in the US, Tmax isn’t going up, Tmin is.
Thanks Roy for another excellent contribution. Re-blogged your post only on my website.
All climate models are predicated on the basic assumption that CO2, as a greenhouse gas, traps heat. The quantum is the issue.
The scientifically classic way to test an hypothesis or assumption is to run a controlled experiment.
The early assertion by the IPCC that you can’t use this method in climate science seems to have been accepted by the whole scientific community, which has resulted in a reliance on modelling to resolve issues such as climate sensitivity, with the obvious variations in the output of these tools owing to different assumptions as to the “unknowns”.
Back to basics. IMHO you can run a controlled experiment to determine the influence of CO2 on climate, or, more specifically such trials have been run, albeit unknowingly!
Consider the temperature records from the Giles weather station in remote Western Australia, set up as a manned station in 1956 (and still manned today), where, I would contend, the only thing to have changed in a climate-influencing sense is the rise in CO2.
Temperature records, available on the BOM website, show NO evidence of heat trapping!
No rise in minimum temperatures, no reduction in diurnal spread.
With the only identifiable variable input being the universal rise in CO2 over the 65 years of the record, is this not a controlled experiment? And aren’t the observations a valid contribution to the science?
The amplification of CO2 induced warming by water vapour partly obscures the fact that any warming could, in theory, produce more water vapour and subsequent warming. But there is no evidence that such amplification actually happens and, as is common in climate science, there are other possible mechanisms acting in the opposite direction. For example, more water vapour could lead to more convection, carrying water vapour to higher altitudes where condensation releases latent heat to space. More water vapour could produce more clouds with all the cooling possibilities that flow from that.
Also common in climate science is the tendency for warming scenarios to be favoured by the climate scientists. This bias has existed for decades and is evident in the climate models that exhibit the same bias. For how long will this charade continue?
Climate models are now predicting high end levels of warming that are not credible. It is increasingly difficult to explain actual temperatures without resorting to unrealistic cooling factors. If the real climate continues to cool and models continue to become ever hotter, something will have to give.
It would seem that the obvious way to improve the models is to remove the positive feedbacks. These seldom exist in nature for good reason and the incredible stability of our climate is all the proof we need. Nature has a whole range of tools for tweaking our climate. Carbon dioxide is just one of these. Like most things in nature its effect is limited. Its absorption bands are saturated and further CO2 emissions will make little difference. The science is clear. It is the determination of climate scientists to avoid the obvious that is the problem.
Even if there is some positive feedback it probably only exists within a narrow confine. E.g. a non-linear feedback mechanism that decreases the amount of feedback as the driving input (i.e. the system output) goes up.
>>
The amplification of CO2 induced warming by water vapour . . . .
<<
What’s interesting is if this is true, then pan evaporation should be increasing. In fact, pan evaporation is decreasing. It’s called the “pan evaporation paradox.” Of course, climatologists are writing papers to show that there is no paradox–nothing to see here, move along. Still . . . .
Jim
Lindsay Moore:
There have been other “unknown trials” on the climatic effect of changing CO2 levels which show just the opposite, that it appears to have no effect.
https://www.osf.io/f8d62/
Lindsay Moore:
You suggest that a “controlled” experiment to observe the effect of CO2 in the atmosphere may have been run unknowingly, and that it supports the CO2 hypothesis.
There have been other “unknown experiments” run that have also addressed the effect of CO2 in the atmosphere, but instead they have shown no climatic effect.
http://www.scholink.org/ojs/php/se/.
Apparently this is a bit too simple for most people to digest
Put simply, a “controlled experiment” carried out over 65 years at a site where the only variable was the CO2 level (280 to 400 ppm) showed no evidence of any extra heat being “trapped”.
A simple and unambiguous explanation why climate models fail!!
ie the most basic assumption that the EGE traps measurable heat is not supported.
How simple is that?
Lindsay Moore:
CO2 was NOT the only variable. Every VEI4 or larger volcanic eruption affects temperatures, because their SO2 aerosol pollution reflects sunlight, cooling the earth to various degrees.
I fully agree. WV is also not increasing over land.
“One would expect the same feedback on initial warming due to a random fluctuation of the amount of water vapour itself”, or an El Niño, or recovery from a volcano, etc. Any warming could have triggered a runaway positive feedback.
The fact is positive feedbacks are very rare because they are self destructive. If anything, the action of water vapour in the tropics acts as an upper limit to global temperature.
CO2 is not only safe, but beneficial, we should try to get it to 1000 ppm.
Excellent article, a very enjoyable read.
The more data we get, the more evident it becomes that sensitivity is 2C at the very most. My own favourite back-of-the-envelope is to take the UAH trend (0.0137444C per year, currently) and match the start and end trendline delta against the Mauna Loa delta (0.581C per 80 ppm). Ratioing up to 280 ppm, this puts sensitivity at 2.03C.
But we must remember – according to the theory – the lower troposphere warms at a faster rate than the surface. A round number stab at the equivalent surface warming then would be 0.5C per 80ppm. Therefore we get 1.75C warming for a doubling of CO2. And that’s based on a period which must have included a rebound effect from the fall in global temperatures after WW2 to the end of the 70’s, so this is likely an overestimate.
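Reproducing that arithmetic (a Python sketch; it simply re-runs the linear ratioing described in the two comments above, it is not a logarithmic sensitivity calculation):

```python
uah_trend = 0.0137444              # C/year, quoted UAH lower-troposphere trend
uah_delta = 0.581                  # C change over the trend line
co2_delta = 80.0                   # ppm rise over the same period (Mauna Loa)

trend_years = uah_delta / uah_trend                 # ~42 years of record implied
lt_per_280ppm = uah_delta * 280.0 / co2_delta       # lower-troposphere value, ~2.03 C
surface_per_280ppm = 0.5 * 280.0 / co2_delta        # rough surface equivalent, ~1.75 C
print(f"{trend_years:.0f} years, {lt_per_280ppm:.2f} C, surface ~{surface_per_280ppm:.2f} C")
```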
Realistically, we are never going to get to 560ppm anyway – regardless of Green Deals, subsidies and the like – due to the inherent incentives in the capitalist, free market system (or what’s left of it) to maximise energy use productivity.
Some climate models assume that relative humidity remains constant if the atmosphere is warmed by CO2.
But since warm air can hold more water vapor than cold air, a constant relative humidity results in an increase in absolute humidity. The extra water vapor must come from somewhere, most likely from increased evaporation from a body of water (ocean, lake, pond, etc.).
Due to the high heat of vaporization of water, if we assume a volume of air in contact with a body of water and the air is warmed by 1 C while the relative humidity remains constant, the heat of vaporization results in cooling the air by about 0.5 to 0.7 C (depending on the initial temperature and humidity).
This negative feedback is often overlooked by climate models, which take into account “amplification” due to IR absorption by additional water vapor, but neglect the heat required to force more water vapor into the atmosphere. Failure to take into account this negative feedback would lead to over-estimating the climate “sensitivity”.
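A rough estimate of the size of that latent-heat term (a Python sketch; the Magnus saturation-vapour-pressure formula is standard, but the starting temperature and relative humidity are my assumptions, and the parcel is treated as paying for the extra evaporation entirely out of its own sensible heat):

```python
import math

def e_sat(t_c: float) -> float:
    """Saturation vapour pressure in hPa (Magnus formula)."""
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

def mixing_ratio(t_c: float, rh: float, p_hpa: float = 1013.25) -> float:
    """Water vapour mixing ratio in kg of water per kg of dry air."""
    e = rh * e_sat(t_c)
    return 0.622 * e / (p_hpa - e)

T0, RH = 10.0, 0.5      # assumed starting temperature (C) and relative humidity
L_V = 2.45e6            # latent heat of vaporisation, J/kg
C_P = 1005.0            # specific heat of air, J/kg/K

extra_vapour = mixing_ratio(T0 + 1.0, RH) - mixing_ratio(T0, RH)  # kg/kg for 1 C warming
cooling = extra_vapour * L_V / C_P
print(f"extra vapour {extra_vapour * 1000:.2f} g/kg, latent-heat cost ~{cooling:.2f} C")
# With these inputs the cost is ~0.6 C, in line with the 0.5 to 0.7 C quoted above;
# warmer or moister starting conditions push the number higher.
```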
Steve
You said, “… the heat of vaporization results in cooling the air …”
I thought that it was the surface of the body of water that was cooled as the molecules with the most kinetic energy were removed, taking the kinetic energy with them.
Too many words for a simple explanation (thanks to Richard Feynman). If the observations don’t support the theory then the theory is wrong. No amount of mucking with models based on a wrong theory or their parameters will get past that basic fact.
>>
If the observations don’t support the theory then the theory is wrong.
<<
Actually, Feynman was talking about laws, but his statement also applies to theories
Jim
I believe his actual word was “guess”.
And “guess” would annoy those naive scientific neophytes who believe laws are proven theories.
Jim
True enough, but in the lecture I remember Feynman says, “Science starts with a guess.” The class laughed at that but Feynman went on to say that no matter how it’s derived or what you call it, it’s really just a “guess” about how the world works which must then be tested.
It do believe it’s time to establish the definitions:
The words “fact”, “theory”, “hypothesis” and “law” have very specific definitions in science:
———-
Hypothesis: A tentative explanation of an empirical observation that can be tested. It is merely an educated guess.
—–
Working hypothesis: A conjecture which has little empirical validation. A working hypothesis is used to guide investigation of the phenomenon being investigated.
Scientific hypothesis: In order for a working hypothesis to be a scientific hypothesis, it must be testable, falsifiable and it must be able to definitively assign cause to observed effects.
Null hypothesis: Also known as nullus resultarum. In the case of climate science, the null hypothesis should be that CO2 does not cause global warming.
A Type I error occurs when the null hypothesis is rejected erroneously when it is in fact true.
A Type II error occurs if the null hypothesis is not rejected when it is in fact false.
—–
Fact: An empirical observation that has been confirmed so many times that scientists can accept it as true without having to retest its validity each time they experience whatever phenomenon they’ve empirically observed.
Law: A mathematically rigorous description of how some aspect of the natural world behaves.
Theory: An explanation of an empirical observation which is rigorously substantiated by tested hypotheses, facts and laws.
Laws describe how things behave in the natural world, whereas theories explain why they behave the way they do.
For instance, we have the law of gravity which describes how an object will behave in a gravitational field, but we’re still looking for a gravitational theory which fits into quantum mechanics and the Standard Model and explains why objects behave the way they do in a gravitational field.
Mheh… ‘It’ = ‘I’. Time to get a new keyboard for my laptop… this one’s keys are getting overly-sensitive to even the slightest touch… especially the ‘down arrow’ key, which I use a lot.
>>
Law: A mathematically rigorous description of how some aspect of the natural world behaves.
<<
This is not exactly true. A law (or principle–they are often interchangeable terms in science) can also be a statement. It need not be a mathematical description. For example, the following laws (principles) are from geology:
The law of faunal succession
The law of original horizontality
The law of superposition
The law of cross-cutting relationships
Also, a law need not be anything more than a simple guess. Whether it really describes the “natural world” depends on experiment.
>>
. . . we have the law of gravity which describes how an object will behave in a gravitational field . . . .
<<
Einstein was able to revise Newton's law so it is invariant WRT inertial and accelerating frames of reference. It also corrects Newton's law to account for the bending of light and the precession of Mercury's orbit (which also occurs for the orbits of Venus and the Earth, but to a much lesser degree). The speculation that there is a quantum gravity theory is just that–speculation.
Also notice that Special Relativity corrects Maxwell’s equations. SR makes Maxwell’s equations invariant WRT inertial frames of reference. However, QED (Quantum Electrodynamics) combines quantum theory with SR and replaces Maxwell’s equations. And Maxwell’s equations were based on older laws by Gauss, Ampere, Coulomb, and Faraday.
The problem with QED is the infinities that have to be "renormalized" out of the equations–a problem that annoyed, and still annoys, many, including Feynman.
Jim
I was taught in my engineering courses that a law is always able to be described by mathematics – e.g. Gauss' Law, the laws of thermodynamics, Coulomb's Law, the three laws of motion, and Hooke's Law. There are some things that are called Principles but they are still described mathematically – e.g. Bernoulli's Principle, Archimedes' Principle.
LOL’s definition: “A mathematically rigorous description . . . .” is not exactly true. There are many laws that aren’t defined by “mathematically rigorous descriptions.” However, your statement: “. . . a law is always able to be described by mathematics . . . .” is not as strong. I’m not exactly sure what you mean by it.
If you mean that every law can be described by a mathematical expression or series of mathematical expressions then I disagree. I would like to see the mathematical expressions representing those four geological laws I referred to previously.
Or if you mean that every law can be described by a numeric value or a mathematical range of numeric values, then that may be true.
Then again, you may be referring to something I haven’t thought of.
Jim
Let's just take one, the law of faunal succession. That's an observation that may or may not be true. It's no different than the observation that the sun rises in the east. Neither of them is a "Law". The difference is that when and where the sunrise occurs is a matter of mathematics, namely orbital mechanics. So the "Law" of Sunrise can be mapped out mathematically. The "Law" of faunal succession is nothing more than a historical observation. Just like the observation that "no government lasts forever" or the "Law" of Generations – one generation follows another. Those observations can't be proven mathematically and someday they may not even be true.
>>
That’s an observation that may or may not be true.
. . .
Those observations can’t be proven mathematically and someday they may not even be true.
<<
I believe you’ll find that these statements apply to every theory and law in science–with the possible exception of “climate science.” (It’s why I consider “climate science” an oxymoron.)
As Feynman said in the video we were discussing earlier, some ideas can last for centuries until they are shown to be incorrect–like Newton’s Law of Gravity.
And the fact that you can “prove” something mathematically doesn’t mean you’ve proven it scientifically–that’s something that can’t be done.
Jim
Gauss’ Law is not likely to ever become untrue – unless the fundamental makeup of the universe changes. In which case no one will care because the human race likely won’t survive. Same for the Laws of thermodynamics and the laws of motion.
And since when was Newton’s Law of Gravity proven wrong? It has been superseded by Einstein’s General Relativity but Newton’s Law still works unless you are concerned with extreme conditions (e.g. black holes or neutron stars).
If you can prove it mathematically then you *have* proven it scientifically. The tunneling of electrons across an energy barrier (e.g. in a transistor) was shown mathematically using quantum mechanics – thus proving what empirical observations had already shown.
Results derived from observations are subjective, and therefore are subject to being wrong. Being able to describe reality in terms of math, e.g. Gauss’ Law, is not subjective.
>>
Gauss’ Law is not likely to ever become untrue – unless the fundamental makeup of the universe changes. In which case no one will care because the human race likely won’t survive.
<<
Let’s see: Maxwell’s Equations include Gauss’s law, Gauss’s law for magnetism, Faraday’s law with Maxwell’s modification, and Ampere’s law with Maxwell’s modification. Although Maxwell’s Equations correctly predicted electromagnetic waves before they were discovered (one of those subjective observations, no doubt), they have been a thorn in the side of physics ever since.
First there was the problem of the speed-of-light that the equations contained. Attempts to remove it led to modifications of the arbitrary constant in Coulomb's law. This led to two metric systems: CGS (centimeters-grams-seconds), where the constant's magnitude is 1 (with appropriate units), and MKS (meters-kilograms-seconds), where the constant has the value 1/(4*pi*e0). This didn't work, because the speed-of-light, though hidden, was still there: c = 1/sqrt(e0*u0).
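A quick numerical sanity check of that last relation, using the standard SI values for the vacuum permittivity and permeability (a throwaway illustration, nothing more):

import math

EPS_0 = 8.8541878128e-12   # vacuum permittivity, F/m
MU_0  = 4.0e-7 * math.pi   # vacuum permeability, H/m (classical defined value)

c = 1.0 / math.sqrt(EPS_0 * MU_0)
print(f"c = 1/sqrt(e0*u0) = {c:.6e} m/s")   # prints ~2.997925e+08 m/s

The speed of light falls straight out of two electrostatic and magnetostatic constants, which is exactly why it could not be hidden.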
Next came the ether theory. This did work, and it had the added benefit of giving something for EM waves to wave in. Unfortunately, Michelson-Morley 1887 ruined the ether theory (with another lousy subjective observation, apparently). Attempts to repair the ether theory ran into problems with things like stellar aberration. Apparently observations, like Airy's water-filled telescopes (darn those subjective observations), didn't support these modifications.
Lorentz’s transformations solved the invariant problem of Maxwell’s Equations, but it took Einstein’s Special Relativity to explain Lorentz’s transformations and explain things like stellar aberration.
So far, so good, but Maxwell's Equations don't explain photon–photon scattering, "nonclassical light," quantum entanglement, the photoelectric effect, Planck's law, the Duane–Hunt law, single-photon light detectors, the Casimir effect, and so on.
It appears that Maxwell’s Equations are a classical approximation of QED (Quantum Electrodynamics). So what exactly isn’t correct in Maxwell’s Equations? And gee, the human race is still here.
>>
Same for the Laws of thermodynamics and the laws of motion.
<<
I once heard a physicist say that he didn't think the laws of thermodynamics would ever be found to be incorrect – but he was hedging his bet.
Newton's second law is F = dp/dt, that is, force is equal to the rate-of-change of momentum WRT time. Linear momentum is defined as p = m*v. Using the classical assumption that mass is constant, we get the familiar expression for force: F = m*a. However, if you stick the Lorentz expression for mass into the equation, you get an extremely messy expression for relativistic force. Now which expression do you think most physicists would use? F = m*a? Or the messy but more correct expression for relativistic force? It should be obvious they would use the less precise F = m*a, knowing that in some cases it might give a wrong value.
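For what it's worth, the messy expression follows directly from p = gamma*m*v with gamma = 1/sqrt(1 - v^2/c^2): differentiating gives F = dp/dt = gamma*m*a + gamma^3*m*((v·a)/c^2)*v, a standard result that collapses back to F = m*a when v is much smaller than c.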
>>
And since when was Newton’s Law of Gravity proven wrong? It has been superseded by Einstein’s General Relativity but Newton’s Law still works unless you are concerned with extreme conditions (e.g. black holes or neutron stars).
<<
Or Mercury's orbit and the bending of light. Newton's gravity force law requires the speed-of-gravity to be infinite. Is the speed-of-gravity infinite? Attempts to use Newton's law with a finite speed always fail. If gravity travels at an infinite speed, then there can't be gravity waves with a finite speed.
>>
If you can prove it mathematically then you *have* proven it scientifically.
<<
Nonsense. There is not a single mathematical system. Mathematics is an axiomatic system. That means you prove theorems based on a certain number of axioms. Change those axioms, and you change the system. In geometry, there are postulates. Change those postulates, and you change the geometry. In Euclidean geometry, parallel lines never meet. However, on a sphere, the analog of a straight line is a great circle. Great circles (not co-located) will intersect exactly twice. In the algebra you learned in grade school and high school, 1 + 1 = 2. In Boolean algebra, 1 + 1 = 1. In fact, in Boolean algebra, 1 + 1 + 1 + . . . + 1 = 1. There are different algebras just as there are different geometries. So which geometry does the Universe belong to?
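A trivial illustration of that last point, in Python, where logical OR plays the role of the Boolean "+" (a toy example, nothing more):

ordinary = 1 + 1 + 1            # ordinary arithmetic: 3
boolean  = True | True | True   # Boolean algebra, where "+" means OR: still 1
print(ordinary, int(boolean))   # prints: 3 1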
>>
Results derived from observations are subjective, and therefore are subject to being wrong. Being able to describe reality in terms of math, e.g. Gauss’ Law, is not subjective.
<<
I guess Galileo’s move from Aristotle’s thinking about how the world works to doing experiments to show how the world actually works is lost on you. Observations ARE science. Mathematics is an approximation. I remember my professors reminding me that the Ideal Gas Law only applies to ideal gases. Many mathematical formulas in science are idealized. One must be careful in claiming mathematical proof where it only applies to our assumptions.
Jim
Well stated.
The math has to be checked against reality. For example, if I have a plot of land which is 10 meters by 10 meters, I can mathematically calculate the area as 10+10 and claim it is 20 because I "did the math" – the arithmetic is flawless, it is just the wrong math for the problem.
The only immutable laws are those that are essentially definitions. But even those are subject to change should other definitions be found useful.
An example is the laws of conservation of energy and momentum, which can be shown mathematically to follow from the invariance of physical law over time and space. However, the real meaning of that mathematics is that any variation of the laws of the universe over time can be re-expressed as a form of energy, and any variability over space as a form of momentum. So the conservation laws are preserved by "making up" new kinds of energy or momentum.
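For reference, the standard result behind that claim is Noether's theorem, sketched here in loose notation: if the Lagrangian L(q, q', t) of a system has no explicit time dependence (∂L/∂t = 0), then the energy E = sum_i q'_i*(∂L/∂q'_i) - L is conserved; and if L does not depend on a particular coordinate q_j (translation invariance), then the corresponding momentum p_j = ∂L/∂q'_j is conserved.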
Even then it’s not clear that the equations hold if the variations are non-continuous. Quantum mechanics exhibits some discontinuity and even chaotic behaviors.
You jumped from Gauss’ Law to Maxwell’s equations – i.e. you are guilty of equivocation, an argumentative fallacy.
Gauss’ Law has never been modified or disproved. It is very simple.
And it is not the same for the laws of thermodynamics. You saying so doesn’t make it so.
Newton's Law of Gravity is still true except on the edges. Why does a planet bend light? You use that as an excuse for Newton's Law of Gravity being wrong but give no explanation showing how gravity doesn't affect light. As for instantaneous speed of light – why does that impact Newton's Law? Newton's Law doesn't say *when* the forces impinge on each mass, there is no "time" in the equation. That is a subjective opinion from you, it isn't implicit in the math.
Not all math is axiomatic. Geometry is. Algebra is not. A + B = C doesn’t depend on any assumptions or axioms.
Observations are not science, MEASUREMENTS are.
Strange. I learned algebra as axiomatic. You know, those funny little things like the associative, distributive and commutative properties of addition and multiplication.
>>
Not all math is axiomatic. Geometry is. Algebra is not. A + B = C doesn’t depend on any assumptions or axioms.
<<
Now you are making silly statements. Basic algebra depends on at least five axioms–look it up.
>>
You jumped from Gauss’ Law to Maxwell’s equations – i.e. you are guilty of equivocation, an argumentative fallacy.
Gauss’ Law has never been modified or disproved. It is very simple.
<<
That might be true if Maxwell's Equations didn't include Gauss's law as is. So now you are saying we can't combine laws? That's news to me.
>>
Observations are not science, MEASUREMENTS are.
<<
I consider measurements to be observations. I don’t understand your distinction with these terms.
>>
And it is not the same for the laws of thermodynamics. You saying so doesn’t make it so.
<<
?
>>
Newton’s Law of Gravity is still true except on the edges.
<<
Newton’s law has edges?
>>
Why does a planet bend light?
<<
Why does it?
>>
You use that as an excuse for Newton’s Law of Gravity being wrong but give no explanation showing how gravity doesn’t affect light.
<<
?
>>
As for instantaneous speed of light – why does that impact Newton’s Law?
<<
I guess you’re not reading what I wrote. I said speed-of-gravity was infinite with Newton’s law. I said nothing about the speed-of-light.
>>
Newton's Law doesn't say *when* the forces impinge on each mass, there is no "time" in the equation. That is a subjective opinion from you, it isn't implicit in the math.
<<
Actually, it does. Pick up any book on orbital mechanics: Newton's law is all about time and when. For the two-body problem the shape of the orbit can be solved exactly, but one thing is left open – where the object is in its orbit at a given time. The answer comes from Kepler's Equation, but that equation is transcendental: it has no closed-form solution and has to be solved numerically.
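A minimal sketch of that numerical solution, using Newton's method on Kepler's equation M = E - e*sin(E) (an illustration of standard orbital-mechanics practice; the function name and the sample values are made up for the example):

import math

def solve_kepler(mean_anomaly, eccentricity, tol=1e-12, max_iter=50):
    # Solve Kepler's equation M = E - e*sin(E) for the eccentric anomaly E
    # by Newton's method (elliptical orbits, 0 <= e < 1)
    E = mean_anomaly if eccentricity < 0.8 else math.pi   # common starting guess
    for _ in range(max_iter):
        f = E - eccentricity * math.sin(E) - mean_anomaly
        f_prime = 1.0 - eccentricity * math.cos(E)
        step = f / f_prime
        E -= step
        if abs(step) < tol:
            break
    return E

# Example: an orbit with e = 0.2, one quarter of a period after perihelion
E = solve_kepler(math.pi / 2, 0.2)
print(f"eccentric anomaly E = {E:.12f} rad")

No closed form, but a handful of iterations gives the position in the orbit to machine precision, which is how it is done in practice.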
Jim
“I consider therefore models in which the feedback on water vapour is negligible (and negative if you include clouds) as much more realistic.” As Willis points out in an adjacent article, and has for a while.
I'm a truck driver and have no PhD. But in my opinion the first and foremost problem with the most publicized climate models is that they are formulated, maintained, and reported on by dishonest hacks with a bias towards showing warming, who treat their own work and its results as the climate gospel.