We’ve seen the “If 99 doctors said…” argument, or facsimiles, used often by global warming enthusiasts in recent months. George Clooney used it when interviewed at the Britannia Awards. (See the Open Letter to Lewis Black and George Clooney.) James Cameron used it in the trailer for the upcoming Showtime series “Years of Living Dangerously”. (Refer to the open letter to Mr. Cameron and the other executive producers of that show.) And on The Daily Show, Jon Stewart included a clip of Dan Weiss of the Center for American Progress using it. (See the Open Letter to Jon Stewart.)
I responded to those arguments and discussed many other topics in the posts linked above, with links to more-detailed explanations and examples…and, of course, with links to my ebooks.
The following is something I wrote for my upcoming book with the working title The Oceans Ate My Global Warming (another possible title: CO2 is Not a Control Knob). I thought you might find a use for it when you see the “If 99 doctors said…” argument again.
# # #
Imagine you’re running a persistent slight fever. You visit a new clinic. The nurse takes your vitals and enters them into a computer program. A short time after the computer model completes its simulations, the doctor arrives, advises you of the computer-diagnosed ailment, and prescribes controversial high-cost medications and treatment.
You’re not comfortable with the service, diagnosis, prescription or treatment, so you research the clinic’s computer model online. It is proclaimed to be wonderful by its programmers. But the more you research, the more you discover the model’s defects. It can’t simulate circulation, respiration, digestion, and other basic bodily functions. There are numerous research papers exposing the flaws in the model, but they are hard to find because of all of the other papers written by the model’s programmers extolling its virtues.
Of course, you would not accept the computer-based medical diagnosis from a model that cannot simulate basic bodily functions and processes. But that’s the position we’re faced with in climate science.
We need a second opinion on the slight warming the Earth has experienced. Unfortunately, it is not likely to come anytime soon, not until there are changes to the political agendas that drive climate science funding.
# # #
Enjoy your Super Bowl Sunday…for those celebrating. For everyone else, enjoy your day.
Olavi says:
February 3, 2014 at 5:17 am
” I wrote this 21 days after 5 veins bypass surgery. ”
___________________
Good to see you are still with us.
If 99 doctors told you the cause of the pains radiating down your left arm was miasmas in the air, and the treatment was to draw 6 oz of blood and then attach leeches to your left arm every day for a week, would you go along?
If it were 1750, you probably would, but that would not mean the 99 doctors were right.
We know that now because these days disease is determined by observation and empirical evidence, not what the group thinks.
Of course that is not entirely true. Doctor group-think insisted the primary cause of peptic ulcers was stress… with no empirical evidence… rejecting a bacterial cause right up until, just a few years ago, experiment produced the empirical evidence showing a bacterium was indeed the culprit. There are numerous other examples where the medical profession could serve as a point of reference for training ‘settled science’ consensus climate scientists.
So I personally would not trust the say-so of 99 doctors any further than I could spit them.
The fact that politicians and air-brain celebs unquestioningly would, just shows what a bunch of divvies they are.
Now, REFINING the theory, estimating the EXACT amount of warming we will see from doubling,
is possible in TWO WAYS:
A) run a controlled Earth experiment where you double CO2 and hold everything else constant
( haha)
B) run a model.
You might also try to narrow down the estimate of sensitivity. Those are observational studies.
Sensitivity is the problem, isn’t it? I agree with most of what you say (of course) — the issue is indeed physics. Physics predicts a straight-up warming of between 1 and 1.5 C for a doubling of CO_2. This prediction — physics based or not — isn’t the best one in the entire Universe because the greenhouse effect cannot be effectively approximated by a single slab theory — it involves a staggering synthesis of ideas some of which I find a bit dubious — level broadening due to increased (but infinitesimal) partial pressure especially in the wings of the absorption bands, change in the height of the thick layer where the atmosphere finally becomes transparent to CO_2-mediated radiative loss, variation in the DALR itself, lifting of the tropopause — none of which AFAIK can be or have been actually observed to occur as the concentration of CO_2 has increased by 1/3 of the distance to the goal line. But let’s go with it. Let’s say that if CO_2 goes to 600 ppm, global temperatures will rise by 1.25 C plus or minus maybe 0.5 C to allow for our ignorance even about the physics, because even the pure CO_2-only GHE predictions are largely speculative and contingent on our understanding of the LINEAR responses of a highly nonlinear system to small perturbations.
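That 1 to 1.5 C “CO_2-only” figure can be reproduced from the commonly cited logarithmic forcing approximation plus a no-feedback Planck response. A minimal sketch, assuming the standard 5.35 W/m² coefficient (Myhre et al.) and linearizing Stefan-Boltzmann about a 255 K effective emission temperature:

```python
import math

def co2_forcing(c_new, c_old):
    """Logarithmic CO2 radiative forcing in W/m^2 (standard 5.35 coefficient)."""
    return 5.35 * math.log(c_new / c_old)

def planck_response(delta_f, t_eff=255.0):
    """No-feedback warming: dT = dF / (4*sigma*T^3), linearizing
    Stefan-Boltzmann about the effective emission temperature t_eff (K)."""
    sigma = 5.670e-8  # Stefan-Boltzmann constant, W/m^2/K^4
    return delta_f / (4.0 * sigma * t_eff**3)

dF = co2_forcing(600.0, 300.0)  # doubling, 300 -> 600 ppm
dT = planck_response(dF)
print(f"forcing ~ {dF:.2f} W/m^2, no-feedback warming ~ {dT:.2f} K")
```

This gives roughly 3.7 W/m² and about 1 K of warming, consistent with the low end of the range quoted above.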
However, “narrowing down sensitivity” is not a distinct activity from “running a model”. That’s because there is no such thing as climate sensitivity. It is not an input into climate models. It is a claim made about feedback-based amplification of CO_2-only warming DERIVED FROM climate models, although even calling it that is bending things a bit. In Hansen’s original papers he didn’t have particularly detailed climate models — they were really pretty crude, understandably given that my cell phone probably has more processing power than he had available to run the models back then. Climate sensitivity was at that time basically a fudge factor based on an assumption of pure positive feedback in an ill-understood coupling between CO_2 and water vapor. It has, sadly, persisted in being used in the precise sense that Hansen used it — a simple linear multiplication of the CO_2-only warming expected by the occult cause of a highly nonlinear and presumptive coupling between water vapor and CO_2 concentration, with water vapor being the much more powerful greenhouse gas.
It is this latter power that even now lets Hansen continue to predict 5 C “climate sensitivity” — an augmentation of the CO_2-only warming by two parts water vapor warming to every part of CO_2-only warming — with something approximating a straight face, and since 5 C is comparable with the warming that pulled us out of the Wisconsin glaciation and sufficient to restore conditions not seen on Earth in 60 million or so years when the continents were in completely different positions, sure, he can predict that the oceans do whatever he likes in response. He can even make his infamous “boiling oceans” remarks and subtly imply that we could tip the Earth right over to Be Like Venus with runaway global warming. Personally, I think this is shameless, unscientific, and the moral equivalent of shouting fire in a theater that is not, in fact, on fire, because you think that it might one day catch on fire and that would be terrible, wouldn’t it?
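The “simple linear multiplication” of CO_2-only warming described above can be made concrete with the standard linear feedback algebra, in which total warming is the no-feedback warming divided by (1 - f). This is an illustration of that algebra, not Hansen’s actual computation, and the 1.2 K no-feedback value is an assumed round number:

```python
def amplified_warming(dt0, f):
    """Linear feedback amplification: total warming = dt0 / (1 - f), f < 1."""
    assert f < 1.0, "f >= 1 would mean runaway warming in this linear picture"
    return dt0 / (1.0 - f)

def implied_feedback(dt0, dt_total):
    """Feedback fraction f implied by a claimed total sensitivity."""
    return 1.0 - dt0 / dt_total

dt0 = 1.2                        # assumed CO2-only warming per doubling, K
f = implied_feedback(dt0, 5.0)   # what a 5 K sensitivity claim implies
print(f"implied feedback fraction ~ {f:.2f}")
```

A claimed 5 K sensitivity thus implies the feedbacks supply roughly three quarters of the total response, which is exactly the two-parts-water-vapor-to-one-part-CO_2 augmentation described above.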
The existing GCMs do not really predict climate sensitivity — they contain various parameters for quantities that we cannot currently measure (such as the role and effect of aerosols and decadal oscillations on the climate) that effectively fit it to a desired value associated with past observations of warming, and then turn around and produce a single, highly contingent number as if it is a “physics based” prediction. It is this number that is at issue here, since the GCMs that predict a climate sensitivity of 5 C by 2100 not unreasonably have predicted a rise in temperature of over 0.5 C in the first decade and a half of the 21st century — 1/7th of the distance to the goal line. How much global warming has actually been observed over that interval? Would that be zero? It would.
Reasonable people — and really, I am not religious here, I understand the physics pretty well, I spend half my time on this list schooling people who want to claim that there is no such thing as the GHE or that it violates the laws of thermodynamics or other such rot, and I freely admit that the hypothesis of catastrophic anthropogenic global warming is a legitimate scientific hypothesis that could be true or false — could look at this, consider the difficulty of solving coupled Navier-Stokes equations at mediocre granularity across century time scales with significant parts of the underlying actual physics (such as the precise numbers and mechanisms associated with feedback, heat transport and reflection by clouds, the correct numbers to use for aerosols, the correct coupling to the vast oceanic reservoir) still poorly constrained, and conclude that if a given GCM robustly predicts 0.6 C of warming across a 15 year interval in which no warming is observed (while CO_2 inexorably ratchets on up) that at least that GCM might be wrong. It might be broken. It might be a bad model.
Bad models are hardly a surprise in physics in much simpler problems than this, after all. If one considers the mere probability that a physical model this complex would turn out to not be robustly predictive I’d assert that it is more likely not to be robustly predictive than to be robustly predictive, even before seeing actual results. This is a hard problem and I’m not at all surprised that many GCMs get answers in bad disagreement with nearly everything they are nominally supposed to predict. The best that can be said of them is that yes, they produce future patterns that do look like climates of a possible future Earth. There is little reason, however, to trust them to produce patterns that are likely to be the climate of this particular Earth.
What truly reasonable people might do when confronted with this sort of disagreement between their model and reality — what appears to be a serious failure of the model almost immediately when compared to observations over a substantial part of the interval the model was supposed to span — would be to go back to the drawing board, beginning by acknowledging that the model is not working. I’d even permit the weasel-phrase probably is not working to let people save face, although in science this should never be necessary as it is understood.
This is what would happen if one built a model for predicting hurricanes, and predicted a steady increase in hurricane frequency and violence that should be easily observable over a 15 year span and observed no such increase. This is what would happen if one built a model for the expected peak of the sunspot count in a given solar cycle that said that it would be much higher than the count in the previous cycle and it turned out to be much lower. It is what should always happen when one builds a model intended to predict something, even something as mundane as the probable time evolution of the prevalence of alcoholism in urban communities or as exotic as the precise date of the coming of the rapture you foretell on the basis of patterns you read in the stars. Modelling is one of my professional areas of strength, and successful models talk, bullshit walks. Models that fail literally have negative value to the extent that they convince people to alter a pattern of investment or activity on the basis of the failed prediction. They are risky!
Now I’m just going to throw this out there. If I were king of the universe, or if I “owned” one of those failing models, and were not heavily invested in the model making one prediction versus another, I would take my model (tuned to a stretch in the second half of the 20th century where there was nominally rapid, CO_2 linked warming even though there was a nearly precisely identical stretch of warming in the first half of the 20th century where CO_2 was irrelevant and my model would fail badly) and I would retune it semi-empirically until it was capable of fitting/predicting both halves of the 20th century using constrained inputs that track as many of the variables as we have semi-reliable measures of — solar activity, CO_2 concentration, likely aerosol contribution, phase of the decadal oscillations. This would probably not be very easy, and it certainly would not be possible to use CO_2, as Bob Tisdale points out, as the One True Control Knob. It would not be very easy because frankly, we have no idea why the Little Ice Age happened, and we have even less of an idea of why, or how, the Earth has systematically recovered since.
We do not have any predictive theory capable of explaining the warming of the first half of the 20th century or somehow differentiating it from the warming of the second half of the 20th century. And yeah, the data we have to input into our putative predictive model across this broad a training set sucks — it wasn’t until World War II that weather prediction became big business and airplanes capable of flying at the top of the troposphere or into the stratosphere were developed and were used to systematically sample the weather to the extent that crude models could be used to predict it in time for things like the D-Day landing, which was contingent on having a prediction of decent flying weather for at least a few days in a row and was postponed several times because that condition failed to materialize.
Still, difficult or not, that would be my primary chore — fixing the model I “owned” so that it worked, at the very least, across the 34 to 50 years for which we have not-so-terrible measurements. Including, of course, the last 15 years where it was failing.
Now here’s a simple exercise. Let’s suppose that I tune my model so that it both matches the 70’s through the 90’s and matches the observations through 2014 where before it predicted 0.6 C surplus warming by 2014. Note well — retune it to the actual observational data, without personal prejudice as to what “climate sensitivity” should really be or what the result should really be. Indeed, I have to make the observed climate the centroid of the span of model results due to the usual Monte Carlo perturbations of the inputs, as any other alternative is naked confirmation bias running amok — I have to presume that I do not know the answer, that Nature is the answer, and that my model needs to correspond to Nature and not the other way around.
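The retuning step described above amounts to letting the observations, rather than prior expectation, set the weights on the candidate drivers. A toy sketch on entirely synthetic numbers — the “observed” series, the CO2 ramp, and the 65-year “natural mode” below are all invented for illustration, not real data:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1950, 2015)
t = years - 1950

# Entirely synthetic "observations": a natural oscillation plus a weak
# trend plus noise -- a stand-in for the real record, NOT real data.
obs = 0.2 * np.sin(2 * np.pi * t / 65.0) + 0.004 * t
obs += rng.normal(0.0, 0.02, years.size)

# Two candidate regressors: a log-CO2 ramp (crude assumed ppm history)
# and a 65-year natural mode.
co2 = 310.0 + 1.5 * t
x_co2 = np.log(co2 / co2[0])
x_nat = np.sin(2 * np.pi * t / 65.0)

# "Unbiased retuning" = ordinary least squares against the observations,
# letting the data choose the weights instead of fixing them in advance.
A = np.column_stack([x_co2, x_nat, np.ones_like(x_co2)])
coef, *_ = np.linalg.lstsq(A, obs, rcond=None)
print("fitted weights (CO2, natural, offset):", coef)
```

In this toy, the fit recovers the natural-mode amplitude that was actually put into the synthetic data; the point is only that nothing in the procedure privileges the CO2 regressor.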
What do you think is going to happen to climate sensitivity predicted by my now unbiased model?
a) It’s going to plummet. Indeed, given the stellar absence of any possibility of explaining the absence of warming by means of things like egregious increases in volcanic aerosols, my entire set of assumptions about aerosols in my previous version of the model is almost certainly wrong, badly wrong. This is going to force the model to de facto ascribe far, far more of the warming of the 80’s and 90’s to natural factors (in agreement with the observation that natural warming in the first half of the 20th century managed to happen at the same rate without CO_2, so natural variability CAN be substantial, and indeed might even be expected on the basis of fairly simple assumptions concerning a likely “rebound” from the LIA’s 9000 year minimum temperatures). It is going to make the feedbacks due to water vapor diminish to next to nothing, possibly even force them to be negative.
b) It’s going to leave them unchanged. Even retuned to fit the data, my model is likely to still produce 5 C warming by 2100 because hey, that’s the “physics”.
c) It’s going to make them even worse. All of that warming is still going on. My model is going to make the heat disappear into the “deep ocean” and then reappear, like a rabbit from a hat, to cause a huge jump in global temperature all at once. In fact, I truly expect the next ENSO event to bump global temperatures by 0.7 C to “catch up” to my original model predictions and get us back on track for even more than 5 C warming by 2100.
Personally, I’d bet on a). I’d bet on a) even before I did other things, like compare the predictions of my model with the distinct GCMs belonging to Joe and Tommy and Sally from the same initial conditions, for a much simpler problem such as a pure untipped “water world” and note that the four models don’t end up with predictions that are even near to one another, so that at least three out of the four must be wrong. I’d bet on four out of four, wouldn’t you?
So Steve, with all due respect, that’s what this is all about: constraining the climate sensitivity, which is derived from the models, not from some elementary principle. The only thing that can constrain the sensitivity is enforcing a requirement that the models agree with reality. It cannot be done a posteriori or a priori independent of the models, and unless and until one acknowledges that the GCMs do not generally agree with each other (and hence cannot all be right) and do not generally agree with observation (and hence are unlikely to be right) and fixes the GCMs until these two things are no longer true, the best that can be said about CAGW is that the current evidence does not support the hypothesis of catastrophic warming.
That, at least, is unambiguously true. No observed warming for 15 years is not a track to catastrophe. No significant SLR (indeed, no significant variation of the rate of SLR over 140 years of records) is not a track to catastrophe. No discernible change in the frequency or violence of storms is not a track to catastrophe. The failure of the Arctic to become “ice free” is not a catastrophe. The increase in Antarctic sea ice to make the overall sea ice level of the planet exceed the 30 year mean is not a catastrophe. Be honest — aside from bullshit arguments made in papers desperate to ascribe any marginal variation in ecosystem to human derived climate change, there isn’t a single shred of actual evidence for any negative sequelae due to global climate change that can be positively attributed to human activity, much as one can find localities where we have indeed damaged the ecology and perhaps even the local weather — like the UHI and urban zone surrounding every city, like the deforestation of the tropics.
Humans did not cause the MWP, the LIA, or the subsequent warming through at least the 1950s. They might be responsible for some fraction of the warming observed in the 30 year stretch from the 70’s to roughly 2000. Humans cannot be responsible for any warming from 2000 (if not earlier) to the present because there hasn’t been any, and it is almost certain that they were not, in fact, the principal cause of the warming observed over the 80s and 90s, which so closely mirrored the warming of the first half of the 20th century that unless you know how to look for specific features in the graphs, the two graphs are indistinguishable from one another when presented on the same relative scale but without the time axis being labelled.
A reasonable person can, indeed, doubt the prediction of CAGW, and not place themselves in opposition with any of the laws of physics, especially when those predictions are without exception derived from models that are in serious disagreement with observation in nearly every dimension they might be expected to predict. And yet, nobody seems inclined to fix the models!
As they say, What’s Up With That?
rgb
Hats off, Dr. Brown. A fantastic response, worthy of being elevated to a post by itself.
I have just had one of my friends tell me that I am wrong as she works at Uni (Exeter) with a Climate Scientist who has just been admitted as a fellow of the Royal Society, & he believes in the 97% claim! I pointed out to her that Einstein said “A scientific theory can be undone by a single fact!”, & that when President of the Royal Society, Lord Kelvin said, “Heavier than air flying machines are impossible!”. Lengthy discussion ensued with a mutual friend online, a true believer that Agenda 21 is nothing more than the benign UN’s environmental blueprint for the 21st Century! Clueless, absolutely clueless!
Venter:
re your post at February 3, 2014 at 7:57 am.
YES! I have suggested the same in a post on Tips&Notes.
Robert Brown’s post needs to be an article if only to enable simple link to it in future threads.
Richard
Nice one Robert Brown. Here is something very recent on water vapor indicating things might not be worse than we thought!
http://dx.doi.org/10.1002/jgrd.50772
C. I. Garfinkel, D. W. Waugh, L. D. Oman, L. Wang, M. M. Hurwitz. Temperature trends in the tropical upper troposphere and lower stratosphere: Connections with sea surface temperatures and implications for water vapor and ozone. Journal of Geophysical Research: Atmospheres, 2013; 118 (17): 9658 DOI: 10.1002/jgrd.50772
If climate activists were doctors:
Your temperature has clearly increased by .5F in the last 24 hours. We have no idea what is causing it and in the absence of a model that predicts it accurately we are going to assume it will continue to increase at the same rate. You will die of hyperthermia when it gets to 109F. If we do nothing you will be dead in 10 days. So we are going to treat it right now aggressively with drugs that have unknown side effects.
Patient declines treatment and gets better anyway because the body is usually quite good at repairing itself.
It’s only a pause. You will die one day. Possibly from a high temperature. Or cold. Or drowning. Or getting hit by an asteroid. Or something.
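The arithmetic behind the satirical prognosis above is just naive linear extrapolation; the 104 F starting temperature is inferred here from “dead in 10 days” and the other stated numbers:

```python
def days_until(t_now, t_fatal, rate_per_day):
    """Naive linear extrapolation: days until t_now reaches t_fatal
    if the most recent trend simply continues unchanged forever."""
    return (t_fatal - t_now) / rate_per_day

# 0.5 F per day toward a fatal 109 F, from an assumed 104 F now
print(days_until(104.0, 109.0, 0.5))  # 10.0
```

The joke, of course, is that the extrapolation step assumes away exactly the thing in question: whether the trend continues.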
If doctors were climate activists:
We need to do a lot more tests before we recommend treatment. How can we give an accurate diagnosis and prognosis unless we have a good handle on the underlying cause?
Right now your temperature is not dangerously high but we will monitor it because we can’t predict it. Most patients presenting with a slightly elevated temperature recover spontaneously.
Dr. Brown says:
This prediction — physics based or not — isn’t the best one in the entire Universe because the greenhouse effect cannot be effectively approximated by a single slab theory — it involves a staggering synthesis of ideas some of which I find a bit dubious — level broadening due to increased (but infinitesimal) partial pressure especially in the wings of the absorption bands, change in the height of the thick layer where the atmosphere finally becomes transparent to CO_2-mediated radiative loss, variation in the DALR itself, lifting of the tropopause — none of which AFAIK can be or have been actually observed to occur as the concentration of CO_2 has increased by 1/3 of the distance to the goal line.
Dr. Brown, why partial pressure and not just the pressure? Any links I can trust? One case will double with a doubling of CO2 and the other will barely budge the broadening, so there is very little increase in the wings. Over the years I have never come across a definitive paper or text online showing this to be true, and right there is why I doubt the AGW story-line. Might also be why we see no warming, because the 1980-2000 increase was natural, as was the 1920-1940 increase.
For instance, I now have a very detailed 1000 foot horizontal IR spectrum at sea level with 5.7 mm precipitable water vapor, thanks to an infrared astronomer. It shows 100% opacity between 645 and 698 wave numbers (≈ 14.33 – 15.5 μm), and I hold that opaque is opaque regardless of any further increase in that band. You can see the pressure broadening due to the 103 kPa pressure. Use a program such as MODTRAN or HITRAN at 10 hPa and you see the toothpick lines with little broadening at low pressure. That’s what I would expect, pressure broadening near the surface. But the AGW story-line goes that if the concentration of CO2 doubles you will also get a large increase of broadening, not due to the absolute pressure as in the example above but now, in addition, due to the increase in the partial pressure, which seems like double counting of broadening, and I’ve never found an explicit definition explaining it.
Which one? Pressure, partial pressure, some mixture of both pressures? Still foggy on that picky point.
“Dr. Brown, why partial pressure and not just the pressure? Any links I can trust? One case will double with a doubling of CO2 and the other will barely budge the broadening, so there is very little increase in the wings. Over the years I have never come across a definitive paper or text online showing this to be true, and right there is why I doubt the AGW story-line. Might also be why we see no warming, because the 1980-2000 increase was natural, as was the 1920-1940 increase.”
I’m a bit foggy there myself. I did spend a few years doing quantum optics, and the broadening of an atomic or molecular line comes from many things. There is doppler broadening, basically a doppler shift detuning of the line due to the relative motion of the molecule, that typically depends on temperature. There is collision broadening, broadening caused by the interruption of the phase of a quantum dipole in the process of emitting or absorbing by a collision, typically a collision that in and of itself is far too weak to actually excite a quantum level. Collision broadening should be related directly to the mean free time between collisions, which should depend only on the density and temperature of the gas in question: density determines the mean spacing between molecules, temperature determines the mean speed of the molecules. There would be some small variation with the size of the molecules in question (and hence their collision cross section) but all molecules in the atmosphere are approximately the same size — a couple or three angstroms — compared to intermolecular spacing that is much, much larger. Petty goes over all of this quite clearly.
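For what it’s worth, the HITRAN-style Lorentz half-width scales with total pressure and weakly (inversely) with temperature; the full formula also has a self-broadening term in the absorber’s own partial pressure, but at ~400 ppm that contribution is tiny. A sketch using typical, assumed CO_2 values for the reference width and temperature exponent (the specific numbers here are illustrative, not taken from any particular line):

```python
def lorentz_halfwidth(p_atm, t_kelvin, gamma_ref=0.07, t_ref=296.0, n=0.7):
    """HITRAN-style pressure-broadened (Lorentzian) half-width in cm^-1.
    gamma_ref (cm^-1/atm at 296 K) and exponent n are typical CO2-like
    values, assumed here for illustration. Self-broadening is omitted,
    which is a good approximation at ~400 ppm."""
    return gamma_ref * p_atm * (t_ref / t_kelvin) ** n

w_surface = lorentz_halfwidth(1.0, 288.0)   # sea level, ~1 atm
w_strat   = lorentz_halfwidth(0.01, 220.0)  # roughly the 10 hPa level
print(f"surface: {w_surface:.4f} cm^-1, 10 hPa: {w_strat:.5f} cm^-1")
```

The two-orders-of-magnitude narrowing at 10 hPa matches the “toothpick lines” seen in MODTRAN/HITRAN runs at low pressure, and nothing in this formula responds to the CO_2 mixing ratio itself except through the neglected self-broadening term.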
I can’t really see any good reason in Petty (or think of one myself) to think that there will be any systematic change in CO_2 line broadening at all at any height due to an increase in its partial pressure, although there most definitely would be one if the absolute pressure were increased. Indeed he has a figure that clearly indicates that the reason CO_2 is such a good absorber is that at 1 atmosphere! it is pressure broadened out to where it reduces the mean free path of IR photons in whole bands to the order of a meter. That’s a pretty damn short mean free path for a dilute molecule. Clearly he is referring to real pressure, not partial pressure.
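That order-of-a-meter mean free path is easy to sanity-check with Beer-Lambert. The effective cross-section below is an assumed order-of-magnitude value chosen to be representative near band center, not a measured number:

```python
def photon_mfp_m(number_density_m3, cross_section_m2):
    """Beer-Lambert mean free path: 1 / (n * sigma)."""
    return 1.0 / (number_density_m3 * cross_section_m2)

k_b = 1.380649e-23                  # Boltzmann constant, J/K
n_air = 101325.0 / (k_b * 288.0)    # air number density at sea level, m^-3
n_co2 = 400e-6 * n_air              # 400 ppm CO2

sigma = 1e-22  # assumed effective absorption cross-section near band center, m^2
mfp = photon_mfp_m(n_co2, sigma)
print(f"mean free path ~ {mfp:.2f} m")
```

With a cross-section of that order, 400 ppm of CO_2 at sea-level density gives a mean free path of about a meter, consistent with the figure described.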
So I can’t say that I completely understand why CO_2 increases are supposed to produce such a pronounced increase in already saturated, already broadened bands. I keep hearing people say that it will increase absorption in the “wings” of the absorption bands, but I don’t ever hear a coherent explanation of why it would do any such thing at constant temperature and constant absolute pressure (and there is always going to be a tiny increase in line broadening for higher temperatures). Also, rather obviously the atmosphere is remarkably stable in both pressure and temperature (and varies considerably over time in both dimensions by an amount that greatly exceeds any possible variation with CO_2 concentration). If there were any instability or positive feedback here we’d see it. Instead we see the opposite, if anything.
If anyone listening in can post a coherent explanation of this, I’d be pleased to learn it. It isn’t that I doubt it; it is that I don’t understand it and am loathe to believe anything (especially something as subtle as this) without some pretty solid analysis. I don’t find it in Petty, and Petty’s book seems as though it would be just precisely the place one SHOULD find it if it is a real issue. I gotta say, his figure 9.13 — while utterly convincing to anyone that doubts that CO_2 is a powerful GHG — doesn’t provide much reason to expect any sort of variation in the line width with partial pressure changes, and to be honest it doesn’t provide much reason to think that changes larger by several orders of magnitude due to ordinary weather-based air pressure and temperature changes are relevant. As far as I know, none of the GCMs actually dynamically shift the absorptivity of GHGs locally with air temperature and/or pressure, although that’s a place where the effects could even be large and asymmetric (large compared to anything I can think of associated with variation of the concentration of CO_2).
Note that increasing CO_2 concentration will reduce the mean free path of molecules up and down through the air column, so it will reduce it high up in the troposphere where the absolute density is dropping to where IR photons in the absorption bands can escape without additional collisions/absorptions. This should make the atmosphere slightly more opaque up there and should indeed “lift” the region where this energy loss occurs to higher/colder temperatures. But at the same time, the lines are narrowing at those higher altitudes and the broad lines of lower altitudes are rapidly shrinking back to comparatively short lines. This seriously compromises the coverage of the bands and actually compensates against any reduction of mean free path there. So again, it is by no means particularly clear to me that this is an important first order effect compared to all sorts of far more important things happening lower in the troposphere with e.g. clouds.
rgb
Once upon a time 99% of doctors smoked Camels.
Perhaps Mr Clooney can afford to ask 99 doctors. I can’t!
“I can’t really see any good reason in Petty (or think of one myself) to think that there will be any systematic change in CO_2 line broadening at all at any height due to an increase in its partial pressure, although there most definitely would be one if the absolute pressure were increased. ”
Same here. That claim that it is partial pressure has always bugged me too… greatly. You deserve a big compliment Dr. Brown, incredible… you are seeing further into all of my questions without me even asking! I’ve been a physics enthusiast for about four decades now so I’m at least familiar with nearly any area you lay out but a master of none.
“So I can’t say that I completely understand why CO_2 increases are supposed to produce such a pronounced increase in already saturated, already broadened bands. I keep hearing people say that it will increase absorption in the “wings” of the absorption bands, but I don’t ever hear a coherent explanation of why it would do any such thing at constant temperature and constant absolute pressure …”
Absolutely! That is exactly where I have been stuck for about three years now.
Yes you have factors such as the peak radiation (Wien) sliding to lower frequencies as the temperature drops with altitude, yes you have an opening of the lines as pressure drops with altitude, and yes you have water vapor dropping out with altitude, and many more processes, but where exactly are you going to get this 3.7 to 5.5 W/m² to raise the surface 1-1.5°C just by going from 0.0003 to 0.0006 fraction of CO2 in the wings? There is only 8.2 W/m² at 216K in the wings to begin with. I can never find something on this that seems to make any sense, lots of assertions.
I think the central opaque band has something to do with the answer. Totally opaque gas bands are one area where I can see that you can apply the Planck integration (banded SB) as if it were a solid ‘surface’, and there the emissivity would be one without question, either from the surface up or from above the TOA down. It seems temperature differences between those two edges would rule in between.
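The banded Planck integration mentioned above is straightforward to carry out numerically. Integrating the hemispheric blackbody flux over the 645-698 cm⁻¹ opaque band at 216 K gives a figure in the same ballpark as the one quoted (the exact value depends on where the band edges are drawn):

```python
import math

def planck_band_flux(t_kelvin, nu1_cm, nu2_cm, steps=2000):
    """Hemispheric blackbody flux (W/m^2) in a wavenumber band
    [nu1_cm, nu2_cm] (cm^-1), by trapezoidal integration of pi * B_nu(T)."""
    h, c, k = 6.62607015e-34, 2.99792458e8, 1.380649e-23

    def b(nu_m):  # spectral radiance per unit wavenumber (m^-1)
        return 2.0 * h * c**2 * nu_m**3 / math.expm1(h * c * nu_m / (k * t_kelvin))

    nu1, nu2 = nu1_cm * 100.0, nu2_cm * 100.0  # cm^-1 -> m^-1
    dnu = (nu2 - nu1) / steps
    total = 0.5 * (b(nu1) + b(nu2)) + sum(b(nu1 + i * dnu) for i in range(1, steps))
    return math.pi * total * dnu

flux = planck_band_flux(216.0, 645.0, 698.0)  # the opaque CO2 band near 15 um
print(f"band flux ~ {flux:.1f} W/m^2")
```

So at tropopause-like temperatures only a handful of W/m² is emitted in the fully opaque band, which makes the question of where several W/m² of additional forcing comes from in the wings a fair one to press.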
If I were to pick one series of top-posts that absolutely needs to be thoroughly fleshed out… it would be in the radiation transfer area, deeply, in detail, down to the line and band level of investigation, without what seem like AGW excuses, for that is where all of my questions seem to collect.
I could go on hours on this topic and just having you say you don’t totally understand a few things in this area lifts me up, maybe it’s not only me with such questions.
Indeed, I have direct knowledge of a case where supposed experts attached to a CT scan facility were sure that a lump on a person’s liver was malignant. The person’s internal medicine specialist was a strong skeptic, pointing to the apparent general health of the individual and other factors.
So he had a surgeon look in the person’s body with a laparascope (an optical viewing tube). The lump was a “hemangioma” (tissue and blood vessels, not uncommon, likely there since birth).
The lump had been discovered with ultrasound, but before doppler ultrasound was available (which might have detected flow) and before MRI imagers were readily available (an MRI detects water, whereas a CT scanner measures density). Perhaps today’s CT scanners are sensitive enough to see small differences in density, as they can distinguish tissue from infected gunk in the sinuses. One of the limitations is image enhancement – some people may be old enough to remember enhancement of pictures from the moon, which at least at that time tended to eliminate detail.
So there’s a real case as an analogy – experts didn’t understand limitations of their technology but were sure of themselves, skeptic was right, advances in the science might shorten the debate.
[snip – more slayers junk – take it elsewhere -Anthony]
“If 99 doctors said … . ” Reminds me of the TV ad when I was a kid that said, “More doctors smoke Camels than any other cigarette.”
In response to Robert Brown on February 3, 2014 at 7:44 am who said in part:
What do you think is going to happen to climate sensitivity predicted by my now unbiased model?
a) It’s going to plummet.
_________
I agree rgb, but only IF you assume that CO2 is a significant driver of temperature, which I doubt. I wrote in 2013:
“IF you wanted to stick with the ECS concept, then you would have to (as a minimum) delete the phony aerosol data, drop ECS to ~1/10 of its current values, add some natural variation to account for the global cooling circa 1940-1975, and run the models. The results would probably project modest global warming that is no threat to humanity or the environment…”
However, I am increasingly convinced that my 2008 climate heresy is essentially correct – that global temperature drives atmospheric CO2 much more than CO2 drives global temperature (I accept that human activities might also drive CO2). As I said below:
“I think you would agree that the use of “CO2 sensitivity to temperature” instead of “climate sensitivity to CO2” (ECS) would require a major re-write of the models.”
**********
http://wattsupwiththat.com/2013/09/28/models-fail-land-versus-sea-surface-warming-rates/#comment-1432696
Reposted below regarding evidence of aerosol fudging of climate models, from DV Hoyt, for Pamela:
Best personal regards, Allan
Please also see
http://wattsupwiththat.com/2013/09/27/reactions-to-ipcc-ar5-summary-for-policy-makers/#comment-1431798
and
http://wattsupwiththat.com/2013/09/19/uh-oh-its-models-all-the-way-down/#comment-1421394
[excerpt]
…– the (climate) models have probably “put the cart before the horse” – we know that the only clear signal in the data is that CO2 LAGS temperature (in time) at all measured time scales, from a lag of about 9 months in the modern database to about 800 years in the ice core records – so the concept of “climate sensitivity to CO2” (ECS) may be incorrect, and the reality may be “CO2 sensitivity to temperature”
I think you would agree that the use of “CO2 sensitivity to temperature” instead of “climate sensitivity to CO2” (ECS) would require a major re-write of the models.
If you wanted to stick with the ECS concept, then you would have to (as a minimum) delete the phony aerosol data, drop ECS to ~1/10 of its current values, add some natural variation to account for the global cooling circa 1940-1975, and run the models. The results would probably project modest global warming that is no threat to humanity or the environment, and we know that just would not do. Based on past performance, the IPCC’s role is to cause fear due to alleged catastrophic global warming, even if this threat is entirely false, which is increasingly probable.
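The 9-month lag quoted above is the kind of quantity estimated with a lagged-correlation calculation. A toy sketch on purely synthetic data (the series, period, noise level, and built-in lag below are all invented for illustration; this says nothing about the real temperature or CO2 records):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 600                 # months of synthetic data
t = np.arange(n)
true_lag = 9            # build a 9-month lag into the toy series

# Toy monthly series: "dco2" follows "temp" with the built-in lag plus noise.
temp = np.sin(2.0 * np.pi * t / 24.0) + 0.2 * rng.standard_normal(n)
dco2 = np.roll(temp, true_lag) + 0.2 * rng.standard_normal(n)
dco2[:true_lag] = 0.0   # drop the wrapped-around samples from np.roll

def best_lag(x, y, max_lag=24):
    """Lag (in samples) of y behind x that maximizes the Pearson correlation."""
    corrs = [np.corrcoef(x[: len(x) - k], y[k:])[0, 1] for k in range(max_lag + 1)]
    return int(np.argmax(corrs))

print(best_lag(temp, dco2))  # recovers the built-in lag from the noisy series
```

Running the same scan on real data is, of course, where all the hard questions about trends, detrending, and time scales live.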
Meanwhile, back at the aerosols:
You may ask why the IPCC does NOT use the aerosol historic data in their models, but rather uses assumed values (different for each model and much different from the historic data) to fudge their models (Oops! I guess I gave away the answer – I should not have used the word “fudge”, I should have said “hindcast”).
[excerpt from]
http://wattsupwiththat.com/2013/09/15/one-step-forward-two-steps-back/#comment-1417805
Parties interested in the fabrication of aerosol data to force-hindcast climate models (in order for the models to force-fit the cooling from ~1940 to ~1975, in order to compensate for the models’ highly excessive estimates of ECS (sensitivity)) may find this 2006 conversation with D.V. Hoyt of interest:
http://www.climateaudit.org/?p=755
Douglas Hoyt, responding to Allan MacRae:
“July 22nd, 2006 at 5:37 am
Measurements of aerosols did not begin in the 1970s. There were measurements before then, but not so well organized. However, there were a number of pyrheliometric measurements made and it is possible to extract aerosol information from them by the method described in:
Hoyt, D. V., 1979. The apparent atmospheric transmission using the pyrheliometric ratioing techniques. Appl. Optics, 18, 2530-2531.
The pyrheliometric ratioing technique is very insensitive to any changes in calibration of the instruments and very sensitive to aerosol changes.
Here are three papers using the technique:
Hoyt, D. V. and C. Frohlich, 1983. Atmospheric transmission at Davos, Switzerland, 1909-1979. Climatic Change, 5, 61-72.
Hoyt, D. V., C. P. Turner, and R. D. Evans, 1980. Trends in atmospheric transmission at three locations in the United States from 1940 to 1977. Mon. Wea. Rev., 108, 1430-1439.
Hoyt, D. V., 1979. Pyrheliometric and circumsolar sky radiation measurements by the Smithsonian Astrophysical Observatory from 1923 to 1954. Tellus, 31, 217-229.
In none of these studies were any long-term trends found in aerosols, although volcanic events show up quite clearly. There are other studies from Belgium, Ireland, and Hawaii that reach the same conclusions. It is significant that Davos shows no trend whereas the IPCC models show it in the area where the greatest changes in aerosols were occurring.
There are earlier aerosol studies by Hand and in other in Monthly Weather Review going back to the 1880s and these studies also show no trends.
So when MacRae (#321) says: “I suspect that both the climate computer models and the input assumptions are not only inadequate, but in some cases key data is completely fabricated – for example, the alleged aerosol data that forces models to show cooling from ~1940 to ~1975. Isn’t it true that there was little or no quality aerosol data collected during 1940-1975, and the modelers simply invented data to force their models to history-match; then they claimed that their models actually reproduced past climate change quite well; and then they claimed they could therefore understand climate systems well enough to confidently predict future catastrophic warming?”, he is close to the truth.”
_____________________________________________________________________
Douglas Hoyt:
July 22nd, 2006 at 10:37 am
MacRae:
Re #328 “Are you the same D.V. Hoyt who wrote the three referenced papers?”
Hoyt: Yes
MacRae: “Can you please briefly describe the pyrheliometric technique, and how the historic data samples are obtained?”
Hoyt:
“The technique uses pyrheliometers to look at the sun on clear days. Measurements are made at air mass 5, 4, 3, and 2. The ratios 4/5, 3/4, and 2/3 are found and averaged. The number gives a relative measure of atmospheric transmission and is insensitive to water vapor amount, ozone, solar extraterrestrial irradiance changes, etc. It is also insensitive to any changes in the calibration of the instruments. The ratioing minimizes the spurious responses leaving only the responses to aerosols.
I have data for about 30 locations worldwide going back to the turn of the century.
Preliminary analysis shows no trend anywhere, except maybe Japan.
There is no funding to do complete checks.”
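Hoyt’s claim of calibration insensitivity follows directly from Beer-Lambert extinction: in a ratio of readings at two air masses, the instrument constant and the extraterrestrial irradiance cancel, leaving only the optical depth. A toy sketch (a single effective wavelength and a made-up calibration factor; the real technique is of course more involved):

```python
import math

def reading(m, tau, s0=1361.0, cal=1.0):
    """Pyrheliometer reading at air mass m under Beer-Lambert extinction.
    cal models an unknown instrument calibration factor; tau is aerosol
    optical depth; s0 is the extraterrestrial irradiance."""
    return cal * s0 * math.exp(-tau * m)

def ratio_index(tau, cal=1.0):
    """Average of the 4/5, 3/4, and 2/3 air-mass ratios Hoyt describes."""
    pairs = [(4, 5), (3, 4), (2, 3)]
    return sum(reading(a, tau, cal=cal) / reading(b, tau, cal=cal)
               for a, b in pairs) / len(pairs)

# Calibration drift cancels in the ratios...
print(ratio_index(0.10, cal=1.00))
print(ratio_index(0.10, cal=0.90))  # same index despite the drift
# ...but an aerosol change (larger tau) shifts the index:
print(ratio_index(0.15))
```

In this single-wavelength model each ratio is exactly exp(tau), so cal and s0 drop out algebraically, which is why the method is sensitive to aerosols but not to instrument drift.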
Jim says:
February 2, 2014 at 2:39 pm “I don’t recall any such ads BTW.”
Jim: I certainly remember the ads. Of course, I am probably older than you, too. It is interesting that pretty much all the politicians were taking money from big tobacco back then, and Al Gore’s family owned a tobacco farm. There is lots of blame to go around on the tobacco thing. Lots of people were involved in that gravy train.
http://www.nytimes.com/2008/10/07/business/media/07adco.html?_r=0
“””””…..Hoyt:
“The technique uses pyrheliometers to look at the sun on clear days. Measurements are made at air mass 5, 4, 3, and 2. The ratios 4/5, 3/4, and 2/3 are found and averaged. The number gives a relative measure of atmospheric transmission and is insensitive to water vapor amount, ozone, solar extraterrestrial irradiance changes, etc. It is also insensitive to any changes in the calibration of the instruments. The ratioing minimizes the spurious responses leaving only the responses to aerosols……”””””
So why no air mass 1 measurement, sort of as a baseline? And precisely what is being measured, if in fact water vapor, ozone, and solar irradiance don’t matter? We do know for sure that water content in the atmosphere does significantly affect the solar spectrum energy that reaches the ground.
Now just exactly what do “aerosols” do to disturb incoming solar spectrum energy that H2O, O3, and CO2 don’t do? Remember, you say on clear days, so no clouds are being nucleated by these aerosols, or anything else. Does the measurement also eliminate the blue-sky Mie and Rayleigh scattering?
“””””…..RS says:
February 3, 2014 at 11:34 am
Once upon a time 99% of doctors smoked Camels……”””””
Wow; the doc has a vet practice on the side.
Much prefer to smoke salmon myself; camels are just too ornery for me.
“””””””……._Jim says:
February 2, 2014 at 1:56 pm
george e. smith says February 2, 2014 at 12:38 pm
…
Well there’s not a hell of a lot left in the 15 micron band by the time the “sunlight” as you call it, gets from the sun to this planet.
My favorite black body radiation graph, says that less than 1% of the sun’s radiant energy is longer than 4 microns wavelength, at which point the solar spectral irradiance (at earth orbit) is about 0.3% of its [value] at the peak (0.5 micron). At 15 microns the spectral irradiance is 0.003%.
And I suppose that the atmospheric CO2 is going to eat all of that before it reaches the surface.
The sun is not a good source of 15 micron radiant energy. Some people still claim we can feel it, as “heat”.
I think you are misinterpreting a lot regarding the Planck curve and the amount of LWIR one feels from the sun …..”””””
Well _Jim, I didn’t interpret ANYTHING regarding the Planck curve; I simply recited the numbers right off the chart, so how does one misinterpret nothing?
But don’t keep us in suspense. Exactly what is it I said about the solar, near-black-body spectrum that you disagree with? Please give us YOUR numbers that refute mine.
So my air mass zero chart of solar spectral irradiance gives a 2066 W/m^2/micron peak at 460 nm, yielding a 1353 W/m^2 TSI (old value). The chart shows the air mass one (surface) peak at about 2/3 of that, or 1377 W/m^2/micron, and tabulates the peak for air mass two as 1215 W/m^2/micron, at 500 nm, with a TSI of 1322 W/m^2 for that spectrum.
So taking that 1215 W/m^2/micron for the 500 nm peak and extrapolating out to 30 times the peak wavelength, or 15 microns, we get 3E-5 of the peak, or about 0.036 W/m^2/micron; but we now have 30 times the number of microns of spectrum width, so that gives approximately 1.1 W/m^2 of surface irradiance for a solar-sourced spectrum centered at 15 microns, at ground level.
So who wants to claim they can detect (feel) the warmth from 1.1 W/m^2 of LWIR energy centered at about 15 microns, and covering about 7.5 microns (half the peak wavelength) out to 120 microns (8× the peak), which contains about 98% of the energy in a BB spectrum?
And that presumes that your body even registers 15 micron radiation as “heat”.
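The ratios in this exchange can be cross-checked against a bare blackbody model of the sun. A sketch assuming a 5778 K photosphere scaled to a 1361 W/m² TSI (top of atmosphere, no atmospheric absorption, so it checks only the relative numbers, not the ground-level values):

```python
import numpy as np

H = 6.62607015e-34      # Planck constant, J s
C = 2.99792458e8        # speed of light, m/s
KB = 1.380649e-23       # Boltzmann constant, J/K
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/m^2/K^4
T_SUN = 5778.0          # K, effective solar temperature (assumed)
TSI = 1361.0            # W/m^2, modern total solar irradiance

def planck(lam, T):
    """Hemispheric blackbody spectral flux, pi * B_lambda, W/m^2 per metre."""
    return np.pi * 2.0 * H * C**2 / lam**5 / np.expm1(H * C / (lam * KB * T))

def solar_spectral_irradiance(lam_um):
    """TOA spectral irradiance (W/m^2 per micron) for a blackbody sun scaled to TSI."""
    lam = lam_um * 1e-6
    return TSI * planck(lam, T_SUN) / (SIGMA * T_SUN**4) * 1e-6

# Spectral irradiance at 15 microns relative to the ~0.5 micron peak region:
ratio = solar_spectral_irradiance(15.0) / solar_spectral_irradiance(0.5)

# Fraction of solar output beyond 4 microns (trapezoid rule over 4-1000 um):
lam = np.linspace(4.0, 1000.0, 200001) * 1e-6
f = planck(lam, T_SUN)
frac_beyond_4um = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(lam))) / (SIGMA * T_SUN**4)

print(ratio)            # about 3e-5, consistent with the chart reading above
print(frac_beyond_4um)  # just under 1 percent of the solar output
```

Both of the comment’s anchor figures (the roughly 3E-5 ratio at 15 microns and the “less than 1% beyond 4 microns” claim) come out of this simple model, so the disagreement, if any, must be elsewhere.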
Gareth;
You have a compendium of observational (vs. computer-simulated) evidence that you find more convincing than the Pause? Do tell.