by Judith Curry
On the importance of expertise from other fields for COVID19 and climate change.
This post is motivated by a tweet from Steve McIntyre, with comment from Ken Rice:
Here is the link to Annan’s post Dumb and Dumber; it’s actually quite good. The money quote:
“All these people exhorting amateurs to “stay in their lane” and not muddy the waters by providing analyses and articles about the COVID-19 pandemic would have an easier job of it if it wasn’t for the supposed experts churning out dross on an industrial scale.”
Shortly after spotting this twitter exchange, I spotted a link (tweeted by Oxford Philosophy) to a new paper entitled Epistemic Trespassing. Excerpts:
“Epistemic trespassers are thinkers who have competence or expertise to make good judgments in one field, but move to another field where they lack competence—and pass judgment nevertheless. We should doubt that trespassers are reliable judges in fields where they are outsiders.” In other words, stay in your lane.
“Trespassing is a significant problem in an age of expertise and punditry, but it’s not new. In Plato’s Apology, Socrates tells us he tracked down citizens in Athens who had reputations for being skilled. He met politicians, poets, and craftsmen and tested their mettle. As Socrates says, he ‘found those who had the highest reputation were nearly the most deficient’. Socrates diagnosed the problem: because these men had been so successful in their particular crafts, each one ‘thought himself very wise in most important pursuits, and this error of theirs overshadowed the wisdom they had’. Puffed up by their achievements in one domain, the successful Athenians trespassed on matters about which they were ignorant.”
“First, trespassing is a widespread problem that crops up especially in the practice of interdisciplinary research, as opposed to what we might call ‘single-discipline’ research. Second, reflecting on trespassing should lead us to have greater intellectual modesty, in the sense that we will have good reason to be far less confident we have the right answers to many important questions.”
“Epistemic trespassing of the sort I’ve noted is easy to recognize. Experts drift over a highly-visible boundary line and into a domain where they lack either the relevant evidence or the skills to interpret the evidence well. But they keep talking nonetheless. Experts on a public stage are cast in the role of the ‘public intellectual’ or ‘celebrity academic’. They may find trespassing all but impossible to resist. Microphones are switched on, TV cameras zoom in, and ‘sound bites’ come forth, coaxed out of the commentators by journalists. So what do you have to say about philosophy, Neil deGrasse Tyson? And what about arguments for the existence of God, Professor Dawkins? I don’t think trespassing is exclusively a problem for scholars in the limelight, however, and one of my goals here is to explain why ordinary researchers often risk trespassing, too.”
“But first we must understand what the epistemological problem with trespassing is. There is not only one problem. Consider three types of problematic trespassing cases, where two different fields share a particular question:
- (a) Experts in one field lack another field’s evidence and skills;
- (b) Experts in one field lack evidence from another field but have its skills;
- (c) Experts in one field have evidence from another field but lack its skills.
“I will examine three strategies to justify acts of trespassing and thereby preserve rational confidence in trespassers’ answers to hybridized questions. Again, we are assuming some trespassers are experts in one field but encroach on another field.
- (D1) I am trespassing on another field, but that field does not feature any relevant evidence or skills that bear on my view about p;
- (D2) I am trespassing on another field, but my own field’s evidence conclusively establishes that p is true;
- (D3) I am trespassing on another field, but my own field’s skills successfully ‘transfer’ to the other field.”
“I suspect we must trespass to answer most important questions. Perhaps this means we should never trespass alone. Instead, we must rely on the expertise of others. What we need, to extend the trespassing metaphor, is an ‘easement’ or ‘right of way’ for travel beyond our fields’ boundaries. The right of safe passage could be secured by our collaboration with cross-field experts. Imagine your colleague is a representative source of evidence, skills, and potential criticism from another field. Even if you don’t have direct knowledge of that field, if your colleague tests out your answer to a hybridized question and tells you it sounds right to her, then your view is apparently more reasonable than it would have been otherwise. Trespassers may gain reasonable beliefs by engaging in certain kinds of discussion with cross-field colleagues.”
While this paper raises some interesting issues, its main goal seems to be protecting the turf of academic subfields.
COVID19 and climate change
Complex issues such as COVID19 and climate change, with massive policy implications, introduce a whole host of additional issues related to epistemic trespassing. I’ve written many blog posts on expertise [link].
Here is a pet peeve of mine: many academics who label themselves as ‘climate scientists’, even though their degrees might be in economics, biology, whatever, have no compunction about speaking publicly, and responding to reporters’ queries, on climate topics well outside their expertise. They use their status as ‘climate scientist’ to expound on aspects of climate science, economics, policy, whatever, that they know next to nothing about.
The flip side of this coin is the dismissal by academics of the likes of Steve McIntyre, who has brought much needed expertise in statistics and data probity to the field of paleoclimate.
Moving on to COVID19, the problems with COVID19 projections were summarized in a number of posts in the most recent CoV Discussion Thread:
- Don’t believe the COVID models – that’s not what they’re for [link]
- Mathematical models to characterize early epidemic growth: A review. [link]
- On the predictability of infectious disease outbreaks [link]
- Dramatic reduction in COVID disaster projections [link]
- America’s most influential COV model just revised its estimates downward – but not all models agree [link]
- How can COVID models get it so wrong? [link]
James Annan has another good blog post: Model calibration, nowcasting and operational prediction of the COVID19 pandemic
“Turns out that calibrating a simple model with observational data is much the same whether it’s paleoclimate or epidemics. The maths and the methods are much the same. In fact this one is a particularly easy one as the model is embarrassingly linear (once you take the logarithm of the time series).”
“The basic concept is to use the model to invert the time series of reported deaths back through the time series of underlying infections in order to discover the model parameters such as the famous reproductive rate R. It’s actually rather simple and I am still bemused by the fact that none of the experts (in the UK at least) are doing this. I mean what on earth are mathematical epidemiologists actually for, if not this sort of thing? They should have been all over this like a rash. The exponential trend in the data is a key diagnostic of the epidemic and the experts didn’t even bother with the most elementary calibration of this in their predictions that our entire policy is based on. It’s absolutely nuts. It’s as if someone ran a simulation with a climate model and presented the prediction without any basic check on whether it reproduced the recent warming. You’d get laughed out of the room if you tried that at any conference I was attending. By me if no-one else (no, really, I wouldn’t be the only one).”
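The calibration Annan describes is straightforward to illustrate. Below is a minimal sketch of the log-linear step: during the early exponential phase, taking the logarithm of the deaths time series turns the growth into a straight line, whose slope is the daily growth rate. The numbers are synthetic, and the conversion from growth rate to R assumes a fixed generation interval of about 5 days, which is an illustrative assumption rather than a fitted or authoritative value (Annan’s actual procedure inverts a mechanistic model and is more involved than this).

```python
import math

# Synthetic daily reported deaths during early exponential growth
# (illustrative numbers, not real data)
deaths = [2, 3, 4, 6, 9, 13, 19, 28, 41, 60]

# Log-transform: exponential growth becomes a straight line
days = list(range(len(deaths)))
log_d = [math.log(d) for d in deaths]

# Ordinary least-squares slope of log(deaths) vs. time = daily growth rate r
n = len(days)
mean_x = sum(days) / n
mean_y = sum(log_d) / n
r = sum((x - mean_x) * (y - mean_y) for x, y in zip(days, log_d)) \
    / sum((x - mean_x) ** 2 for x in days)

doubling_time = math.log(2) / r  # days for deaths to double

# Rough translation of growth rate into the reproductive number R,
# assuming a fixed ~5-day generation interval (an assumption for
# illustration; real estimates use the full generation-interval
# distribution)
generation_interval = 5.0
R = math.exp(r * generation_interval)
```

The point of the exercise is exactly the “elementary calibration” Annan mentions: before trusting a model’s projections, check that its implied growth rate reproduces the slope actually observed in the data.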
While I haven’t dug into COVID models at all, with regard to model calibration and nowcasting my experience in weather forecasting is even more relevant here than climate modeling. For operational weather prediction (say for hurricanes), there are several components to calibration of the forecasts. You can calibrate the model inputs and/or the model outputs. You can calibrate the model based on historical forecasts for previous epidemics (compared with observations). You can also calibrate the model based on recent error statistics (in model inputs and/or outputs). The experiences/outcomes of each country impacted by COVID provide data to be used in calibration.
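The simplest form of output calibration from recent error statistics is a bias correction: compare the last few forecasts against what was observed, and subtract the mean error from the next raw forecast. A minimal sketch, with made-up numbers standing in for any forecast quantity (hurricane intensity, daily deaths, etc.):

```python
# Recent paired forecasts and observations (illustrative numbers only)
recent_forecasts = [100, 120, 150, 180]
recent_observed  = [ 90, 105, 130, 160]

# Mean recent error = systematic bias of the model output
bias = sum(f - o for f, o in zip(recent_forecasts, recent_observed)) \
       / len(recent_forecasts)

# Apply the correction to the next raw model output
raw_forecast = 210
calibrated_forecast = raw_forecast - bias
```

Operational systems use much richer statistical corrections than a constant offset, but the principle is the same: recent verification data feed back into the next forecast.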
It appears that the weather/climate community has much to offer in terms of COVID modeling. Fortunately, weather and climate scientists haven’t been ‘staying in their lane’; we’ve seen a number of COVID analyses from this community (including a few at Climate Etc.).
The postnormal science group (Funtowicz, Ravetz, van der Sluijs et al.) has written an article PostNormal Pandemics.
“Despite the truly historic mobilization of science, our knowledge in crucial areas is still swamped by ignorance, especially on the sources of the virus but also on its progress and future outcomes. The expertise employed in COVID-19 policy advice builds on speculative assumptions on the virus itself, and how far it’s possible to control and predict how people behave.
Known unknowns include the real prevalence of the virus in the population; the role of asymptomatic cases in the rapid spread of the virus; the degree to which humans develop immunity; the dominant exposure pathways; the disease’s seasonal behaviour; the time to deliver global availability of an effective vaccine or cure; and the nonlinear response of individuals and collectives to the social distancing interventions in the complex system of communities interconnected across multiple scales, with many tipping points, and hysteresis loops (implying that society may not be able to rebound to the state it was in before the coronavirus interventions took place). These deep uncertainties make quantitative predictions speculative and unreliable.
Instead, following a pattern well known to PNS practitioners, predictions which purportedly “jarred the U.S. and the U.K. to action” can only be obtained by mathematical models that produce crisp numbers, even though these numbers have been obtained at the cost of artificially compressing the associated uncertainty. “There is no number-answer to your question,” explodes an angry medical expert to the politician trying to force a number out of him.
It would be much more effective to run our societies under the assumption that our resources should not be allocated according to a strategy of prediction and control.
More data (even ‘reliable data’) and better predictive models cannot resolve the ‘distribution of sacrifice’ which involves, among other things, the arbitration of dilemmas that appear at every scale. This cannot be delivered by artificial intelligence, algorithms and models alone. We need to pursue an adaptability based on preserving diversity and flexible management.”