(I’m trying to get back into the habit of posting once a week or more, so do forgive me if the articles seem a little rusty or unoriginal while I get back into the swing of things!)
Something I think may be at work here is that, for collective expertise to function, individual experts have to rely on their own subjective judgment calls. A naive layperson can probably beat the typical expert by taking the average view of all experts together (since their errors may cancel out), but when the typical expert adopts that averaging as their own rule, the field becomes degenerate.
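A minimal simulation of that dynamic, with entirely made-up numbers: averaging fifty independent expert guesses handily beats a lone expert, but if most experts just echo the consensus, the crowd's accuracy collapses back toward that of the few who still think for themselves.

```python
import numpy as np

rng = np.random.default_rng(0)
truth = 100.0      # the quantity everyone is estimating (hypothetical)
noise_sd = 10.0    # spread of an individual expert's error
n_trials = 10_000

def crowd_error(n_independent):
    # n_independent experts form noisy private estimates; experts who
    # merely repeat the consensus add nothing, since the crowd average
    # then still equals the mean of the independent estimates.
    signals = truth + noise_sd * rng.normal(size=(n_trials, n_independent))
    return np.abs(signals.mean(axis=1) - truth).mean()

print(f"typical lone expert error:               {noise_sd * np.sqrt(2 / np.pi):.2f}")
print(f"crowd of 50 independent experts:         {crowd_error(50):.2f}")
print(f"crowd where only 5 think for themselves: {crowd_error(5):.2f}")
```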
Of course an expert can be heterodox in their public arguments and take the average in their pragmatic bets, and I suspect almost all experts do this to a degree, but I also expect they entangle the two to a degree.
I'm reminded of the famous "prediction" of the possible COVID death toll in the US by a high-level epidemiologist advising the British government, putting the number at over 2 million in the very early days of the outbreak (I think it's the one by Dr. Neil Ferguson referenced here: https://www.cato.org/blog/how-one-model-simulated-22-million-us-deaths-covid-19). The death toll as of now is 1.1 million (https://covid.cdc.gov/covid-data-tracker/#datatracker-home). That paper ended up driving a lot of policy decisions on the strength of the author's credentials, but I read it, and if I remember correctly there was no reference to confidence intervals, error bars, or anything of the sort. It seemed to be literally plugging early, unreliable numbers into a spreadsheet they already had and taking the number that came out at face value, without much further consideration. What bothered me far more than the prediction being wrong was that so few people in the establishment seemed to recognize how fake it looked from the start, as if they didn't understand how to read data like this. Are epidemiologists actually trained in more statistics than plugging numbers into a formula?
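Even a crude Monte Carlo over the inputs would have shown how wide the honest error bars were. A sketch with purely illustrative ranges, not the paper's actual parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
population = 330e6                      # rough US population

# Purely illustrative early-pandemic uncertainty ranges (not Ferguson's inputs):
attack_rate = rng.uniform(0.2, 0.8, n)  # fraction of the population eventually infected
ifr = rng.uniform(0.002, 0.02, n)       # infection fatality rate

deaths = population * attack_rate * ifr
lo, med, hi = np.percentile(deaths, [5, 50, 95])
print(f"projected deaths: {lo / 1e6:.1f}M to {hi / 1e6:.1f}M (median {med / 1e6:.1f}M)")
```

A headline of "somewhere between a few hundred thousand and several million, depending on inputs we barely know" would have been far more honest than a single 2.2 million point estimate.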
I appreciate that, early in an unknown situation, it is good to know the worst case even though it is unlikely to actually happen, and that the usefulness of models often lies beyond the predictions they calculate: knowing that the transmission rate dominates the early shape of the spread leads to actionable recommendations, like social distancing and masks. But then you see the big publication of nonsense numbers like "2 million people are going to die!" and prominent mathematicians on Twitter arguing about the exponent of the disease's exponential growth, and no one steps in to say, "Guys, it's not exponential; it's sigmoidal. There's going to be an inflection point where the spread slows down, and until we hit it we can't predict where it will be, because that depends on terms like 'percentage of the population that will eventually get infected.' So maybe we should all calm down a bit, make the best decisions we can under the uncertainty, and save the 5-year, 3-decimal-point predictions for your next novel."
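To make the sigmoidal point concrete, here's a minimal sketch with made-up parameters: before the inflection point, a logistic curve and a pure exponential with the same growth rate differ by a fraction of a percent, so early data pins down the growth rate but says almost nothing about the eventual ceiling.

```python
import numpy as np

# Hypothetical logistic (sigmoidal) epidemic curve:
#   I(t) = K / (1 + exp(-r * (t - t0)))
K, r, t0 = 1_000_000, 0.2, 60    # made-up ceiling, growth rate, inflection day
t = np.arange(0, 30)             # only the first 30 days of data

logistic = K / (1 + np.exp(-r * (t - t0)))
exponential = logistic[0] * np.exp(r * t)  # pure exponential with the same rate

# Largest relative gap over the first month is about 0.2%, far below the
# noise in early case counts, so the data can't pin down K this early.
print(np.max(np.abs(exponential - logistic) / logistic))
```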
All of which is probably why I'm not in charge, because people like to believe their leaders are superhumanly knowledgeable and in command of the situation. Until they find out the truth; then they get pissed and storm the Bastille. And better leadership would involve showing confidence in how to navigate uncertainty, which is harder. But it all speaks to me of a failure in the education of our upper class, and of the poor selection of second-generation, reverted-to-the-mean, hereditary elites, which worries me a lot. At least in other systems of hereditary nobility there was supposed to be a seneschal doing the actual management of a noble's property, because everyone understood the noble was probably a fool.
Sorry to rant on someone else's comments section. This is a topic I think a lot about, too.
Well, this article seemed a little rusty and unoriginal, but I enjoyed it anyway!
Seriously though, I really like the idea that prediction-making and information-gathering are separate skills. I'm sure there are plenty of predictors who succeed primarily on the back of the info that others have gathered, so it seems reasonable to value experts who succeed primarily on the back of the info they unearth themselves.
Great post, but whilst it's certainly a good thing that outsiders can fairly easily discern whether a study will replicate, it implies something like researchers themselves being dishonest or unintelligent for taking the research in their fields seriously, or for pretending to. Which is certainly quite scary.
Good point - although a researcher may be in a bit of a sticky situation if they carry out a well-designed study and get a p-value that is below 0.05 but still fairly high. They've done everything right and achieved a significant result, yet may still come to the view that it probably wouldn't replicate.
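A quick back-of-the-envelope on that sticky situation, assuming (generously) that the observed effect equals the true effect: a result that just clears the bar at p = 0.04 gives an identically designed replication only about a coin flip's chance of reaching p < 0.05 again.

```python
from scipy.stats import norm

p_observed = 0.04
z_observed = norm.ppf(1 - p_observed / 2)  # z-score of the original two-sided result, ~2.05
z_critical = norm.ppf(0.975)               # two-sided alpha = 0.05 threshold, ~1.96

# Power of an identical replication, taking the observed effect at face value:
replication_power = norm.cdf(z_observed - z_critical)
print(f"estimated replication probability: {replication_power:.0%}")  # about 54%
```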
Very glad to have you regularly posting again!
Thank you!
This is very convincing
Thanks!
I got 16 out of 21
I've been thinking about this a lot recently. A question I keep coming back to is: to what extent is expertise useful a year out, versus useful right now? I think many experts are good at solving today's problems, but figuring out which problems in their field will need to be solved in a year is a rather different problem.
Experts have been useless since time immemorial. Pharaoh called his magicians, Nebuchadnezzar called his astrologers and necromancers and demonologists and snake charmers and wizards and wise men, and even Eve consulted with the snake.
And yet, they're still around.