This is the third part of a series looking at random interesting studies. Part one is here, and part two is here. I read these papers fairly quickly, so do let me know if I’ve made any errors that you can spot.
On political violence
There are a few empirical findings that I very much hope are true. The world is more convenient if cracking down on free speech actually causes extreme views to fester. It would be great if legalising cannabis caused rates of psychosis to fall, so that the libertarian case aligned with the utilitarian one. And it would be a real shame if terrorism actually increased support for the terrorists’ cause. But alas, sometimes you stumble upon a Mick Jagger finding that reminds you that You Can’t Always Get What You Want.
Such is the case with Krause and Matsunaga (2023). They find that support for the AfD (the radical right party in Germany) seems to shoot up after a right-wing terrorist attack. This seems contrary to some of the previous evidence on political violence: Wasow (2020) famously found that non-violent civil rights protests increased Democratic vote share, whereas violent protests backfired and increased Republican vote share.
So, what’s going on with this new study, and can we trust the results? The study design is pretty simple. In June and July 2019, pollsters were conducting the fieldwork for the Eurobarometer surveys in Germany. On June 2nd, the pro-refugee German politician Walter Lübcke was assassinated. Two weeks later, on June 16th, the German police publicly announced that they had arrested a far-right extremist based on DNA found on Lübcke’s clothing.
Now you have the data to conduct a nifty quasi-experiment: how do attitudes differ when you compare people who were interviewed before the arrest was announced to those who were interviewed afterwards?[1]
The answer is that people became more negative about immigration, and support for the AfD increased. The effect sizes involved here are fairly small, though: the shift against immigration was only 0.17 standard deviations. To put it more concretely: when people were asked to rate their view of immigration from 1 (‘very positive’) to 4 (‘very negative’), the assassination seemed to increase the average rating by 0.143.
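If you want to see how those two numbers relate, here’s a quick back-of-the-envelope check (my own arithmetic, not a calculation from the paper):

```python
# Back-of-the-envelope check (my own arithmetic, not from the paper):
# if a 0.17 SD shift corresponds to a 0.143-point move on the 1-4 scale,
# the implied standard deviation of the immigration item is 0.143 / 0.17.
effect_in_sds = 0.17   # standardised effect size reported in the paper
raw_shift = 0.143      # average shift on the 1-4 immigration scale
implied_sd = raw_shift / effect_in_sds
print(f"Implied SD of the 1-4 item: {implied_sd:.2f}")  # ≈ 0.84
```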
Looking at opinion polls, they found that support for the AfD had been relatively stable before the assassination, and then increased by 1.2 percentage points in its aftermath (see the figure below).
The effect here is mainly caused by centre-right voters shifting to the far-right in the aftermath of the assassination - there’s no significant effect among left-wing voters or socially liberal voters. That’s slightly surprising to me: we know that voters for the left-wing party Die Linke have often switched to the far-right in the past, but that doesn’t seem to be what’s going on here. It’s the Merkel fans who were switching to the AfD.
The authors suggest that the mechanism is that the salience of immigration (the amount it gets talked about in the media and by political parties) increases in the aftermath of terrorist attacks. This increase in salience generally benefits the AfD — voters might trust the centre-right party on the economy, but a lot of them trust the radical right to actually reduce immigration.
There might also be a sort of positive radical-flank effect going on. The theory behind the radical-flank effect is basically that if you have a group of nutters and a group of ‘moderates’ campaigning on the same issue, the nutters make the moderates look better by comparison.
So, in this example: Lübcke gets murdered, the AfD leaders call for extremely harsh punishment for the murderer, and some voters now take the view that the AfD are actually the reasonable ones for taking the ‘anti-immigration but also extremely anti-murder’ view.
The study design looks pretty legit, although I’m ever-so-slightly worried about the p-values[2] we’re looking at (or, indeed, not looking at). As far as I can tell, the authors state that the p-values involved are below 0.05, but won’t give any more detail than that. This makes me slightly suspicious (as discussed in a previous piece here), and I’m definitely not about to suddenly completely change my view from ‘political violence is probably mostly counterproductive’. But still, it is interesting.
On discrimination
Do employers discriminate against women? The answer, as is often the case when thinking about social science questions, is ‘it depends’. But more specifically, according to Schaerer et al. (2023), the answer is that discrimination against women for jobs historically held by men has been non-existent since 2009. Yes, non-existent!
This was a pre-registered meta-analysis covering 85 audit studies and 361,645 individual job applications. As a reminder: ‘pre-registered’ means the authors specified exactly how they would do the study in advance (so they couldn’t fiddle with the methodology to get the result they wanted after the data came in), and an audit study is one where you send employers pairs of resumes that are identical except for the male or female name at the top.
This was a global study that focused mainly on developed countries, as you can see in the figure below. For obvious reasons, I don’t think we can draw any conclusions about discrimination in Sub-Saharan Africa, most of South America, or most of Asia here!
Okay, so what do they find exactly? Well, the average odds of a male applicant receiving a callback were 0.91 times the odds of an equally qualified female applicant (the 95% confidence interval ranges from 0.86 to 0.97), suggesting a mild bias in favour of female applicants, with a very reasonable p-value of 0.003.
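For anyone unfamiliar with odds ratios, here’s a minimal sketch of how an estimate like this is constructed. The callback counts are invented for illustration; only the shape of the calculation reflects what the meta-analysis does:

```python
# A minimal sketch of where a callback odds ratio comes from. All counts
# below are hypothetical; only the resulting ratio (~0.9) mirrors the
# meta-analytic estimate.
import math

male_callback, male_no_callback = 910, 9090        # hypothetical counts
female_callback, female_no_callback = 1000, 9000   # hypothetical counts

odds_ratio = (male_callback / male_no_callback) / (
    female_callback / female_no_callback)

# Standard error of log(OR) for a 2x2 table gives the 95% CI:
se = math.sqrt(1 / male_callback + 1 / male_no_callback
               + 1 / female_callback + 1 / female_no_callback)
low = math.exp(math.log(odds_ratio) - 1.96 * se)
high = math.exp(math.log(odds_ratio) + 1.96 * se)
print(f"OR = {odds_ratio:.2f}, 95% CI ({low:.2f}, {high:.2f})")
# → OR = 0.90, 95% CI (0.82, 0.99)
```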
However: when control variables were introduced (examples include the presence of moderators in the study and a measure of study complexity), the p-value increases to a non-significant 0.883.
I’ll be honest and tell you that I’m not sure exactly how to think about this; deciding how and why to include controls in a meta-analysis seems a bit tricky. For now, I’m going to roll with the authors’ claim: the finding that employers discriminate against men isn’t statistically significant once the relevant controls are introduced.
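For intuition, here’s a toy version of that kind of adjustment, assuming a simple weighted meta-regression. The authors’ actual model is more involved, and every number below is invented:

```python
# A toy meta-regression (not the authors' actual model): study-level
# log odds ratios are regressed on a moderator, weighted by precision.
# The intercept then acts as the moderator-adjusted average effect;
# it is that kind of estimate whose p-value rises with controls.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 85                                     # studies in the meta-analysis
log_or = rng.normal(-0.09, 0.2, n)         # hypothetical per-study effects
sampling_var = rng.uniform(0.01, 0.05, n)  # hypothetical sampling variances
complexity = rng.standard_normal(n)        # hypothetical moderator

X = sm.add_constant(complexity)
fit = sm.WLS(log_or, X, weights=1 / sampling_var).fit()
print(fit.params)   # [adjusted mean effect, complexity slope]
print(fit.pvalues)  # first entry: significance of the adjusted effect
```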
Here’s the figure showing the changes in discrimination over time in each category of job:
Discrimination against men in stereotypically female jobs specifically really does seem to be something that’s going on, and the authors find that this is robust and stable over the years. But for all other categories, there doesn’t seem to be any significant effect when introducing controls relating to study design. You can see the lists of stereotypically female jobs, stereotypically male jobs, and ‘gender-balanced’ jobs used below.
The study also looks at something else: what did people forecast was going to happen here? Could either scientists or ordinary people predict that these would be the results of this meta-analysis? The answer, perhaps unsurprisingly, is no.
Well, maybe ‘no’ is a bit harsh. They correctly predicted that discrimination against women would fall over time. They also correctly predicted that men would be discriminated against in stereotypically female jobs. The most interesting result, though, is that they massively underestimated the amount that discrimination against women would decrease by. The figure below gives you the gist of the results.
Academics thought that in the 70s/80s there would be lots and lots (henceforth: 2x lots) of discrimination against women (for jobs that weren’t in the ‘stereotypically female’ category), and that this would fall over the years to lots of discrimination. Ordinary people thought there would be 3x lots of discrimination, and it would fall to 2x lots. In fact, it seems more like we started with some discrimination, and now we have no discrimination.
Okay, what should we think of this study? I’m generally impressed. It was pre-registered, which is great. It had a load of experts red-teaming the pre-registered methodology, to make sure they hadn’t structured the study in such a way that they would find a certain result. The p-values are all impressively low.
I guess the one big problem, which the authors acknowledge, is that they didn’t include many studies published before 2000. They also mention that if you remove just one large study, the time trend becomes insignificant. So maybe we’re dealing with a case where discrimination against women in the labour market was never a thing, even for stereotypically male jobs. I think that conclusion is probably too hasty: the study in question was massive, and there’s no reason to just drop it from the meta-analysis.
What’s the takeaway here? You can sort of morph the findings of this study to show whatever you want. You could say that this shows that discrimination against women in the labour force isn’t really much of a thing anymore (and perhaps never was). You could also say that it proves that feminist activism was probably successful in reducing discrimination against women, and that we need more of it in places where women are still discriminated against (remember, we’re only looking at developed countries here).
Or, if you’re social science-pilled like me, you can just say that we should be funding more meta-analyses like these: pre-registered, very carefully designed, with input from external experts who can point out problems with the study.
[1] I should note that the study also does a time-series analysis where they just look at how support for right-wing parties is correlated with right-wing terrorist attacks, but I focus on the quasi-experimental evidence because I find it more compelling.
[2] As a reminder, a p-value is the chance that we would get a result at least this extreme if there were no effect at all. A p-value below 0.05 is the standard threshold for a significant effect, but if a p-value is only just below 0.05, we have reason to be suspicious! In fact, I tend not to be particularly confident in a paper unless the p-value is well below 0.05.
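If that definition feels abstract, here’s a quick simulation of it (my own illustration, with made-up numbers):

```python
# Simulating the p-value definition: if the null is true, how often do
# we see an estimate at least as extreme as the one we observed?
import numpy as np

rng = np.random.default_rng(42)
# 100,000 studies where the true effect is zero; each estimate is noise
# with a standard error of 0.1 (all numbers hypothetical).
null_estimates = rng.normal(0.0, 0.1, 100_000)
observed = 0.2  # the estimate our hypothetical study actually found
p_value = np.mean(np.abs(null_estimates) >= abs(observed))
print(f"two-sided p ≈ {p_value:.3f}")  # ≈ 0.045: just under 0.05
```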