Thursday, September 25, 2014

Is the number of female Royal Society Fellowships a red herring?

Personally, I'm in no doubt that conscious and unconscious, personal and systematic biases hinder women's progress in science on many levels. However, the recent outcry over the number of women selected by the Royal Society for its University Research Fellowships is (probably) a red herring.

Let's break it down statistically. We'll be looking for significant statistical deviation at the 0.05 level, using one-tailed tests (asking only whether women are under-represented; I doubt that anyone, myself included, much cares about the possibility of men being under-represented).

Here is the table of application and acceptance rates since 2010 from the Royal Society:

[Table: numbers of applications and awards by gender for each year, 2010-2014, from the Royal Society.]

The key ingredients here, in assessing whether the Royal Society shows any bias, are the number of women accepted each year and the percentage of applicants who were female.

Doing a one-tailed binomial significance test for the number of women accepted each year, based on the number expected given the percentage of female applicants, we get the following (I've used the numbers above, though other sources give 43 for the number of awards in 2014; I don't know why that discrepancy exists):

P_2014 = 0.0164
P_2013 = 0.437
P_2012 = 0.533
P_2011 = 0.440
P_2010 = 0.972

(p-values calculated using http://graphpad.com/quickcalcs/binomial1.cfm; the sketch below shows the same calculation in code)
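
If you'd rather reproduce these numbers in code than with the web calculator, here's a minimal Python sketch using scipy. The example inputs are illustrative placeholders to be read off the table above, not figures I'm asserting:

    # One-tailed binomial test: the probability of k or fewer female
    # awardees out of n awards, if each award independently went to a
    # woman with probability equal to the female share of applicants.
    from scipy.stats import binom

    def one_tailed_p(k_women, n_awards, female_applicant_fraction):
        # P(X <= k) for X ~ Binomial(n, p): how often pure chance would
        # produce an outcome at least this skewed against women
        return binom.cdf(k_women, n_awards, female_applicant_fraction)

    # Placeholder inputs only -- substitute the real counts and
    # applicant fractions from the Royal Society's table:
    print(one_tailed_p(2, 40, 0.20))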

Clearly 2014 is very different from the other years, in which the number of awards to female scientists was pretty much as expected (in fact we came close to significant over-representation in 2010, but as I said before, who cares? I think we can live with that!).

So it looks like 2014 saw a sudden emergence of bias in the Royal Society's selections. But we need to account for the fact that we're performing this significance test five times: if you look at anything for long enough it will eventually do something 'odd', and no doubt generate Twitter outrage. The standard way to handle this is a correction for multiple hypothesis testing, and the best standard method is probably the Holm-Bonferroni method. Its first, strictest step compares the smallest p-value against the significance level we chose (0.05) divided by the number of tests (5), giving 0.01 - just below the 0.0164 we see for 2014, so the procedure stops there and nothing is declared significant. I'd say the best interpretation of this year's acceptance rate, then, is that it's suspicious, but not sufficiently low to be definite evidence of bias.
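
For concreteness, here's the procedure run over the five p-values above, in a short Python sketch of my own:

    # Holm-Bonferroni: sort the p-values; compare the smallest to
    # alpha/m, the next to alpha/(m-1), and so on, stopping at the
    # first comparison that fails.
    def holm_bonferroni(p_values, alpha=0.05):
        m = len(p_values)
        rejected = []
        for i, p in enumerate(sorted(p_values)):
            if p > alpha / (m - i):   # threshold relaxes at each step
                break                 # first failure ends the procedure
            rejected.append(p)
        return rejected

    p_values = [0.0164, 0.437, 0.533, 0.440, 0.972]  # 2014 down to 2010
    print(holm_bonferroni(p_values))  # [] -- 0.0164 misses the 0.01 cut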

Of course, we should be asking why only 20% of applicants were female in the first place. The Royal Society, like almost every scientific body, could probably be doing more to encourage female applicants. But given the applications it received, there is no firm statistical evidence of biased selection. If we saw the same sort of result next year, that would change everything - but given the outcry I think we can rest assured that that won't be allowed to happen. I'm also a little concerned about that 40/43 discrepancy, which might make this result just significant - that's the absurd binary nature of significance tests for you. If you want to do a Bayesian analysis and compare, then please do, and I'll happily post the results.
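
For what it's worth, here's the shape of the quickest such Bayesian check I can think of: a flat Beta(1,1) prior on the probability that any one award goes to a woman, updated by the award counts (again with the placeholder inputs from the earlier sketch, not asserted figures):

    # Beta-binomial sketch: with a Beta(1,1) prior, observing k women
    # among n awards gives a Beta(1+k, 1+n-k) posterior on the
    # per-award probability of a female awardee.
    from scipy.stats import beta

    k_women, n_awards = 2, 40      # placeholder counts, as before
    applicant_fraction = 0.20      # placeholder female applicant share

    posterior = beta(1 + k_women, 1 + n_awards - k_women)
    # Posterior probability that the per-award rate for women sits
    # below the rate the applicant pool alone would predict
    print(posterior.cdf(applicant_fraction))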

Update 1: Luca Borger points out that accepting so few women applicants this time is unlikely to mean more apply next year. And Ben Sheldon points to the fact that fewer women were selected at every stage of the process. The problem with assessing bias against women in science is that the overall effect, which is undoubtedly large, arises from many cumulative and feedback effects like these, so the outcry against the Royal Society may be useful in combating them even if, in isolation, this year's selection does not appear statistically biased.

Update 2: David Sumpter shows how the cumulative effect of many individually non-significant biases can lead to huge system-wide imbalances.
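
As a toy illustration of that arithmetic (mine, not David's actual model): if women pass each of several successive selection stages at a slightly lower relative rate, the shortfalls compound.

    # A 5% relative disadvantage at each of six career stages
    # multiplies into a roughly 26% end-to-end shortfall, even though
    # no single stage's bias would look significant on its own.
    relative_pass_rate = 0.95   # women pass each stage at 95% of men's rate
    n_stages = 6
    print(relative_pass_rate ** n_stages)   # ~0.735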
