Before we get into this properly, there’s a bit of background information I need to share so that you’ll understand, fully, how and why this article came to be written and what, exactly, I’m trying to achieve here.
The origins of this article lie in a three-way Twitter conversation between myself (@Unity_MoT), Earwicga (@earwicga) and a policy officer from Coventry Rape Crisis Centre (@CRASAC) regarding the accuracy and reliability of the kind of rape statistics that are frequently cited both by the media and by rape-awareness campaigns and campaigners.
The original working title for this article was going to be ‘Rape Statistics for Feminists’.
This is not because feminists are necessarily any more or less prone to using dodgy rape statistics than anyone else, but because they are – or so it seems to me – somewhat more likely to have their use of statistical evidence questioned or challenged, and their arguments dismissed out of hand should any of it prove to be of dubious provenance, for reasons that are perhaps most clearly articulated in Orwell’s classic political essay ‘Notes on Nationalism’.*
*Orwell’s extended definition of ‘nationalism’ is, I would argue, readily applicable to any strong ideological world-view – including a number of strands of feminist ideology – hence the relevance of his essay despite the fact that it pre-dates modern feminism.
As things stand, I decided not to use that title in the end for the simple reason that, after crunching a fair amount of data, it became perfectly apparent that the ‘hot statistic’ of the moment, which suggests that a woman is raped in the UK every ten minutes, is not only wrong but significantly underestimates the actual scale of the problem – which is all just a little embarrassing when the source of that statistic turns out to be the Department of Health.
Why are rape statistics so problematic?
There are a number of difficulties that commonly arise when dealing with rape statistics and these need to be taken into consideration before we move on to the data, if only because some of these problems provide important caveats for the accuracy of that data.
By far the biggest problem is under-reporting. It is very difficult to obtain high quality data on the prevalence of rape because the vast majority of rapes are never reported to the police, other authorities or even, in many cases, to friends and family.
The 2007/8 British Crime Survey estimated that only 11% of victims of a serious sexual assault reported that assault to the police, while 40% had never told anyone that they had been assaulted. With very few exceptions, the statistics we’ll be dealing with are estimates and, therefore, subject to a margin of error that can, itself, be difficult to estimate – so the very best I can do is try to provide data based on the best currently available estimates.
The second major problem we face is that even official statistics can commonly include significant inaccuracies and ambiguities, depending on the precise source from which they’re taken.
For example, the PDF version of the British Crime Survey gives its estimates of the prevalence of rape and other types of sexual assault as a percentage, but only to 1 decimal place. Using the most recent data for rape from the 2009/10 BCS, this creates a maximum rounding error when calculating the estimated number of rape victims for the year of +/- 16%, which is equivalent to 9,000 victims over the course of that year. Fortunately, for our purposes, exact figures for prevalence – to seven decimal places – can be obtained from the Excel tables that accompany the report, and it’s these I’ve used, where relevant and possible, to obtain the most accurate picture possible.
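To make the scale of that rounding problem concrete, here’s a quick back-of-the-envelope sketch. The 0.305% prevalence (3.05 per 1,000) and 55,117 victims are the figures given later in this article; the implied population of women aged 16-59 (roughly 18.1 million) is derived from them here for illustration rather than taken from the BCS directly.

```python
# The PDF gives prevalence to 1 decimal place only, e.g. "0.3%". The truer
# figure, from the accompanying Excel tables, is 0.305% (3.05 per 1,000).
true_prevalence = 0.00305

# Population implied by this article's own figures -- an assumption, not a
# quoted BCS number: 55,117 victims at 0.305% prevalence.
population = 55_117 / true_prevalence        # ≈ 18.07 million women, 16-59

# Anything from 0.25% to 0.35% rounds to the reported "0.3%", so the
# rounding band has a half-width of 0.05 percentage points.
half_width = 0.0005

relative_error = half_width / true_prevalence   # ≈ 0.164, i.e. roughly ±16%
victims_error = half_width * population         # ≈ 9,000 victims either way
```

Which is exactly why the unrounded Excel figures matter: a twentieth of a percentage point, invisible in the PDF, is worth around 9,000 victims a year.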
So far as the official data is concerned, there are further issues with the quality and/or availability of data covering the prevalence of repeat victimisation and the prevalence of rape where the victim is under 16 years of age – BCS data covers only ages 16-64 for men and 16-59 for women – although I have done my best to allow and account for these issues.
As regards ambiguities, it can often be difficult, when reading official reports, to determine precisely which statistics are being used or cited.
For example…
In March 2010, the Department of Health’s Alberti Taskforce reported that ‘around… 2000 women are raped every week’, citing data taken from the 2008/9 BCS.
By November 2010, and following a change of government, the Department of Health was claiming that ‘A woman is raped every ten minutes’, the equivalent of around 1,000-1,100 rapes a week, citing data taken from the 2009/10 BCS.
Has the prevalence of rape really fallen by 50% in the space of a year, which is what these two otherwise conflicting assertions suggest?
No – in fact the BCS figures for the prevalence of rape actually rose in 2009/10 by 0.017% over the 2008/9 figure. What has changed is the basis of the estimates cited by the Department of Health. The higher Alberti Taskforce figure is based on an estimate of the number of incidents of rape, which takes into account the prevalence of repeat victimisation, whereas the lower figure cited by the present government is based solely on the estimated number of victims and fails to take into account repeat victimisation.
Of the two figures, the Alberti Taskforce figure is the better estimate for all that it was presented somewhat ambiguously – one could be forgiven for thinking that it indicates that just over 100,000 individual women are raped each year when the number of individual victims is significantly lower – while the more recent figure provides a rather gross underestimate of the actual prevalence of rape because it fails to acknowledge the issue of repeat victimisation.
What the current Department of Health campaign should say is ‘a woman is raped every 6 minutes’, before going on to add that just over a third of those women can expect to be raped more than once in the next 12 months if nothing is done to help them.
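For what it’s worth, the ‘every N minutes’ framing is simple arithmetic over the annual estimates. The victim and incident figures used here are the ones set out in the statistics section below:

```python
# Converting annual estimates into "a woman is raped every N minutes".
MINUTES_PER_YEAR = 365 * 24 * 60            # 525,600 minutes

victims_per_year = 55_117    # victim-based BCS estimate (women 16-59)
incidents_per_year = 94_000  # incident-based estimate, inc. repeat victimisation

# Victim-based: ≈ 9.5 minutes, which rounds up to "every ten minutes".
mins_per_victim = MINUTES_PER_YEAR / victims_per_year

# Incident-based: ≈ 5.6 minutes, i.e. "every 6 minutes".
mins_per_incident = MINUTES_PER_YEAR / incidents_per_year
```

Same year, same survey – the fourfold-sounding gap between the two campaign slogans is entirely down to whether you count victims or incidents.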
Thirdly, bad statistics can readily enter the public discourse on rape as a consequence of either poor-quality, biased research or the use of anecdotal ‘evidence’ taken from an unverified, and often unverifiable, source.
An area of the overall rape ‘debate’ where you’ll find both issues at work is the always thorny question of ‘false’ rape allegations, a field in which even the published research ‘evidence’ is often of poor quality and subject to considerable biases. Rumney (2006), a systematic review of the evidence for the prevalence of false rape allegations, includes a tabulated list of published studies ranging, in terms of publication date, from 1968 to 2005. Taken at face value (not a good idea in some cases), the list provides estimates for the scale of false allegations ranging from a low of 1-1.5% (Theilade and Thomsen, 1986) to a high of 90% (Stewart, 1981).
In terms of the kind of specific problems that can easily arise, Kanin (1994) is a good example of the kind of biases that are evident in research which purports to show high rates of false allegations. Kanin ‘studied’ the incidence of false rape allegations in one small urban community between 1978 and 1987, using data taken from police reports, and arrived at a figure for false allegations of 41%. Kanin’s ‘research’ was systematically dismantled in 2007/8 by Dr David Lisak of the University of Massachusetts who, amongst other things, noted that Kanin had made no effort to systematise his own evaluation of the police reports or assess their reliability – he simply took the police’s word as ‘gospel’.
On the other side of the debate, feminist author Susan Brownmiller claimed, in her 1975 book ‘Against Our Will: Men, Women and Rape’, that only 2% of rape allegations are false, citing evidence allegedly provided by female police officers from a New York City rape squad. As estimates go, Brownmiller’s is much closer to the truth than Kanin’s (by a considerable distance, as the current best estimate is 3%) but unfortunately it is also almost certainly anecdotal and, therefore, best avoided as a piece of evidence, even if Rumney (2006) did find a couple of small studies that backed up the 2% claim, despite lacking confidence in their methodology.
And last, but by no means least, it must be remembered that data collection and other statistical methods may change – and hopefully improve – over time, affecting both the accuracy of statistics and the extent to which it’s reasonable to allow for a margin of error.
If you look at official estimates of the prevalence of rape from ten years ago, when attempts to assess the scale of this particular problem were very much in their infancy, then you’ll find that the statistical evidence given was subject to very wide margins of error. In one report, dating to 2001/2, the annual number of rape victims was estimated at 47,000 with a margin of error of +/- 20,000, i.e. the actual number could have been anything from 27,000 to 67,000, which is tantamount to ‘I’m fucked if I know’.
Things have improved considerably over the last ten years and, based on the research done for this article, I would estimate that the margin of error for unrounded BCS data on the prevalence of rape and other sexual offences is likely to be no more than 2% below the given prevalence figure but perhaps as much as 10% above it, when one takes into account the BCS’s current survey response rates and the differentials in those response rates between key ethnic groups.
Anything over a 10% hike in the BCS figure on the assumption of under-reporting would, today, need to be backed up with additional evidence in order to be considered reliable.
Hopefully, by now, you’ve got the idea that getting hold of accurate and reliable rape statistics is a complex business, one that is both fraught with difficulties and burdened with important caveats which you would be unwise to ignore.
As exercises go, my own trawl through the available statistical evidence reminded me of a comment I received from Ben Goldacre after I’d written an article on cancer statistics, to the effect that there are no right answers, only different degrees of wrong.
This is, I think, certainly also true of rape statistics, which is why the best I can say of the data that follows is that it may not be right, but I have done my level best to keep the amount of wrong down to a minimum, given the data I have to work with.
The Statistics
Annual Number of Female Rape Victims (age 16-59) – 55,117, giving an annual prevalence of 3.05 victims per 1000 women.
Prevalence/Number of Victims based on 6 yr average taken from British Crime Survey (2004/5 to 2009/10). Margin of Error on survey data approximately 1.2%. The important caveat here is the variation in response rates between different ethnic groups (93% white, 86% Black African/African-Caribbean, 78% Asian), which suggests a higher risk of under-reporting to the BCS among BME respondents. As a ballpark figure, estimates in the range of 54,000 to 60,000 victims, annually, can be stood up when under-reporting is taken into account.
Annual Number of Rapes (women 16-59) – c. 94,000
Data from a 2002 Home Office report indicates an average of 1.7 rapes per victim when the prevalence of repeat victimisation is taken into account, although the accuracy of this estimate is somewhat more uncertain. As a ballpark figure, 90,000-105,000 can be stood up on the available evidence provided that the scale and extent of repeat victimisation is referenced.
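As a sketch of that calculation, using the victim figure given above and the Home Office multiplier:

```python
victims = 55_117          # annual victims (women 16-59), BCS 6 yr average
rapes_per_victim = 1.7    # 2002 Home Office repeat-victimisation multiplier

incidents = victims * rapes_per_victim   # ≈ 93,700, rounded here to c. 94,000
```

The headline incident figure is therefore only as solid as the 1.7 multiplier, which is the weakest link in the chain, hence the wide 90,000-105,000 ballpark.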
Annual Number of Women (16-59) subject to repeat victimisation (rape) – c. 20,000 (c. 1 in 3 rape victims), giving an annual prevalence of 1 victim per 1000 women.
Figure extrapolated from BCS data using a 6 yr average on repeat/multiple victimisation from 2007/8, cross-referenced with data on partner violence trends from 2001-2008. Ballpark figures up to 25,000 would not be unreasonable.
Annual Number of Rapes attributable to repeat victimisation (women 16-59) – c. 59,000
Figure extrapolated from BCS data using 6 yr averages. On average a woman subjected to sexual violence as part of a pattern of repeat victimisation over the course of a year will be raped on 3 occasions, be subjected to 2 further attempted rapes resulting in a serious sexual assault and 3 more attempted serious sexual assaults.
Annual Number of Rapes reported to Police (women 16-59) – 8,487
6yr average from police recorded crime data. Figure is around one third lower than widely reported figures for number of rapes reported to police due to exclusion of data for women/girls under 16.
Annual reporting rate/percentage for rape (women 16-59, by number of victims) – 3 out of 20 (c 15%)
6yr average using BCS and Police Recorded Crime Data
Annual reporting rate/percentage for rape (women 16-59, by number of rape inc. repeat victimisation) – 1 in 11 (c. 9%)
Based on estimate of annual number of rapes (given above) and 6yr average for Police Recorded Crime data.
Annual Number of Female Rape Victims (under 16) – c. 10,500
The Home Office estimates that 16% of rape victims are under 16 – this estimate is extrapolated from the adult data using that figure, as there is no BCS data on under 16s.
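The extrapolation works like this: if 16% of all victims are under 16, then the 55,117 adult victims represent the remaining 84% of the total, so:

```python
adult_victims = 55_117   # annual victims aged 16-59, from the BCS figures above

# If under-16s are 16% of all victims, adults are 84%, so the under-16
# figure is the adult figure scaled by 16/84.
under_16 = adult_victims * 0.16 / 0.84   # ≈ 10,500
```

Everything here rests on that single Home Office percentage, which is why the caveats in the note below matter.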
Note – As regards the prevalence of rape for women under 16, it was not possible to arrive at an estimate of the annual number of rapes due to the limited amount of data available on under-reporting and repeat victimisation, and to uncertainties over the scope of the Home Office’s estimate of the percentage of under-age victims and its relationship to the data on child sexual abuse and paedophilia.
Annual Number of Rapes reported to Police (women under 16) – c. 4300
Annual reporting rate/percentage for rape (women under 16, by number of victims) – 41 in 100 (c. 41%)
Figures based on the extrapolated figures above and a 6 yr average from Police Recorded Crime data. As to how accurate these estimates may be when put into the wider context of child sexual abuse, your guess is as good as mine. Sorry.
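For transparency, the under-16 reporting rate is just the two extrapolated figures above divided out:

```python
reported_under_16 = 4_300     # c. figure, 6 yr police recorded crime average
estimated_victims = 10_500    # extrapolated from the 16% Home Office figure

rate = reported_under_16 / estimated_victims   # ≈ 0.41, i.e. c. 41 in 100
```

Because both numerator and denominator are themselves estimates, this rate inherits all of their uncertainty, so treat it as indicative at best.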
Note on usage of statistics.
There is a significant body of evidence which demonstrates that the optimal method of citing statistics for clarity and ease of understanding is the use of natural frequencies (e.g. 1 in 100, 7 out of 50, etc.).
These have been given wherever they could be reliably calculated.