
BRIT. J. CRIMINOL. VOL. 37 NO. 4 AUTUMN 1997

QUESTIONING THE MEASUREMENT OF THE 'FEAR OF CRIME'


Findings from a Major Methodological Study
STEPHEN FARRALL, JON BANNISTER, JASON DITTON and ELIZABETH GILCHRIST*

Research upon the fear of crime has grown substantially in recent years. From its very inception, this field has relied almost exclusively upon quantitative surveys, which have suggested that the fear of crime is a prevalent social problem. However, doubts about the nature of the instruments used to investigate this phenomenon have cumulatively raised the possibility that the fear of crime has been significantly misrepresented. Dealing with the epistemological, conceptual, operational and technical critiques of quantitative surveys in general and of fear of crime surveys in particular, this article suggests that our understanding of the fear of crime is a product of the way it has been researched rather than the way it is. As the aim of the research project under which these data were collected was to develop and design new quantitative questions, the article ends with some possible solutions to the epistemological, conceptual, operational and technical problems discussed, which may improve future quantitative research in this field.

The validity of a measuring instrument may be defined as the extent to which differences in scores on it reflect the true differences among individuals . . . rather than constant or random error. (Selltiz et al. 1976: 169, quoted in Brewer and Hunter 1989: 129)

Recent years have witnessed an increased interest in the fear of crime from both academics and policy makers. A plethora of studies, including several sweeps of the British Crime Survey (see, inter alia, Hough and Mayhew 1983; Chambers and Tombs 1984; Hough and Mayhew 1985; Mayhew et al. 1989; Maxfield 1987; Skogan 1990; Payne 1992; Kinsey and Anderson 1992), have concluded that the fear of crime impinges upon the well-being of a large proportion of the population.
For example, Chambers and Tombs (1984: 29), reviewing the 1982 British Crime Survey Scotland, found that 'more than half of the respondents (58 per cent) said that at some time in the past they had been concerned about the possibility of being a victim of crime'. More recently, Hough (1995: 25) found that, when asked how safe they felt when walking alone in their area after dark, some 36 per cent of those surveyed said that they felt 'a bit unsafe' or 'very unsafe'. It appears that the fear of crime is a social phenomenon of striking dimensions. However, several commentators have raised doubts surrounding the validity of the instruments used to generate these findings (see, inter alia, Bernard 1992; Bowling 1993; Fattah 1993; Schneider 1981; Skogan 1981; and Zauberman 1985). A range of methodological problems has been identified which cumulatively raise the possibility that the incidence of the fear of crime has been significantly misrepresented. Building upon these methodological insights, this article employs both quantitative and qualitative data sets to show that the reported incidence of the fear of crime is partly dependent upon the nature of the measurement instrument rather than a true reflection of 'social reality'. This article is constructed in the following fashion. In the first section, we review methodological problems associated with the quantitative investigation of the fear of crime. These observations are drawn from critiques of quantitative surveys in general and of fear of crime surveys as they currently exist. Building upon these observations, in the second section we review the arguments and evidence that suggest that methodological triangulation is an appropriate response to the identified problems. We shall utilize this technique (methodological triangulation), being sensitive to epistemological issues, in order to identify the methodological problems and to assess the extent to which the reported incidence of the fear of crime is dependent upon the instruments currently used to measure it. In the third section, we outline the hypothetical methodological problems that are encountered with a quantitative crime survey, and discuss at length those actually observed in our data set. We conclude that the fear of crime is over-counted, and end with suggestions for future crime and fear of crime surveys in the light of our data. We start, however, with an outline of the research methodology.

* Stephen Farrall, Research Officer, Centre for Criminological Research, Oxford University; Jon Bannister, Lecturer in Social Policy, University of Glasgow; Jason Ditton, Professor of Criminology, Faculty of Law, Sheffield University, and Director, Scottish Centre for Criminology; and Elizabeth Gilchrist, Lecturer, School of Psychology, University of Birmingham. The research reported in this paper was supported by an ESRC grant (no. L210 25 2007) under the council's Crime and Social Order research programme. Versions of this paper were presented at the Faculty of Law, Sheffield University, and at the British Criminology Conference, Loughborough University, July 1995. The authors are grateful to the Editor, two anonymous reviewers and Tony Jefferson for their invaluable comments on an earlier draft of this paper. We also extend our appreciation to those who made comments on the paper while attending the above seminars and conferences.

Downloaded from http://bjc.oxfordjournals.org/ by guest on February 29, 2012


The selection of areas

Four sites for sampling were selected along two dichotomies (outlying/inner city and poor/affluent). The two inner city areas chosen both fall within two miles of Glasgow city centre (Scotland). The outlying areas were both approximately five miles out from the centre of the city. Both of the areas selected as 'poor' have been identified by Strathclyde Regional Council as Priority One Areas (the most deprived) under their 'Social Strategy for the Nineties'. The Regional Council describes these types of areas as '. . . large areas of severe multiple deprivation . . .' (SRC 1994: 2). Unemployment rates for these two areas stood at (respectively) 39 per cent and 41 per cent; overcrowded households at 6 per cent and 8 per cent; non-elderly illnesses at 15 per cent and 21 per cent; long-term illnesses at 19 per cent and 27 per cent; and households lacking amenities at 0.2 per cent and 0.3 per cent (all statistics from SRC 1994). Data for the two areas selected as 'affluent' are generally harder to find, and the data reported here refer to only one of these areas. The unemployment rate for this area was 5 per cent, with overcrowding at 0.5 per cent, non-elderly illnesses at 3 per cent and long-term illnesses at 8 per cent (Muir 1994). Data for the other 'affluent' area are even harder to find, as it is neither a priority area nor does it form a distinct geographical area.


The selection of individuals

The purpose of the first sweep of interviews (carried out from October to November 1994) was to identify individuals in each area in terms of their fear of crime and perceived risk of victimization. Hence these interviews were very short. Starting points for sampling were chosen and researchers used the 'random walk' method, calling at every fifth house. At each house, the first person over the age of 16 years who answered the door was interviewed, except where some key groups were hard to find (such as young people). In one area, the café in a local community centre was also used to sample from. The times of day at which we were actively recruiting in each area were varied so as to facilitate access to key groups which may otherwise be hard to find (such as those working or at school). Approximately 40 people were interviewed in each area (N=167) and were asked if they would be prepared to take part in a longer interview if selected. The response rate for agreeing to be recontacted for the longer interview was 90 per cent. Names, addresses and telephone numbers were collected for the purposes of recontacting. Each individual interviewed at the first stage was classified according to their fear and risk ratings. Four groups were produced: high fear/low risk; low fear/low risk; high fear/high risk; and low fear/high risk. In each area a total of 16 people was re-interviewed (four from each fear/risk group). Within each fear/risk group there was an equal gender split and, where possible, a range of ages. Thus the qualitative data set contains 64 interviews.
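The two-way classification just described can be sketched as a short function. This is purely an illustrative reconstruction: the 1-5 ratings and the cut-off of 3 are assumptions made for the sketch, not the scales or thresholds actually used in the study.

```python
def classify(fear, risk, threshold=3):
    """Assign a respondent to one of the four fear/risk quadrants.

    `fear` and `risk` are illustrative 1-5 ratings; the cut-off of 3
    is an assumed threshold, not the one used in the original study.
    """
    fear_level = "high" if fear >= threshold else "low"
    risk_level = "high" if risk >= threshold else "low"
    return f"{fear_level} fear/{risk_level} risk"
```

Sampling four respondents per quadrant in each of the four areas, as described above, then yields the 4 x 4 x 4 = 64 qualitative interviews.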


Some Methodological Concerns

In this section we outline briefly some methodological concerns that have been levelled at quantitative surveys designed to explore the fear of crime. The criticisms raised address issues of epistemology (e.g., the difficulties encountered when attempting to turn social processes into quantifiable events), conceptualization (e.g., what is meant by the term the 'fear of crime'), operationalization (e.g., the design and wording of survey questions) and technique (e.g., factors governing the quality of data generated). While specifically associated with crime surveys, some of these remarks resonate with more general debates within the methodological literature.
Epistemological and conceptual concerns

At the epistemological level, Bryman (1984: 78) notes that surveys have been criticized for their tendency to 'view events from the outside', and for imposing empirical concerns 'upon social reality with little reference to the meaning of the observations to the subject of investigation'. Similarly, Bowling (1993: 241-43) accuses crime survey methodology of attempting to convert a social process into a series of quantifiable 'moments' which do not adequately reflect the experiences or feelings of those interviewed. Surveys are thus described as 'static' and as reducing experience to a 'decontextualized snapshot' (1993: 232). This means that ongoing experiences, or variation in the range and strength of emotions experienced, are rarely adequately captured by the survey tool.

Conceptual concerns about the fear of crime cannot be wholly divorced from the operational concerns (discussed below). These concerns include defining what fear is, its nature and its occurrence in individuals' everyday lives. Many of these conceptual issues have not been successfully resolved as yet. As Ferraro and LaGrange note (1987: 71) 'the phrase "fear of crime" has acquired so many divergent meanings that its current utility is negligible'.
Operational concerns

A further set of methodological problems concerns the operationalization of concepts. Ferraro and LaGrange (1987) make a distinction between 'formless' and 'concrete' fears. 'Concrete' fears relate to particular offences (e.g. burglaries). 'Formless' fears do not refer to any particular crime, but instead ask, for example, more generally 'How safe do you feel walking around alone in this area after dark?'. Even where specific mention is made of a particular crime (e.g. vandalism), the operationalization is still incomplete because temporal, spatial and social contexts are not addressed. 'Where' the respondent is, 'when' the respondent is there, and 'whom' they are with are rarely addressed. Clearly, with time, space and social context playing such key roles in the realization of specific fears of crime (Bannister 1993), to leave aside these issues is seriously to damage our understanding of the fear of crime. The operationalization of fear of crime questions in quantitative tools, which require easy-to-follow, short questions with precise predefined reference points, and hence do not take account of temporal, spatial and social contexts, has seriously limited both the conceptual development of this field and the quality of the data generated. One of the most common tools in this field is the closed question using a Likert scale to measure the extent of the fear of crime (e.g. see any one of the questions in the quantitative interview tool in the Appendix). With such a tool it is hard for respondents to make truly qualitative distinctions between their feelings about particular crimes at particular times and places. Respondents may well simply report generalized levels of fear of crime, which may not adequately represent their actual emotions on any one occasion. Even where just one crime is concerned, a respondent's feelings about that crime may vary greatly with numerous other variables (e.g. social, geographical and temporal dimensions), none of which is addressed. In short, a simplistic, numerical answer to a general closed question cannot hope to represent the breadth of experience and feelings about crime experienced by most people.
Technical concerns


Several technical problems have been identified with the quantitative exploration of the fear of crime. Fattah (1993: 53) and Bernard (1992: 66) both report differences in results when survey questions are asked in 'closed' as opposed to 'open' forms. Closed questions, it is argued, sensitize and direct respondents toward the set of answers offered, and the empirical evidence supports this. A Harris poll (Harris and Associates 1975), for example, employed a 'closed' question to evaluate whether an elderly population was concerned about crime. They found that 23 per cent of the sample considered crime to be a serious personal problem. Yin (1982), employing an 'open' question, found that only 1 per cent of a comparable elderly population considered crime to be a serious personal problem (reported in Fattah 1993: 53-4). It would appear that attempts to measure the extent of the fear of crime are grossly sensitive to the nature of the question asked. Closed questions, the most common tool in this field, greatly overestimate the incidence of the fear of crime. Closed questions are the staple of many crime surveys. Considering just these two studies (Harris and Associates and Yin), one can conclude that differences in the nature of the dependent variable (in this case open or closed) may result in differences in fear levels which could be as great as a factor of 23. It almost seems that any fear of crime survey will tell us little about the fear of crime, but a lot about the nature of the questions used. Both Fattah (1993: 52) and Hale (1993: 9, 1996) question the assumption that anybody is actually in a position to be able to make an accurate self-assessment of their own crime fear level. In a similar vein, Fattah (1993: 58), Schneider (1981: 831), Zauberman (1985: 35) and Block and Block (1984: 144), among others, have questioned the ability of respondents to recall accurately their experiences of crime and victimization. Yet accurate recordings of respondents' fear levels and recall of past victimizations are central to this field of study. We are, after all, talking about the dependent variable and a major independent variable. Policy assumes this to be unproblematic, although research shows it to be highly problematic. This is, therefore, a major problem.
Taken together, these criticisms suggest that crime surveys ignore the meaning of events for respondents; turn 'processes' into 'events'; neglect that the fear of crime can be a multi-faceted phenomenon; poorly conceptualize the fear of crime; ignore important contextual variables (such as time and space); greatly influence the reported incidence of the fear of crime and rely too heavily on respondents' recall. From our review of the literature it is evident that the fear of crime is only partially understood at best. This has led to a 'vicious circle' where poor conceptualization of the fear of crime (which has portrayed this emotion as static and unidimensional) has created methodological insensitivity, which has perpetuated poor operationalization. This poor operationalization is beset by technical inaccuracies which only serve ultimately to muddy the conceptual waters. Where technical innovations have been made (e.g. referring directly to a type of crime rather than just crime in general or to general feelings of unsafety when at home alone at night), these have not improved matters greatly. Overall, this situation has arisen from a limited understanding of the phenomena at hand and from the use of a blunt instrument. These observations are particularly significant because, as Hale (1993: 5, 1996) and Fattah and Sacco (1989: 210-11) note, the survey is the dominant instrument in research on the fear of crime.


Assessing Validity through Methodological Triangulation

Given the predominantly quantitative nature of research in this field, some researchers have made calls for methodological triangulation in an attempt to rely less upon the survey tool (in particular, Bowling 1993: 246). However, this questions the nature and validity of the comparative results generated by quantitative and qualitative research

methodologies, and the extent to which they are compatible on epistemological grounds. Bryman (1984: 77) notes that quantitative and qualitative methodologies have different views about what passes as warrantable knowledge. Quantitative surveys exhibit a 'preoccupation with operational definitions, objectivity, replicability, causality, and the like' (Bryman 1984: 77). Unstructured interviews, and qualitative research in general, are committed to 'seeing the social world from the point of view of the actor' (Bryman 1984: 77). Hence, the two epistemologies, broadly positivism and naturalism, produce quite different forms of knowledge, one measuring, and the other interpreting, reality. Assessing compatibility, when undertaken, rarely rises above the level of assessing whether or not both yield similar research findings. One of the most thorough reports of such an undertaking is by Belson (1986), who attempted to assess the validity of quantitative tools by re-interviewing qualitatively respondents who had completed a quantitative interview. In one instance, respondents were asked about their chocolate consumption during the previous week using a quantitative tool, and were then immediately re-interviewed in depth by a second researcher. Belson (1986: 64) reports that: '. . . the number of bars, etc. claimed in the first interview was about a fifth larger than the total number finally agreed in the intensive interview (which is interpreted as being nearer the truth).' Belson (1986: 226), reporting another study (aimed at refining the use of semantic differential scales), states that: 'About half the first [quantitative] interview ratings (51 per cent) did not stand up to the intensive [qualitative] challenge to them.' In a similar vein, Fielding and Fielding (1986: 74-8) also report discovering different findings as the result of using different methodologies from research on police attitudes to recruiting black police officers.
They show (1986: 76-8) that police officers gave quite different answers at the quantitative stage of the research from those given at the qualitative interview stage. In short, the meaning of (and therefore the response to) a particular research question is altered by the nature of the interview being undertaken. Research directly relating to the fear of crime and methodological considerations is presented by Maguire and Corbett (1987: 42-4). They show that self-reported ratings of the impact of offences are affected by the nature of the interview undertaken. They conclude that in-depth interviews produce three times as many 'badly affected' respondents as do survey instruments. Similar research findings are reported by King (1992), who shows that men, traditionally not seen as affected by crime in quantitative surveys, report assault as a traumatic event when interviewed qualitatively. Thus, from Belson's validity checks, the research reported by Fielding and Fielding, that reported by Maguire and Corbett and the evidence from King's research on male victims of assault, it is clear that surveys and in-depth interviews do not produce compatible results. Indeed, it seems that a quantitative tool produces results at a quite general level, while a qualitative tool produces responses which are more context-sensitive and specific in their focus. The comments above pose a serious problem for those working in the fear of crime field. To develop improved quantitative research methodologies, and hence start to answer some of the criticisms from Hale, Bowling, Fattah and Sacco et al., researchers working in this field must do more than just triangulate their data by incorporating qualitative techniques alongside quantitative research. In short, before triangulation is undertaken, researchers must first understand where (and why) the discrepancies between quantitative and qualitative data lie, and then develop ways of reconciling these differences. Identifying why discrepancies between quantitative and qualitative data sets exist, and the extent to which methodological triangulation may help to assess the validity of measuring tools and aid in efforts to improve them, is the task of the rest of this article.
Methodological triangulation: our position

At an epistemological level, we are concerned with the validity of the knowledge of the incidence of the fear of crime generated through survey research. At the technical, conceptual and operational levels, we shall employ qualitative interview data in order to evaluate the validity of quantitative research on the incidence of the fear of crime. Our epistemological stance is therefore primarily positivistic, in that we are concerned with the empirical measurement of the incidence of the fear of crime (rather than with how the fear of crime is socially constructed). We are concerned with the operationalization of concepts and other technical issues, such as (for example) recall, and therefore will take a critical stance not as to whether or not this knowledge is warrantable, but as to how valid this approach is in a methodological sense. Our data are the answers given by 64 respondents about their fear of crime at interviews of both a quantitative and a qualitative nature (the instruments are described in full in the Appendix). The quantitative interviews were completed before the qualitative interviews.1 The discovery that respondents gave different answers to very similar (or identical) questions, depending upon the nature of the interview being undertaken, was not made until after the interviewing was completed. As such, this observation was not actively sought by the researchers, but occurred 'naturally' during the individual interviews themselves. We are writing about this because we were not expecting it to occur, rather than because we were.


A Typology of Methodological Problems: Hypothetical and Observed

From the literature, it is clear that quantitative and qualitative research methodologies do not produce the same 'findings'. Where this has occurred in our data set we have adopted the term 'mismatch'. In this section we offer a typology of possible explanations for these mismatches, noting for each the source of the mismatch (i.e. epistemological, conceptual, operational or technical). We presume that observed mismatches are the direct result of the different methodologies employed, rather than changes in respondent experiences or attitudes between interviews. On average, the time that elapsed between the two interviews with each respondent was about one month. For the purposes of this paper, the length of interval between the quantitative and qualitative interviews is unimportant, as, within quantitative epistemology, a one-hour interval is the same as a one-year interval. Further, each respondent was asked, on both occasions, to report their victimizations and fear of crime, for the same prior time period. Some of these problems are immanent in research into other areas of social life,2 to which the following explanations will be pertinent.

1 This allows us to make a substantial case about quantitative-qualitative mismatches, but because in all of the interviews the quantitative data collection preceded the qualitative data collection, we cannot address mismatches that may have been generated had the interview order been reversed. This would, of course, represent the next step in the investigation of the issues raised here.
The different epistemological focus of the interviews

Quantitative instruments appear to measure feelings on a very general level, while qualitative interviews allow for these feelings to be expressed more discursively. Hence social and geographical contexts may be invoked in qualitative interviews which may produce mismatching with quantitative data in certain cases. For example, during one quantitative interview the respondent said that he was particularly worried about being robbed or assaulted (4 on a 1-5 scale where 1 represented not being worried at all, ever, and 5 meant worrying a lot, all the time). In the later qualitative interview, when asked if he worried about robbery all the time, he said 'No, it's when I go out at night and see a group of people, that's when it starts coming in to your head, but if you're walking around during the day you don't think about it really at all, or when you're sitting at home you don't think "Oh, if I go out tonight, what'll happen?", you know'. In this (qualitative) sense he was a 3 at night, a 5 when he saw a group of people, and a 1 at home during the day. This was found to be the second most common type of mismatch (19 per cent, n=22) and contained a high number of 'serious' mismatches (see below for issues surrounding the rating of mismatch seriousness).
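The gap the example exposes — a single global score standing in for context-specific feelings — can be made concrete with a small sketch. The context labels and ratings below are taken from the respondent's account above; the aggregation rule (taking the worst case) is only one assumed reading of how a global score might arise.

```python
# A single global rating, as elicited by a closed survey question (1-5 scale)
global_worry = 4

# The same respondent's worry, disaggregated by context as in the
# qualitative interview (ratings from the example above)
contextual_worry = {
    "out at night, group of people nearby": 5,
    "out at night, otherwise": 3,
    "out during the day": 1,
    "at home": 1,
}

# One assumed reading of the global score: the worst-case context
worst_case = max(contextual_worry.values())
```

The point is that no single scalar — worst case, average or otherwise — preserves the variation that the qualitative account reveals.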
The measurement of formless or concrete fears


This is an operational problem. 'Formless' (as opposed to 'concrete') fears may be being measured in some instances (Ferraro and LaGrange 1987). This particularly applies to questions such as 'How safe do you feel walking around alone in your neighbourhood after dark?' and 'How much do you worry about being a victim of any crime?'. These produce different results to 'concrete' questions. For example, after the section of the quantitative interview concerned with concrete fears, one respondent was rated as expressing a low level of fear. Later she was (quantitatively) asked about how much she worried about a range of urban or life worries (including 'crime'). At this point she described herself as worrying 'a lot' about crime, thereby contradicting her earlier reply. So, she was a 1 when asked quantitatively about particular crimes, but a 5 when asked quantitatively about 'crime' in general. This type was identified as being the cause of 11 per cent (n=13) of mismatches.
The nature of open/closed questions

Recall that open/closed questions are alleged to produce different results (Yin 1982: 242; Fattah 1993: 53; Bernard 1992: 66). Hitherto this explanation has been offered to explain differences in results between one quantitative data set and another. For us, this explanation has implications for the comparison of our quantitative and qualitative data sets. For example, when one respondent was asked how much he worried about being robbed or assaulted at the quantitative interview, he replied by placing himself in the middle of the 1-5 scale. At the qualitative interview, when asked again about how much he worried about this type of offence, he said 'No, no, 'cause as I say, since I got done the first time [during the late 1970s], I'm very careful'. So he was a 5, is now a 1, and personally averages himself as a 3. This appears to be a particularly important type of mismatch. It accounts for a large number of them (n=46, 40 per cent) and contains a very high number of 'catastrophic' and 'serious' mismatches. Other possible explanations point to the respondent as being a source of mismatch.

2 See, for example, the McKeganey-Power debate in Addiction: McKeganey (1995) and Power (1996).
A genuine change in fear level

An epistemological problem. Given that the data were collected for each respondent during two time-separated interviews, it could be that the changes observed are the result of a genuine change in fear levels. For example, one respondent who had recently suffered a vicious assault reported, when first interviewed, being very fearful about this type of offence. At the qualitative interview he said that these feelings were due to the recency of the event and that he now felt less worried. This type accounted for 8 per cent (n=9) of the mismatches observed.
The meaning of the word 'worry' is variously interpreted by the respondent

This is a conceptual problem. During the quantitative interview, each respondent was probably aware that the word 'worry' in the question could be used as a surrogate for other words or feelings, and may have given a response for one or more of these other feelings, rather than for 'worry'. In other words, 'worry' means different things to different people. At the qualitative interview this 'worry' was sometimes redefined by the respondent, hence creating a mismatch. The following extract comes from an interview with one respondent who had appeared quite worried at the quantitative interview, but who gave answers which directly countered this interpretation during the later qualitative interview. When asked to expand in his own words about how he felt about crime he said 'I think I have an awareness of it and I don't think I invite it but at the same time I do not worry about it'. This type was found to account for 14 per cent (n=16) of mismatches and contained a large number of 'serious' mismatches.
The interpretation of the question by the respondent

Bryman (1984: 81) recalls Herbert Gans's observation that surveys can 'only report what people say they do and feel and not what a researcher has seen them say, do and feel' (emphasis added). In some cases, questions posed as being about actualities may be interpreted as being hypothetical by the respondent. This applies particularly to quantitative interviews because, due to their very nature, the interviewer is unable to probe discursively whether or not the respondent actually does X or just thinks that under certain conditions they would do X. One respondent clearly interpreted the following question during the quantitative interview as being hypothetical. When asked how much she worried about somebody 'vandalizing her home or something outside it' she placed herself at 5, the most worried end of the scale. Yet at the qualitative interview she said, when asked about it again, 'It would worry me if it happened'. Here, in reality, is a 1 who might be a 5. This type accounted for 4 per cent (n=4) of the mismatches observed.
Memory decay

This technical problem refers to forgetting victimizations, or when they occurred, at the time of the interview. The following example of memory decay refers to forgetting an incident of vandalism. At the qualitative interview the respondent, when reminded that he had earlier claimed to have had two experiences of this sort, said: 'I can't remember the other time, I don't know why I said two.' A single victimization masquerading innocently as a multiple one (or vice versa). This type accounted for 4 per cent (n=4) of the mismatches observed.
'Careless' replies

A further technical problem. Some respondents may answer questions without a great deal of attention in order to get the interview over and done with (Belson 1986: 342). This also includes respondents giving an opinion rather than their opinion (p. 233). In this data set there were no mismatches which could be explained this way, but both sources are obviously difficult to trace.
Concealment by the respondent

This refers to a general unwillingness on the part of the respondent to admit to being a victim or to being fearful of crime. No examples of this were discovered. However, this is not unexpected, and would be untraceable unless respondents actually say 'I didn't tell you that earlier because men don't cry' (for example). Finally, a number of potential problems exist with the interviewers themselves. This includes the impact of the interviewer's age, gender or class upon reporting of particularly sensitive topics, or unintentional changes in question wording. Again, no instances of this were discovered in the data set and again tracing them would be difficult.
Discussion

In this section, we will present analysis of the data as it relates to the typology above. We shall show that mismatches identified cannot be explained by reference to any particular demographic variable.3 We shall focus in greater depth upon the nature of
3 The mismatch data have been analysed controlling for gender, age, area (which can be seen as standing for socio-economic group within the confines of this data set), self-reported fear, and risk of victimization scaling. None of these appear to influence the incidence of mismatching (see Tables 6-8). In other words, mismatches would appear to be evenly spread across all of these groups and cannot be explained by reference to any particular demographic variable.

the mismatches (in terms of 'where' they are located and 'what' was being discussed when they were produced). We will then discuss the level of severity and the extent of mismatch. Table 1 shows the overall numbers of mismatches. It can be seen that a large number of them occur as 'one offs' (n=20) or with just one other mismatch (n= 14). Thus, some 34 of the 64 cases exhibit a low frequency of mismatch. However, some 15 cases exhibit three or more mismatches. Taken together, the whole data set contains within it 114 instances of mismatch.
The nature of the 'mismatches'

Given that the research had a number of questions on the fear of crime in both quantitative and qualitative interviews, the mismatches that occurred could have done so at any one of a number of points, either within the quantitative or the qualitative interview with a respondent, or between the quantitative and qualitative interviews. Table 2 indicates that of the 114 mismatches identified, the vast majority (n=98, 86 per cent) were located between the quantitative and qualitative interviews. For example, one respondent when asked 'How safe do you feel walking alone in this area after dark?' chose the answer 'Very unsafe'. However, at the qualitative interview she said 'I know exactly the areas where I could go for a walk and know reasonably well I'd be reasonably safe, not one hundred per cent, but reasonably safe'. A smaller number of mismatches (n=13) occurred internally during the quantitative interview. For example, one respondent was rated as being high on an aggregate worry
TABLE 1 The overall incidence of mismatches

Number of mismatches within a case    Number of cases    Total mismatches
0                                                  15                   0
1                                                  20                  20
2                                                  14                  28
3                                                   4                  12
4                                                   4                  16
5                                                   5                  25
6                                                   1                   6
7                                                   1                   7
Total                                              64                 114
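The counts quoted in the text can be checked against the Table 1 distribution with a short sketch (all figures are taken from the table; nothing here is new data):

```python
# Distribution of mismatches per case, as reported in Table 1:
# {number of mismatches within a case: number of cases}
distribution = {0: 15, 1: 20, 2: 14, 3: 4, 4: 4, 5: 5, 6: 1, 7: 1}

total_cases = sum(distribution.values())
total_mismatches = sum(k * v for k, v in distribution.items())
low_frequency_cases = distribution[1] + distribution[2]   # one or two mismatches
three_or_more = sum(v for k, v in distribution.items() if k >= 3)

print(total_cases)          # 64 cases in the study
print(total_mismatches)     # 114 instances of mismatch
print(low_frequency_cases)  # 34 cases with a low frequency of mismatch
print(three_or_more)        # 15 cases with three or more mismatches
```

The sums agree with the figures given in the text: 34 of the 64 cases exhibit only one or two mismatches, 15 exhibit three or more, and the whole data set contains 114 instances of mismatch.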

TABLE 2 The 'location' of the mismatches

Location of mismatch                             Total
Quantitative and qualitative interviews             98
Internal to the quantitative interviews             13
Internal to the qualitative interviews               2
New worry introduced at qualitative interview        1
Total                                              114

level, but answered 'very little' when asked 'How much do you worry about becoming a victim of crime?' during the same interview. Two further cases of internal mismatches involved respondents contradicting themselves during the qualitative interview. Quite clearly the vast majority of the mismatches occur between the quantitative interview and the qualitative interview. This indicates, together with the fact that most mismatches are complex qualifications of earlier precise quantifications, that it is methodology, rather than respondent change, which is the source of most mismatches.4 This is confirmed when we consider the subject of mismatches. Looking at Table 3, there is clear evidence that the 'worry about crimes' category is by far the largest, which is disproportionate as both interviews were concerned equally (at dependent variable level) with crime and with worry about crime. Examples of worry about crime mismatches are readily found. For example, one respondent placed himself at the mid-point on the five-point scale concerning worry about vandalism during the quantitative interview. However at the qualitative interview he claimed that he was 'not worried about that at all'. When the interviewer (who had conducted both interviews) probed him further on this discrepancy, the respondent replied 'I think I might have thought you meant would I be worried if somebody had done that'. This clearly draws out the interpretative process which is performed by the respondent during the quantitative interview, and shows how questions based on actual worry levels (as designed and worded) are interpreted as being hypothetical by some respondents. A surprising number of previous victimizations were not recalled, despite the obvious salience of the event for the respondent. 
One respondent recalled at the qualitative interview a particularly vicious burglary during which the burglars 'took a lot of our stuff and wrecked the place, ripped the mattress and ripped the settee and urinated and took stuff out of the cupboard and smashed it'. This incident was not reported during the earlier quantitative interview, despite the researcher asking at that time directly about the number of times the respondent had been burgled. The non-'fear of crime' mismatches refer in the main to a question in the quantitative interview which asked respondents to indicate how much they worried about a range of urban or life worries. The existence of such mismatches confirms that such problems can exist in other areas of social research.

TABLE 3 The 'reference' of the mismatches

Reference of mismatch            Total
Worry about crimes                  89
Risk of victimization               14
Recall of past victimizations        6
Non-'fear of crime'                  5
Total                              114

4 And, as such, legitimates our earlier claim that 'the length of interval between the quantitative and qualitative interviews is unimportant, as a one-hour interval is epistemologically the same as a one-year interval'.

Levels of 'mismatch' seriousness

As mentioned above, not all mismatches were rated the same in terms of their severity. For example, a respondent who claims not to worry 'at all' about burglary at the quantitative interview but at the qualitative interview indicates that they worry 'occasionally', is vastly different to a respondent who makes the same claim at the quantitative interview but during the qualitative interview indicates that they worry 'all the time' and 'cannot go out leaving the home unattended' for fear of burglary. Bearing this, and other possibilities in mind, a three-point scale of mismatch seriousness was developed (ranging from 'catastrophic' to 'mild') in order to rate these differences.5 The 'catastrophic' level includes: responses which were direct contradictions, such as reporting very high levels of fear during quantitative interviews, but low levels during the qualitative interview (or vice versa); illogical statements (such as worrying 'a lot' about having one's car stolen but not actually having a car); inabilities to make an estimation of fear or risk at one interview but not at the other and memory decay/recall mismatches of significant victimizations. The 'serious' level includes: qualifications and expansions of previous answers (such as new words used to describe fear or worry, or social and geographical contexts being brought into play); less severe cases of inabilities to estimate worry or risk between the quantitative interview and the qualitative interview, and memory decay/recall of the less severe victimizations. The 'mild' level includes: minor discrepancies or slight shifts in emphasis, and memory decay/recall problems of a minor nature. Table 4 shows the distribution of the mismatch scales. Some 36 per cent (n=41) of the mismatches were scaled at the 'catastrophic' level. The second level (n=56) accounts for some 49 per cent of the mismatches with the remaining 15 per cent (n= 17) falling into the 'mild' level. Clearly, then, mismatch is a serious problem.
Mismatch seriousness and mismatch explanation

Each mismatch was coded not just for its seriousness but also for possible explanation of cause. This entailed a great deal of interpretation on the part of the researchers. Each real mismatch was recorded and notes made about preceding comments that could have explained it.
TABLE 4 The seriousness of the mismatches

Mismatch seriousness    Total
'Catastrophic'             41
'Serious'                  56
'Mild'                     17
Total                     114

5 A three-point scale ('catastrophic'-'serious'-'mild') was developed rather than a cruder dichotomous 'high-low' split. The scaling of mismatches was validated by a member of the research team who had not initially classified them and who classified a sub-sample 'blind'. Ninety per cent of the blind classifications were to the same point on the scale.
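The blind re-classification check described in the footnote amounts to a simple percentage-agreement measure. A minimal sketch, using an invented ten-item sub-sample of ratings (the article reports only the 90 per cent figure, not the raw codes, so these lists are hypothetical):

```python
# Hypothetical ratings: the article reports only the 90 per cent agreement figure.
original = ['catastrophic', 'serious', 'serious', 'mild', 'serious',
            'catastrophic', 'mild', 'serious', 'catastrophic', 'serious']
blind    = ['catastrophic', 'serious', 'serious', 'mild', 'serious',
            'catastrophic', 'mild', 'serious', 'serious', 'serious']

# Proportion of blind classifications placed at the same point on the scale.
agreement = sum(a == b for a, b in zip(original, blind)) / len(original)
print(f'{agreement:.0%}')  # 90%
```

A more demanding check would correct for chance agreement (e.g. Cohen's kappa), but the footnote describes only raw percentage agreement.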

Table 5 crosstabulates the seriousness of mismatches by the explanation given for each. As can be seen from the table, the four most common mismatches were judged to be the results of using an 'open' as opposed to a 'closed' question (n=46, 40 per cent); the differences in the epistemological focus of the interview (n=22, 19 per cent); the measuring of 'formless' as opposed to 'concrete' fears (n=13, 11 per cent) and the use of the word 'worry' as a surrogate for other words (n=16, 14 per cent). The majority of the 'catastrophic' mismatches are between 'open' and 'closed' questions (n=21). Of the 46 mismatches attributed to this explanation, 38 were in the two highest levels ('catastrophic' and 'serious'), suggesting not just a common problem but also an important one. Note that the explanations which locate the source of error with the respondents in some way (interpretation, genuine change and memory decay), taken together, only account for 15 per cent (n=17) of the mismatches. It would appear that the majority of the mismatches are located in explanations which point to methodology as their source (the epistemological focus of the interviews, open/closed questions and the distinction between 'formless' and 'concrete' fears: 71 per cent, n=81). If one takes poor conceptualization as the root cause of the various interpretations of worry (conceptualization being logically prior to design and implementation of the research instrument) then this figure rises to 85 per cent (n=97) of mismatches. At another level, Yin (1982) cites evidence to suggest that not only are there differences between 'open' and 'closed' questions in terms of their answers, but also that these discrepancies follow a uniform direction: 'open' questions produce lower reported rates of fear of crime; 'closed' questions, higher rates. The data under consideration here were also analysed for this.
Not all mismatches applied to 'worry' about crime,6 but, of the 40 'open' and 'closed' mismatches that did relate to worry about crime, 37 confirmed Yin's finding that 'closed' questions generate higher levels of fear. Clearly it is the method which produces these mismatches rather than the respondents.
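The split quoted above between respondent-located and methodology-located explanations can be recomputed from the Table 5 totals. A sketch, in which the grouping of explanations follows the article's own discussion:

```python
# Table 5 totals per explanation, grouped by where the source of error lies.
methodological = {'epistemological focus': 22,
                  "'formless' vs 'concrete' fears": 13,
                  "'open' vs 'closed' questions": 46}
conceptual = {"'worry' variously interpreted": 16}
respondent = {'genuine change': 9,
              'interpretation of question by R': 4,
              'memory decay': 4}

total = (sum(methodological.values()) + sum(conceptual.values())
         + sum(respondent.values()))
pct = lambda n: round(100 * n / total)

print(total)                              # 114 mismatches in all
print(pct(sum(respondent.values())))      # 15 per cent located with respondents
print(pct(sum(methodological.values())))  # 71 per cent located with methodology
print(pct(sum(methodological.values()) + sum(conceptual.values())))  # 85 per cent
```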
The importance of these findings

We feel that there are good reasons to view these findings as important. They are important because they highlight the range of the epistemological, conceptual,
TABLE 5 Mismatch scale by mismatch type

Type of mismatch                   'Catastrophic'    'Serious'    'Mild'    Total
Epistemological interview focus                 3           16         3       22
'Formless' or 'concrete' fears                  5            6         2       13
'Open' or 'closed' questions                   21           17         8       46
'Worry' variously interpreted                   5           10         1       16
Genuine change                                  3            4         2        9
Interpretation of question by R                 2            2         0        4
Memory decay                                    2            1         1        4
Total                                          41           56        17      114
6 Five mismatches applied to risk assessments and one to worries about having a fire at home.

operational and technical problems which plague the quantitative investigation of the incidence of the fear of crime and which indicate cumulatively that the fear of crime is significantly misrepresented. There are four of these. First, as noted above, the vast majority of the mismatches were located between quantitative and qualitative instruments. Without wishing to suggest that one measure is any more valid than another (as Kaplan 1964: 202 notes, this is a fallacy), it suggests that problems exist with measurement somewhere, even if we are not exactly sure where. However, the fact that the vast majority of mismatches with the 'open' and 'closed' question distinction follow the same pattern as noted previously suggests that quantitative measures consistently overestimate levels of worry. If this is correct, then questions asked during quantitative research need to be re-thought. Given also that the majority of these mismatches were 'serious' or 'catastrophic', this would suggest not just that these mismatches exist (and therefore overestimate worry levels), but also that they may be exaggerating these worry levels substantially. Secondly, we must consider the extent of the problem (in terms of how validly it would measure the fear of crime) if this study were to be conducted on a larger scale. Taking only those mismatches relating to worry about crime (n=89, Table 3), we estimate that some 17 per cent of the responses do not concur with one another.7 Nor is this figure an upper bound, as it must be remembered that the mismatching was not recognized until after the field work had been completed. Had the respondents been more systematically interviewed with this methodological consideration in mind, the extent of mismatch could be much larger. Thirdly, a number of the mismatches concerned the various interpretations of the word 'worry'.
All of the following were offered as word substitutes for 'worry' by respondents (either in direct response to being asked to offer new words, or unasked): distress; anger; shock; annoyance; thinking about crime, and consciousness or awareness of the possibility of crime. Interestingly, one respondent was rated during the quantitative interview as being consistently a very low worrier. At the qualitative interview she made a distinction between worrying about and thinking about crime. The researcher then invited her to answer the same question (as used at the quantitative interview) but with the word 'think' in place of 'worry'. The respondent changed her original score of 1 to 'a 4 or a 5'. While this is by no means a mismatch, it shows how the use of language impacts upon the results gained. While 'think' and 'worry' clearly meant different things to this respondent, the general fact that some respondents were able to offer different words to describe how they felt about crime reinforces the assertion that the 'fear of crime' field may be plagued by poor conceptualization and subsequent poor operationalization. If these words are used interchangeably both by the academic community (Fattah 1993: 45-6) and by respondents, it suggests that the 'fear of crime' can mean many different things to different people, which introduces great uncertainty exactly where certainty has been taken for granted.

7 This figure was calculated as follows: [64 interviews] x [4 offence types] x [2 dimensions (fear and risk)] = 512; 89 as a percentage of 512 = 17.38. It is, unfortunately, impossible to calculate similar rates for just the qualitative data, as they are, by definition, non-numerical.
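The calculation in footnote 7 can be reproduced directly (all figures are from the article):

```python
# Estimating the proportion of worry responses that do not concur (footnote 7).
interviews = 64
offence_types = 4
dimensions = 2          # two dimensions per offence type: worry and risk

responses = interviews * offence_types * dimensions   # 512 quantified responses
worry_mismatches = 89                                 # from Table 3

rate = 100 * worry_mismatches / responses
print(responses)       # 512
print(round(rate, 2))  # 17.38 (per cent)
```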

Finally, another issue concerns people's worries across time and space. Worries or fears about crime have been conceptualized as enduring for individuals over time, with individuals' worry levels holding constant. Because the interviews in this research were undertaken with a time gap (albeit short) between them, we are able to make some brief comments upon changes in worry levels as they were accounted for by the respondents themselves. The subtle shifts of feelings over time are missed when purely quantitative methods are used to measure worry about crime. This has two effects. First, we cannot adequately relate previous experiences of crime to current feelings about crime, and second, our conceptualization is thus weakened. We need to understand changes in worry levels as much as we need to understand worry levels themselves. In short, the fear of crime has been conceptualized without reference to time, space and social context. Hence it is seen as a static and enduring feature of an individual's life. Even if the fear of crime had not been conceptualized in such a manner, the quantitative investigation of the fear of crime would still have been hampered by its being epistemologically locked into measuring the fear of crime in a manner that is unlikely to elucidate variations in fear of crime levels across time and space.

General Statements

From the analysis presented, a number of general statements can be made. The vast majority of cases exhibit instances of mismatching (n=49 out of 64), but these are mainly restricted to one or two instances per case (Table 1). Mismatching cases are spread evenly across each geographical area (Table 6) and gender (Table 7). Age would also appear to be unrelated (Table 8). In terms of the nature of mismatch, most would appear to occur between the quantitative and qualitative interviews (Table 2). Most of the mismatches refer to 'worries' about crime (Table 3). In terms of the seriousness of mismatch, there is a bias towards the more serious end of the scale, given that 36 per cent (n=41) of the mismatches are 'catastrophic' and a mere 15 per cent (n=17) 'mild' (Table 4). Comparing the seriousness of mismatch to the explanation for it, it is apparent that most explanations relate to methodological concerns (rather than to errors associated with the respondent), and that the 'open' and 'closed' question distinction is the most frequent, and is skewed heavily towards the serious end of the scale (Table 5). This suggests that poor question design may be at the root of the problem. The use of the word 'worry' as a surrogate for other emotions or words was another of the most frequent types, and was also markedly skewed towards the more serious end of the scale (Table 5). In short, mismatches stem from methodological rather than respondent-based sources, are more likely to concern worry about crime, and are more likely to be serious than minor.

Methodological suggestions for future research

We conclude by noting that there are problems with both the measurement techniques used, and the conceptual tools employed by researchers. Our words echo Hale, who writes that 'theoretical and empirical chaos has been the order of the day in the studies
TABLE 6 Non-mismatches and mismatches by area

Area               Mismatches    Non-mismatches    Total
Outer-city poor            10                 6       16
Inner-city poor            14                 2       16
Inner-city rich            12                 4       16
Outer-city rich            13                 3       16
Total                      49                15       64

TABLE 7 Mismatches by gender and area

Area               Males    Females    Total
Outer-city poor        6          4       10
Inner-city poor        6          8       14
Inner-city rich        7          5       12
Outer-city rich        7          6       13
Total                 26         23       49

TABLE 8 Mismatches by age range

Age range    Mismatching    Non-mismatching    Total
16-39                 24                  7       31
40-64                 20                  3       23
65+                    5                  5       10
Total                 49                 15       64

of the fear of crime' (1993: 13). However, we are also in a position to suggest a number of issues to be considered in the future.8

1. First, more attempts at validating measures of the fear of crime need to be undertaken. This means not just research aimed solely at this task, but also that such validation techniques should be incorporated into future crime surveys.

2. The fear of crime could fruitfully be measured as a multi-faceted phenomenon. This will allow for a greater conceptual understanding of the fear of crime, and will mean asking questions that, in addition, measure emotional, cognitive and affective elements of it. From our initial research, we suggest that one avenue researchers may explore is the extent to which a respondent thinks about a certain crime, how afraid of it they are, and how angry they feel when they think of it happening to them.

3. Given that levels of fear of crime are unstable over time, changes in levels of fear, and in emotional responses to particular events (such as victimization), must be

8 This article and a further article in this journal (Gilchrist et al., forthcoming) are the first of what is anticipated to be a series of publications from this research project. Later ones will deal with many of the issues raised herein.

addressed. These could be measured using questions such as 'How did you feel immediately you knew you had been victimized?' and 'How do you feel about that victimization now?'. Time elapsed between victimization and stated feelings should be used to scale the salience of the latter. Similarly, respondents can be asked questions about how they felt before a nominated victimization as a way of measuring the relationship between fear and victimization more accurately.

4. In order to measure on-going as opposed to intermittent fears of victimization we suggest using questions which ask about crime experiences in ways other than the usual 'last 12 months'. Questions can be phrased in a number of ways, but it is important that the respondent understands that they are being asked about a single event or about a series of events that may have taken place at any time, and are being asked to explain what actually took place, and how this affected their feelings about crime, both then and later. These statements can subsequently be coded for quantitative analysis thus: 'on-going racist graffiti in neighbourhood, R worries for children's safety', 'assault, more than 5 years ago, made R less fearful', and so on.

5. 'Formless' fear of crime questions could attempt to measure why an individual feels unsafe while walking around in their neighbourhood after dark, after asking the general question. This can be achieved by seeking levels of agreement to a battery of statements such as 'I feel unsafe walking around here after dark because I may be attacked'. This principle could even be extended to 'concrete' fear questions.

6. The social, temporal and geographical aspects of the fear of crime could be measured by asking respondents if any of the specific locales in the area in which they live are 'unsafe' during the day or night and why this is so. For example, 'Are the shops nearby unsafe during the day?' with (for example) closed responses such as 'Yes, because there is never anyone there' and 'Yes, because of the people that go there'. Such offered responses have been developed from the analysis of our qualitative data.

7. Questions should be developed which describe a fictitious everyday event (not necessarily crime-related) and which subsequently ask the respondent to express their feelings and actions if they were to be in that situation. This would help contextualize people's feelings about crime in their everyday lives. This approach was first adopted by van der Wurff et al. (1989) and in the light of their quantitative data and our own qualitative data this appears to be a fruitful approach.
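The coding suggested in point 4 above might be represented as structured records that can then be tabulated like any survey variable. A minimal sketch, in which the field names ('event', 'timing', 'effect', 'fear_direction') are our own illustrative assumptions, not a scheme taken from the article:

```python
from collections import Counter

# Hypothetical coding of the free-text victimization accounts given in point 4.
# Field names are illustrative assumptions, not the article's scheme.
coded_statements = [
    {'event': 'on-going racist graffiti in neighbourhood',
     'timing': 'on-going',
     'effect': "R worries for children's safety",
     'fear_direction': 'increased'},
    {'event': 'assault',
     'timing': 'more than 5 years ago',
     'effect': 'made R less fearful',
     'fear_direction': 'decreased'},
]

# Once coded, the qualitative accounts become countable quantities.
counts = Counter(s['fear_direction'] for s in coded_statements)
print(counts['increased'], counts['decreased'])
```

The point of such a scheme is that it preserves the timing and direction of effect that the standard 'last 12 months' question discards.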

Conclusion

At the start of this article we reviewed the calls made by some authors for methodological triangulation in an attempt to rely less heavily upon the survey tool. While this is an admirable ideal, it perhaps assumes that the combination of qualitative and quantitative data is straightforward. Making sense of data is never straightforward; making sense of quantitative and qualitative data which contradict one another is considerably harder. Given that crime surveys will always require some level
of quantitative data, we have made suggestions as to how qualitative data can be used to improve the design of survey questions. These suggestions (made immediately above) incorporate qualitative data through open-ended questions, more thorough codes and the use of vignettes. To this end, our use of qualitative data to triangulate survey data has proceeded at a stage earlier than that proposed by (for example) Ben Bowling (1993). In the light of the data we have to hand, this would appear to be a better solution to a very thorny problem. We are fortunate that we have been in a position to examine in great depth the fear of crime at a methodological level. At least one of us (Jason Ditton) has extensive experience of being involved in crime surveys that fall foul of all of the criticisms levelled above. Our interim conclusion is that the results of fear of crime surveys appear to be a function of the way the topic is researched, rather than the way it is. The traditional methods used are methods which seem consistently to over-emphasize the levels and extent of the fear of crime. It seems that levels of fear of crime, and, to a lesser extent, of victimization itself, have been hugely overestimated. The policy implications of this should escape no one. The political utility of the fear of crime is entirely dependent upon its being measurable. If, as we suggest, the fear of crime is not as easily measurable as was hitherto thought, then the remarkably rapid ascent of the fear of crime in a very short time during the 1980s might well in the future be surpassed only by its swifter demise.
APPENDIX

Instruments Used

Quantitative interview

On a scale of 1 to 5 (with 1 representing not being worried at all ever, and 5 meaning worrying a lot all the time) how worried are you that somebody might:

Break into your house (or try to break in) and steal things, or try to, or damage things?
Steal your vehicle, or things from it or off it, or do damage to it?
Vandalize your house or something outside it?
Rob you or assault you or threaten to do either?

How many times have you been a victim of any of these crimes in the last five years?

How likely (with 1 representing very unlikely, and 5 almost certain to happen) is it, do you think, that you will be a victim of any of these crimes in the next year?

How safe do you feel walking alone in this area after dark?
1=very safe; 2=fairly safe; 3=a bit unsafe; 4=very unsafe

How much do you worry about:

Becoming seriously ill?
Losing your job or being unable to find a job?
Another member of your family losing their job?
Becoming involved in a road accident?
Your finances?
Environmental pollution?
Being a victim of any crime?
Having a fire in your home?
Noise from your neighbours?
1=a lot; 2=a bit; 3=not much; 4=very little; 5=not at all

Qualitative interview

1. Worry about crime
You said that you did/did not worry about ... What do you mean by worry? [Intensity: mildly pissed off to terrified] Why worry about this? How often do you worry? Constant? Intermittent?

2. Area
What is it like? Safe or not? Why? [Relate to R and to specific crime] Darkness/time of day: any effect?

3. Behaviour
What types of things do you do? Not do? Where do you go? How get there? Anywhere not go? Why? Clothing/dress: any effect? Avoidance strategies.

4. Victimization
What happened? How did you feel before? And after? Who/what helped? [Formal and informal forms of help] How feel now? (fear/risk) Why? (loss/damage/inconvenience/fear/discomfort)

5. Other
Ideas about other areas. Effect of others' ideas: friends/media etc. Culpability for victimization? Domestic violence: location of danger. Concern about corporate crime? Health and safety issues: ferry disasters, Kings Cross Underground fire? Large corporate fraud? Environmental crimes? Government deception?

REFERENCES

BANNISTER, J. (1993), 'Locating Fear: Environment and Ontological Security', in H. Jones, ed., Crime and the Urban Environment. Aldershot: Avebury.
BELSON, W. A. (1986), Validity in Survey Research. Aldershot: Gower.
BERNARD, Y. (1992), 'North American and European Research on Fear of Crime', Applied Psychology: An International Review, 41/1: 65-75.
BLOCK, C. R. and BLOCK, R. L. (1984), 'Crime Definition, Crime Measurement and Victim Surveys', Journal of Social Issues, 40/1: 137-60.
BOWLING, B. (1993), 'Racial Harassment and the Process of Victimization', British Journal of Criminology, 33/2: 231-50.
BREWER, J. and HUNTER, A. (1989), Multimethod Research: A Synthesis of Styles. London: Sage.
BRYMAN, A. (1984), 'The Debate About Quantitative and Qualitative Research: A Question of Method or Epistemology?', British Journal of Sociology, 35/1: 75-92.
CHAMBERS, G. and TOMBS, J. (1984), The British Crime Survey: Scotland. London: HMSO.
FATTAH, E. A. (1993), 'Research on Fear of Crime: Some Common Conceptual and Measurement Problems', in Bilsky et al., eds., Fear of Crime and Criminal Victimisation. Stuttgart: Ferdinand Enke Verlag.
FATTAH, E. A. and SACCO, V. F. (1989), Crime and Victimisation of the Elderly. New York: Springer-Verlag.
FERRARO, K. F. and LAGRANGE, R. (1987), 'The Measurement of Fear of Crime', Sociological Inquiry, 57/1: 70-101.
FIELDING, N. G. and FIELDING, J. L. (1986), Linking Data, Qualitative Research Methods, vol. 4. Newbury Park: Sage.
FIGGIE, H. E. (1980), The Figgie Report on the Fear of Crime: America Afraid. Part One: The General Public. Willoughby, OH: A-T-O Inc.
HALE, C. (1993), 'Fear of Crime: A Review of the Literature'. Report to the Metropolitan Police Service Working Party on the Fear of Crime.
HALE, C. (1996), 'Fear of Crime: A Review of the Literature', International Review of Victimology, 4: 79-150.
HARRIS, L. and ASSOCIATES (1975), Myth and Reality of Aging in America. Washington, DC: National Council on Aging.
HOUGH, M. (1995), Anxiety About Crime: Findings from the 1994 British Crime Survey, Home Office Research Study No. 147. London: Home Office.
HOUGH, M. and MAYHEW, P. (1983), The British Crime Survey: First Report. London: HMSO.
HOUGH, M. and MAYHEW, P. (1985), Taking Account of Crime: Key Findings from the Second British Crime Survey. London: HMSO.
KAPLAN, A. (1964), The Conduct of Inquiry. San Francisco: Chandler Publishing Company.
KING, M. B. (1992), 'Male Sexual Assault in the Community', in G. C. Mezey and M. B. King, eds., Male Victims of Sexual Assault. Oxford: OUP.
KINSEY, R. and ANDERSON, S. (1992), 'Crime and the Quality of Life: Public Perceptions and Experiences of Crime in Scotland', Central Research Unit Paper. Edinburgh: Scottish Office.
MAGUIRE, M. and CORBETT, C. (1987), The Effects of Crime and the Work of Victim Support Schemes. Aldershot: Gower.
MAXFIELD, M. (1987), Explaining the Fear of Crime: Evidence from the 1984 British Crime Survey, Home Office Research Paper No. 41. London: Home Office.
MAYHEW, P., ELLIOTT, D. and DOWDS, L. (1989), The 1988 British Crime Survey. London: HMSO.
MCKEGANEY, N. (1995), 'Quantitative and Qualitative Research in the Addictions: An Unhelpful Divide', Addiction, 90/6: 749-51.
MUIR, C. (1994), Personal Communication, 1991 Census Data.
PAYNE, D. (1992), Crime in Scotland: Findings from the 1988 British Crime Survey, Central Research Unit Paper. Edinburgh: Scottish Office.
POWER, R. (1996), 'Comment on McKeganey's Editorial', Addiction, 91/1: 146-7.
SCHNEIDER, A. L. (1981), 'Methodological Problems in Victim Surveys and Their Implications for Research in Victimology', Journal of Criminal Law and Criminology, 72/2: 818-38.
SELLTIZ, C., WRIGHTSMAN, L. S. and COOK, S. W. (1976), Research Methods in Social Relations, 3/e. New York: Holt, Rinehart and Winston.
SKOGAN, W. (1981), Issues in the Measurement of Victimisation. Washington, DC: US Department of Justice, US Government Printing Office.
SKOGAN, W. (1990), The Police and Public in England and Wales: A British Crime Survey Report, Home Office Research Study No. 117. London: HMSO.
STRATHCLYDE REGIONAL COUNCIL (1994), Social Strategy for the Nineties: Priority Areas, July. Glasgow.
VAN DER WURFF, A., VAN STAALDUINEN, L. and STRINGER, P. (1989), 'Fear of Crime in Residential Environments: Testing a Social Psychological Model', Journal of Social Psychology, 129/2: 141-60.
YIN, P. (1982), 'Fear of Crime as a Problem for the Elderly', Social Problems, 30/2: 240-45.
ZAUBERMAN, R. (1985), 'Sources of Information About Victims and Methodological Problems in This Field', in Research on Victimisation, vol. 22. Council of Europe.
