
Quality & Quantity 34: 407–418, 2000. © 2000 Kluwer Academic Publishers. Printed in the Netherlands.


Note

The Methodology of Risk Perception Research


LENNART SJÖBERG
Center for Risk Research, Stockholm School of Economics, Box 6501, 113 83 Stockholm, Sweden

Abstract. Risk perception is not strictly a matter of sensory perception, but of attitudes and expectations. As such, it can be studied by reasonably well-developed methods of attitude measurement and psychological scaling. Such measurement needs to be applied in a pragmatic fashion, however, since the discussions of fundamental measurement and of the scale levels required for various types of statistical analysis have failed to establish a useful basis for empirical research. The paper also discusses sampling procedures and the response rate problem. In risk perception work, there is usually a bias involving too many respondents with an above-average level of education, but that variable tends to be weakly related to risk perception variables. Finally, post-modern claims and their rejection of quantitative methods are critically discussed.

Key words: risk perception, research methods, measurement, attitudes.

This is a study within CEC project RISKPERCOM (Contract FI4PCT950016), supported also by the Swedish Council for Planning and Coordination of Research (FRN), the Swedish Council for Humanistic and Social Science Research (HSFR), the Swedish Nuclear Power Inspectorate (SKI), and the Swedish Radiation Protection Institute (SSI).

1. Introduction

Risk perception is frequently held to be crucial in the understanding and management of risk in policy contexts (Sjöberg, 1987b). Conflicting views about risks constitute a social and political problem of considerable magnitude in many contexts (Sjöberg, 1980, 1998a). Furthermore, implied life values in various policy and regulatory decisions have been found to vary enormously (Morrall, 1986; Ramsberg & Sjöberg, 1997), and although the reasons behind such variation are only partly understood, risk perception does appear to be one important factor (Ramsberg & Sjöberg, 1998). Perceived risk, in turn, is surely not merely a function of probability of harm; many other factors, such as attitudes, enter (Sjöberg, 1996, 2000). The present paper takes as a starting point the notion that risk perception, of the public, of experts and of other special groups, is important, and hence the question arises how it should be investigated. We have here a question which may seem simple and innocent at the surface, but it calls for opening a can of worms, or even several such cans . . .

2. Is There Such a Thing as Perceived Risk, and If So, What Is It?

First, can risk at all be perceived? It may be argued that it cannot, since there is nothing out there which can be called risk and which can be sensed. Hence, there is no risk perception, cp. Brehmer (1987)! This argument would seem to be the beginning and the end of the debate, but things are not that simple. True, risk cannot be sensed, only dangers and threats. Risk is about a future event, and future events can be imagined or construed, not sensed, unless you are a psychic or a new Nostradamus. Furthermore, risk is about both the likelihood, or probability if you prefer, of harm and the size and quality of the harmful consequences, should they occur (Drottz-Sjöberg, 1991). Hence, risk is something quite different from perception in the technical sense of the word as used by students of the psychology of perception and sensation. Yet, the term caught on in the 1970s, and seems to be here to stay, in spite of its basically confusing meaning. Perhaps there is no more handy word in the English language, as there is in, say, Swedish or German, and perhaps the term is attractive because it leads the thoughts to perception mechanisms: a simplistic view of risk perception as being stimulus driven. Nothing could be further from the truth, of course, since there is not even a stimulus around! Risk perception is all about thoughts, beliefs and constructs (Sjöberg, 1979).

The point can indeed be made that risk perception is not, at least not only or mainly, a question of cognition. Now, cognition is a term just as confusing as perception. It refers to higher mental processes such as thought, memory, judgments and decision making, and is supposed to be more flexible, less bound by the environmental contingencies, than perception (Brunswik, 1956; Sjöberg, 1971a). Maybe risk perception is a question of subjective probability, which in turn possibly is to be understood as a function of heuristics (Tversky & Kahneman, 1974)? Many researchers seem to have believed this in the 1970s, on the basis of the elegant work by Tversky and Kahneman on subjective probability. However, two fatal mistakes were made. First, risk is not only a question of probability, even if probability is quite important in accounting for risk. Second, and more important, the heuristics work took its basis in probability calculus problems and was based on counter-intuitive solutions to some problems of that kind. The generalizability of this work to real-world problems of risk perception is highly tenuous and speculative, although there are a few cases supporting it to some extent (e.g., McNeil et al., 1982). Risk is very seldom connected with well-specified calculus problems; other factors enter, and mathematical analysis of perceived risk is usually intractable or even irrelevant. The heuristics work attempted to establish risk perception as a cognitive phenomenon governed by certain biasing factors that could in turn
be explained as necessary for dealing with problems of cognitive constraints and cognitive delusions or simply lack of education about probability and its laws. But risk perception, again, is a question of beliefs about risk, and as such it is akin to many other phenomena which have been investigated by social psychologists under the heading of attitude, see Eagley and Chaiken (1993) for an extensive review. Attitude has many determinants and ramications cognitive limitations play a minor role here. Risk perception should hence be more closely related to social than to cognitive psychology and cannot be understood by the demonstration of faulty intuitions in probability calculus, however elegantly they are demonstrated.

3. How Can You Investigate Such a Seemingly Elusive Phenomenon as People's Perception of Risk?

The first answer to the question of the subheading is that you can investigate it in the same way as other beliefs and attitudes. There is a wealth of research which has shown, since the first attempts were made by Thurstone in the 1920s (Thurstone, 1928), that attitudes can be measured. I even venture to assert that it is not that difficult to measure beliefs and attitudes. People can be asked to make ratings of the size of perceived risk on a scale, say, from 0 (no risk) through a number of defined categories to a maximum risk, perhaps defined as an "extremely large risk". Such ratings have been found to be quite useful. Other, more complicated, ways of rating subjective intensities have been suggested and used in psychophysics and related work in perception, and also with regard to constructs and beliefs (Ekman, 1961; Ekman & Bratfisch, 1965; Ekman & Sjöberg, 1965; Sjöberg, 1971b; Stevens, 1957). However, these so-called ratio scaling methods have not turned out to give more useful data than the simpler category ratings, and they do provoke protests from subjects who feel that they cannot give such precise estimates of their belief strengths as called for in the instructions.

The discussion of scale levels must be mentioned here, since it is still brought up in discussions of appropriate data analysis, especially by statisticians. The debate started a long time ago, in the 1940s, and was carried over into psychology by S. S. Stevens, who argued against category scales. These scales, according to him, were biased in a nonlinear manner and should be replaced by some variation of magnitude estimation or ratio ratings, which were supposed to yield ratio scales. However, as mentioned above, the ratio rating methods gave rise to a host of difficult methodological problems (Sjöberg, 1971b), mostly just ignored in the ratio scale tradition, while Anderson managed to show, in a very extensive research program, that category scales had many desirable quantitative properties (Anderson, 1996). The measurement tradition later led to attempts at devising methods for representational measurement. Although some valuable conceptual insights came out of this work, it has resulted in few practically useful methods (Cliff, 1992; Dawes,
1994b), and the psychometric tradition, based on classical test theory (Gulliksen, 1950), prevails (Judd & McClelland, 1998).

Logically speaking, category rating scales could be extremely misleading. Suppose one category interval is subjectively 1000 times larger than the others. In such a case, analyses that assume all intervals to be equal could be quite invalid. However, nothing suggests that such extreme deviations actually occur. And even if there are some mild deviations from strict linearity, they probably play only a minor role and can be safely disregarded. As a check, some simple forms of statistical analysis can always be carried out with ordinal statistics. When this is done, results typically coincide, more or less, with those based on conventional metric analyses. It is plausible that category rating scales carry more information than the simple ordinal one, in particular also some valid information about the order of the intervals. If this is true, they yield ordered metric scales, and it has been shown that such scales are close approximations to interval measurement (Lehner & Noma, 1980). It can be concluded that psychological rating scales give close enough approximations to interval scales; restrictions on numerical assignment, posed by the scaling methods, are large enough to rule out widely divergent numerical representations. Hence, the data analyses will be robust with relation to the finer details of measurement (a small numerical illustration of this robustness is sketched at the end of this section). However, this argument is not widely known, or tends to be buried in a wealth of more or less sophisticated measurement discussions.

There is frequent criticism of quantitative approaches, ranging from the naive "a 4 could just as well be a 5" to more elaborate arguments built on the logic of measurement rather than the psychology of measurement. The result of these uninformed and destructive standpoints is a flight from quantitative methods altogether. Thus, it is commonly argued that risk perception should be studied by soft methods such as depth interviews, and that surveys are unnecessary or even misleading. It suffices, apparently, to interview in an unstructured way a handful of persons to get an idea about risk perception in a population! Social scientists know, however, that this is indeed a very dangerous route to travel. The notions of soft methodology have historically been closely connected with some schools of clinical psychology and psychiatry, most notably the so-called psychodynamic schools. These are based on the work by Freud and Jung a century ago and use many variations of their basic conceptual contributions. In risk perception work, we have seen the launching of ideas about "radiation phobia"; see Drottz-Sjöberg and Persson (1993) for a critical discussion. Jungian ideas appear to be attractive to some investigators of risk perception (Fritzsche, 1995, 1996). Yet, current discussions of the psychodynamic schools leave little doubt about the frailty of these theories and their accompanying methods (Crews, 1996; Macmillan, 1991; Noll, 1994). In addition to these critical comments on soft methodology, it should be stressed that people can make (approximately) valid ratings of the intensity of their beliefs, while interviews are likely to be subjected to many interviewer biases. The latter is
true also of telephone interviews, which are becoming increasingly popular. Furthermore, people can be prompted, in interviews, to construct negative scenarios which have little or nothing to do with their spontaneous risk perception as revealed in responses to a questionnaire eliciting risk perception ratings. Especially sensitive questions, and risks are indeed sensitive matters, are best studied by mailed surveys which can be answered anonymously (Wentland & Smith, 1993). The fact that mailed surveys are much cheaper than other methods speaks in their favor whenever the other methods have not been found to be clearly superior. The often claimed low response rate of mailed surveys can be greatly improved (Dillman, 1991). It is sometimes argued that interviewers can monitor and guide the process and correct any misunderstandings, etc., and that this is an important advantage of interviews over mailed surveys (Suchman & Jordan, 1990); see Schegloff (1990) for a critical comment. The questions themselves appear to be a major factor in accounting for answers to a survey, whether it be administered face-to-face or by mail (Schwarz et al., 1998), while interviewer variability, in standardized interviews, is a relatively minor factor (Groves, 1989). Yet, in highly unstandardized interviews of the depth interview variety there is every reason to believe that interviewers vary the questions in an uncontrolled manner, and there is no guarantee that interviewers do not affect the results they get.

On the other hand, quantitative rating scales are quite well behaved. In a methodological study, I investigated the properties of various response formats used for studying risk perception (Sjöberg, 1994). It was found that all formats gave essentially linearly related results, but that some of them gave data which were more efficient in discriminating among hazards. Category scales with a limited number of response categories, say 5 or 7, appear to be preferable. The scale used in the psychometric paradigm (see Englander et al. (1986) for an example), with 21 categories, turned out to be inferior. Graphical rating scales also appear to be inferior. Too many categories may be confusing to people.

So far, the present discussion has dealt only with risk perception explicitly, i.e., with instructions where people are asked to judge perceived risk. Other terms are common, such as concern, worry and safety. It would be interesting to know what the most common ways of operationalizing "risk perception" in applied work are, and what the basis is for choosing one approach or another. One example is provided by work on industrial accidents where workers were asked to judge if they felt safe during the execution of various tasks (Rundmo & Sjöberg, 1998). Still other possibilities exist. Some current work is focused on worry and concern rather than risk. For example, ratings of worry and worry characteristics were obtained by MacGregor (1991), but they were not related to perceived risk. A study of concern and a number of other risk dimensions also excluded perceived risk level (Fischer et al., 1991). It is not obvious that worry and concern judgments can replace judgments of risk level, consequences or other related dimensions. Ratings of worry have been found to be only weakly related to perceived risk, however
(Drottz-Sjöberg & Sjöberg, 1990; Sjöberg, 1998d). I found this to be equally true of general and specific worry measures. It is therefore unclear what policy implications such data have. It can also be noted that a word such as concern is hard to translate to other languages without losing some of its essential meaning, or without adding some new meanings. Risk, by way of contrast, is a word which appears in many languages and is clearly to be translated by its closely corresponding expressions (risque, Risiko, riski, etc.). International understanding and comparability are therefore probably improved when risk is used, as compared with studies using such terms as concern. This does not deny the need for a closer scrutiny of probabilities and consequences, of course, or for attention being paid to the risk definitions that people use spontaneously (Drottz-Sjöberg, 1991). Perceived risk is, by itself, not a very good indicator of demand for risk reduction, which may be a more directly relevant dimension (Sjöberg, 1998b, 1999a). Such demand is more closely related to the expected severity of consequences, should harm be inflicted by a stated hazard. This finding may seem counter-intuitive, but it has been found to hold in many studies, whether the hazard is defined as a consequence or as a risk-generating activity (Sjöberg, 1999c).
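As a small numerical illustration of the robustness argument made earlier in this section, the following sketch compares a metric (Pearson) and an ordinal (Spearman) analysis of the same category ratings. The data are simulated, and the parameter values (sample size, correlation, the nonlinear distortion of the rating scale) are assumptions chosen only for illustration; they are not taken from any of the studies cited.

```python
# Illustrative sketch only: simulated ratings, assumed parameter values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 400
latent_risk = rng.standard_normal(n)                           # underlying perceived-risk dimension
criterion = 0.6 * latent_risk + 0.8 * rng.standard_normal(n)   # a correlated criterion variable

# Map the latent dimension onto a 7-point category scale through a
# nonlinear but monotone transformation (unequal subjective category widths).
rating = np.digitize(np.tanh(latent_risk), np.linspace(-0.9, 0.9, 6)) + 1

pearson_r, _ = stats.pearsonr(rating, criterion)    # "metric" analysis of the ratings
spearman_r, _ = stats.spearmanr(rating, criterion)  # ordinal analysis of the same ratings
print(f"Pearson r = {pearson_r:.2f}, Spearman rho = {spearman_r:.2f}")
# The two coefficients generally agree closely, which is the robustness property relied on above.
```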

4. The Accumulation of Knowledge

Any scientific field aims at the improvement of knowledge, provided that it is accepted that there can be such a thing. Of course, Tolstoy was not a better writer than Homer, but literature is not the same as science, or is it? I will here not accept the post-modern claim that all knowledge is arbitrary and simply a matter of social construction, but I presume that "there is something out there" and that some statements about that something are better than others. It is our job to improve on statements about risk perception. How can this be done?

Psychological research is full of traps (Sjöberg, 1981, 1983, 1987a, 1998c). A common procedure is that of hypothesis testing. A model or theory of some kind implies hypotheses, almost always of the kind "X and Y are related" or "A is different from B". Straw men in the form of null hypotheses are then set up, and statistical analysis is used to derive a measure of how likely the actual data would be, given that the null hypothesis is true. If this probability is low, it is concluded that the proposed hypothesized relationship probably exists. There are many problems with this so far dominant approach to the accumulation of knowledge in the behavioral sciences. One is a very basic logical one: even if the null hypothesis is unlikely, one cannot logically deduce that the alternative hypothesis is likely (Falk & Greenbaum, 1995). In an attempt to defend the traditional approach to hypothesis testing, Hagen argued that deductions need not at all be logically correct, or that bizarre conclusions can be drawn in a logically correct manner (given a bizarre premise) (Hagen, 1997b). Clearly, the apparent necessity to use such obviously invalid or irrelevant arguments suggests that there is no really good way to defend the received wisdom of hypothesis testing . . .

Another problem is that statistical significance is so easy to obtain. Perhaps that is a major reason for its popularity. All it takes, even with very minuscule effects and correlations, is a moderately sized sample. Researchers typically hunt for significance, and journal editors, the gatekeepers of Academia, rarely ask for anything more by way of quantitative results. Any hunch can typically gain some small effect in data, and scientific divergence rather than convergence is the result. Research becomes something of a random walk, because minuscule effects can always be found, and can have any of a large number of explanations. (Textbook writers, in the next stage, seldom take up the issue of strength of relationship at all, but limit themselves to writing about "systematic" or "significant" effects.) For example, background data are seldom thoroughly checked in connection with ideas about new dimensions in any field. Take cultural theory of risk perception as an example. We have here minuscule effects of about 5% explained variance (Sjöberg, 1997). At the same time it is quite plausible that some or all of these small effects are due to gender variation, but that possibility is typically not checked out. Researchers have been content with the demonstration of statistical significance. It is not strange at all that the reliance on hypothesis and significance testing brings about an explosion of complexity, and that any reading of the research literature which does not consider the size of effects leaves an impression of a bewildering and overwhelming multiplication of new concepts.
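The point that statistical significance comes cheaply can be made concrete with a small simulation. The sample size and the effect size below are assumed values chosen only for illustration (they are not estimates from any cited study): a correlation accounting for about 1% of the variance is routinely "significant" in a survey of moderate size.

```python
# Illustrative sketch only: how little it takes to reach p < 0.05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 2000          # a moderately sized survey sample (assumed)
true_r = 0.10     # a minuscule effect: about 1% explained variance (assumed)

x = rng.standard_normal(n)
y = true_r * x + np.sqrt(1 - true_r**2) * rng.standard_normal(n)

r, p = stats.pearsonr(x, y)
print(f"r = {r:.3f}, explained variance = {r**2:.1%}, p = {p:.5f}")
# With most seeds, p falls well below 0.05 although the effect explains only about 1% of the variance.
```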

5. Samples

Any study of risk perception requires a sampling of respondents. The question is how to do it and what should be required of the sampling procedure. Many investigators have been content with a convenience sample of people who for some reason were available and willing to participate. They then try to draw conclusions about a population. A striking example is a French study where graduate students were respondents and the authors drew conclusions about the French public at large (Karpowicz-Lazreg & Mullet, 1993). The paper was published in a prestigious journal in the field of risk research, so apparently the editor and the reviewers were not worried about the use of a highly specialized convenience sample. Other examples are easy to find. Most of the cross-country comparisons of risk perception performed in the psychometric paradigm used academic convenience samples and went on to compare nations on this basis; see Boholm (1998) for a review. Others have used a canvassing strategy: widely different convenience samples have been used in order to get a notion about the range of risk perceptions in a country (Nyland, 1993; Sjöberg et al., in press). In the latter work, there is at least some possibility to get an impression of the within-country variation among groups.

Social scientists frequently appear to be pessimistic about getting randomly sampled persons to respond to mailed questionnaires. The competence to perform such work is apparently no longer widely spread in the academic community,
but resides mainly in commercial firms, which in turn have most of their experience from market studies. This, however, is not a good place to start, because market studies are done under quite different auspices from studies of risk perception. People are clearly less than enthusiastic about spending some of their time doing unpaid work for what are obviously commercial interests. When they are approached about taking part in a study of important and interesting social and political topics, such as risk perception of technology, they react, quite understandably, in a different manner. We have done many studies with extensive questionnaires, up to some 40 pages in A5 format, and found that some 60–70% of random samples of the population at large are willing to participate for, in most cases, quite small incentives. About 45 minutes to 1 hour is required for the task by each respondent. With special groups, such as experts and politicians, we have obtained some 10% higher response rates, as we also have in time periods when a hazard was particularly salient, such as in 1986 in the aftermath of the Chernobyl accident (Sjöberg & Drottz, 1987). In typical commercial applications in Sweden, pollsters report some 70% response in in-home or telephone interviews. In other words, the results are slightly better in this particular respect, but at a cost about 20 times higher, and they are probably worse due to the lack of anonymity and interviewer bias. Interviewer fraud is also a very real possibility in a certain number of the cases reported as interviewed (Aaker & Day, 1990).

Possibly, conditions in Sweden favor a mailed questionnaire strategy; in other countries it might not work, and other approaches may be called for. Yet, Dillman has devised methods that seem to work well enough with mailed questionnaires in a U.S. setting, although probably with shorter questionnaires (Dillman, 1991). A U.K. example is provided by the work of Eiser (Eiser et al., 1998). In our experience we get about the same response rate with a 20-page questionnaire as with one containing 40 pages, however, so the length of the questionnaire seems not to be a crucial issue.

Admittedly, with 60% response there may be some bias in the sample of respondents as compared to the population, and with 70% response as well. We typically detect a bias only in one major respect: the respondents have, on the average, a higher level of education than the public at large. How important is this fact? Correlations have routinely been computed between level of education and core variables of risk perception. It is then found that these correlations are quite modest in size. In other words, even if there is a bias in terms of level of education, this is apparently less important than one might have believed (a rough numerical sketch of this point is given at the end of this section). Such bias as there may be furthermore implies that levels of perceived risk are underestimated, since the correlation between educational level and size of perceived risk is negative. Another possible source of bias is self-selection and interest. It is likely that those who respond to any survey are people who have a particular interest in the topics covered by the survey. It is not documented, however, that such
self-selection based on interest brings with it any serious bias, nor that it would bias different response modes in different ways.
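The education-bias argument above can be given a rough numerical form. In the sketch below, the response probabilities and the weak negative education-risk correlation are assumed values for illustration only, not estimates from any particular survey; the point is simply that even a clearly education-skewed sample shifts the estimated mean level of perceived risk very little.

```python
# Illustrative sketch only: assumed correlation and assumed response probabilities.
import numpy as np

rng = np.random.default_rng(3)
N = 200_000
education = rng.standard_normal(N)          # standardized level of education
r = -0.10                                   # assumed weak negative education-risk correlation
risk = 3.5 + 0.8 * (r * education + np.sqrt(1 - r**2) * rng.standard_normal(N))  # 1-7 style scale

# Better-educated people are assumed to be more likely to return the questionnaire.
p_respond = np.where(education > 0, 0.70, 0.50)
responded = rng.random(N) < p_respond

print(f"population mean perceived risk: {risk.mean():.3f}")
print(f"respondent mean perceived risk: {risk[responded].mean():.3f}")
# With these assumptions the respondent mean is lower by only about 0.01 scale units,
# i.e., perceived risk is very slightly underestimated, in line with the argument above.
```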

6. Conclusions

The soft methodology has face validity, but no more. It is expensive to use, often gives confusing results, and is very susceptible to interviewer expectations and other biases. Its use is advocated by many who take a clinical approach to the study of risk beliefs, but bringing clinical psychology and psychiatry into risk perception work seems less than useful, at least as long as normal people are studied rather than pathological clinical groups (Drottz-Sjöberg & Persson, 1993). Clinical psychology and psychiatry per se are furthermore under heavy criticism (Dawes, 1994a; Hagen, 1997a). The idea of pursuing, e.g., Jungian theory in risk perception research (Fritzsche, 1995, 1996) seems to be, at the very least, futile when the basic theory itself is shown to be incoherent and largely incomprehensible, and at least partly based on outright fraud (Noll, 1994). The entire post-modernist philosophy of social research is of course an ever-present factor of importance in discussions of method; see Windschuttle (1996) for a clear and illuminating exposé of the lack of scientific and logical basis for these claims. It is much to be regretted that critical discussions of quantitative methods seem to have provided impetus to what is so much worse: uninhibited subjectivism.

The criticism of quantitative risk perception research is just a special case of the larger debate about positivism. This is not the place to enter into that debate. But one dimension can be mentioned, viz. that of simplicity or cognitive economics. It is easy to produce a wealth of information about anybody's risk perception. People can talk for hours about risks, and they can easily construct risk scenarios. Things can go wrong in so many ways, but right only in a few. Worry is an essential part of the human predicament. The question is what to do with such information and how valid it is. It is exceedingly difficult to summarize all of the details, even with a small number, say 30, of interviews. How important are a given respondent's idiosyncratic notions about any given hazard? The quantitative approach tries to do the opposite of the qualitative one. Instead of generating more and more details and new aspects, it is a way of singling out the few dominating and most important themes. The idea is to simplify rather than to complicate. The simplified picture allows both the researcher and the policy maker to concentrate on what is important, knowing very well that there is much more to the topic for those who want completeness or wish to enjoy the pleasure of multiplicatory complexity. The potential value of risk perception research for policy makers has been discussed elsewhere (Sjöberg, 1999b). Science cannot proceed without simplification. It is the basic presumption of science that simplification is both necessary and possible, and that not all problems can be treated at the same time. The most tractable and important ones should come first.

References

Aaker, D. A. & Day, G. S. (1990). Marketing Research, 4th edn. New York: Wiley.
Anderson, N. (1996). A Functional Theory of Cognition. Mahwah, NJ: Erlbaum.
Boholm, Å. (1998). Comparative studies of risk perception: A review of twenty years of research. Journal of Risk Research 1: 135–164.
Brehmer, B. (1987). The psychology of risk. In: W. T. Singleton & J. Hovden (eds), Risk and Decisions. New York: Wiley, pp. 25–39.
Brunswik, E. (1956). Perception and the Representative Design of Psychological Experiments, 2nd edn. Berkeley: University of California Press.
Cliff, N. (1992). Abstract measurement theory and the revolution that never happened. Psychological Science 3: 186–190.
Crews, F. (1996). The verdict on Freud. Psychological Science 7: 63–68.
Dawes, R. M. (1994a). House of Cards. Psychology and Psychotherapy Built on Myth. New York: The Free Press.
Dawes, R. M. (1994b). Psychological measurement. Psychological Review 101: 278–281.
Dillman, D. A. (1991). The design and administration of mail surveys. Annual Review of Sociology 17: 225–249.
Drottz-Sjöberg, B.-M. (1991). Perception of Risk. Studies of Risk Attitudes, Perceptions and Definitions. Stockholm: Stockholm School of Economics, Center for Risk Research.
Drottz-Sjöberg, B.-M. & Persson, L. (1993). Public reaction to radiation: Fear, anxiety or phobia? Health Physics 64: 223–231.
Drottz-Sjöberg, B.-M. & Sjöberg, L. (1990). Risk perception and worries after the Chernobyl accident. Journal of Environmental Psychology 10: 135–149.
Eagly, A. H. & Chaiken, S. (1993). The Psychology of Attitudes. Fort Worth, TX: Harcourt Brace Jovanovich.
Eiser, J. R., Podpadec, T. J., Reicher, S. D. & Stevenage, S. V. (1998). Muddy waters and heavy metal: Time and attitude guide judgments of pollution. Journal of Environmental Psychology 18: 199–208.
Ekman, G. (1961). Some aspects of psychophysical research. In: W. A. Rosenblith (ed.), Sensory Communication. New York: Wiley.
Ekman, G. & Bratfisch, O. (1965). Subjective distance and emotional involvement: A psychological mechanism. Acta Psychologica 24: 446–453.
Ekman, G. & Sjöberg, L. (1965). Scaling. Annual Review of Psychology 16: 451–474.
Englander, T., Farago, K., Slovic, P. & Fischhoff, B. (1986). A comparative analysis of risk perception in Hungary and the United States. Social Behavior 1: 55–66.
Falk, R. & Greenbaum, C. W. (1995). Significance tests die hard: The amazing persistence of a probabilistic misconception. Theory & Psychology 5: 75–98.
Fischer, G. W., Morgan, M. G., Fischhoff, B., Nair, I. & Lave, L. B. (1991). What risks are people concerned about? Risk Analysis 11: 303–314.
Fritzsche, A. W. (1995). The role of the unconscious in the perception of risks. Risk: Health, Safety & Environment 6: 15–40.
Fritzsche, A. W. (1996). The moral dilemma in the social management of risk. Risk: Health, Safety & Environment 7: 41–45.
Groves, R. M. (1989). Survey Errors and Survey Costs. New York: Wiley.
Gulliksen, H. (1950). Theory of Mental Tests. New York: Wiley.
Hagen, M. A. (1997a). Whores of the Court. The Fraud of Psychiatric Testimony and the Rape of American Justice. New York: Regan Books.
Hagen, R. L. (1997b). In praise of the null hypothesis statistical test. American Psychologist 52: 15–24.

Judd, C. M. & McClelland, G. H. (1998). Measurement. In: D. T. Gilbert, S. T. Fiske & G. Lindzey (eds), The Handbook of Social Psychology, Vol. I. Boston: McGraw-Hill, pp. 180–232.
Karpowicz-Lazreg, C. & Mullet, E. (1993). Societal risks as seen by the French public. Risk Analysis 13: 253–258.
Lehner, P. E. & Noma, E. (1980). A new solution to the problem of finding all numerical solutions to ordered metric structures. Psychometrika 45: 135–137.
MacGregor, D. (1991). Worry over technological activities and life concerns. Risk Analysis 11: 315–324.
Macmillan, M. B. (1991). Freud Evaluated: The Completed Arc. Amsterdam: North-Holland.
McNeil, B. J., Pauker, S. G., Sox, H. C. & Tversky, A. (1982). On the elicitation of preferences for alternative therapies. New England Journal of Medicine 306: 1259–1262.
Morrall, J. F., III (1986). A review of the record. Regulation (Nov/Dec): 25–34.
Noll, R. (1994). The Jung Cult. Origins of a Charismatic Movement. Princeton, NJ: Princeton University Press.
Nyland, L. G. (1993). Risk Perception in Brazil and Sweden (Rhizikon: Risk Research Report No. 15). Center for Risk Research, Stockholm School of Economics.
Ramsberg, J. & Sjöberg, L. (1997). The cost-effectiveness of life saving interventions in Sweden. Risk Analysis 17: 467–478.
Ramsberg, J. & Sjöberg, L. (1998). The importance of cost and risk characteristics for attitudes towards lifesaving interventions. Risk: Health, Safety & Environment 9: 271–290.
Rundmo, T. & Sjöberg, L. (1998). Risk perception by offshore oil personnel related to platform movements. Risk Analysis 18: 111–118.
Schegloff, E. A. (1990). Comment. Journal of the American Statistical Association 85: 248–251.
Schwarz, N., Groves, R. M. & Schuman, H. (1998). Survey methods. In: D. T. Gilbert, S. T. Fiske & G. Lindzey (eds), The Handbook of Social Psychology, Vol. I. Boston: McGraw-Hill, pp. 143–179.
Sjöberg, L. (1971a). The new functionalism. Scandinavian Journal of Psychology 12: 29–52.
Sjöberg, L. (1971b). Three models for the analysis of subjective ratios. Scandinavian Journal of Psychology 12: 217–240.
Sjöberg, L. (1979). Strength of belief and risk. Policy Sciences 11: 39–57.
Sjöberg, L. (1980). The risks of risk analysis. Acta Psychologica 45: 301–321.
Sjöberg, L. (1981). On the homogeneity of psychological processes. Quality and Quantity 15: 17–30.
Sjöberg, L. (1983). Defining stimulus and response: An examination of current procedures. Quality and Quantity 17: 369–386.
Sjöberg, L. (1987a). Conceptual and empirical status of mental constructs in the analysis of action. Quality and Quantity 16: 125–137.
Sjöberg, L. (1987b). Risk and Society. Studies in Risk Taking and Risk Generation. Hemel Hempstead, England: George Allen and Unwin.
Sjöberg, L. (1994). Perceived Risk vs Demand for Risk Reduction (Rhizikon: Risk Research Report No. 18). Center for Risk Research, Stockholm School of Economics.
Sjöberg, L. (1996). A discussion of the limitations of the psychometric and cultural theory approaches to risk perception. Radiation Protection Dosimetry 68: 219–225.
Sjöberg, L. (1997). Explaining risk perception: An empirical and quantitative evaluation of cultural theory. Risk Decision and Policy 2: 113–130.
Sjöberg, L. (1998a). Risk perception: experts and the public. European Psychologist 3: 1–13.
Sjöberg, L. (1998b). Why do people demand risk reduction? In: S. Lydersen, G. K. Hansen & H. A. Sandtorv (eds), ESREL-98: Safety and Reliability. Trondheim: A. A. Balkema, pp. 751–758.
Sjöberg, L. (1998c). Will and success: individual and national. In: L. Sjöberg, R. Bagozzi & D. Ingvar (eds), Will and Economic Behavior. Stockholm: EFI, pp. 85–119.
Sjöberg, L. (1998d). Worry and risk perception. Risk Analysis 18: 85–93.

Sjöberg, L. (1999a). Consequences of perceived risk: Demand for mitigation. Journal of Risk Research 2: 129–149.
Sjöberg, L. (1999b). Policy implications of risk perception research: A case of the emperor's new clothes? In: P. Hubert & C. Mays (eds), Proceedings of the 1998 Annual Conference. Risk Analysis: Opening the Process. Paris: IPSN, pp. 77–86.
Sjöberg, L. (1999c). Risk perception in Western Europe. Ambio 28: 543–549.
Sjöberg, L. (2000). Factors in risk perception. Risk Analysis 20: 1–11.
Sjöberg, L. & Drottz, B.-M. (1987). Psychological reactions to cancer risks after the Chernobyl accident. Medical Oncology and Tumor Pharmacotherapy 4: 259–271.
Sjöberg, L., Kolarova, D., Rucai, A.-A. & Bernström, M.-L. (in press). Risk perception and media risk reports in Bulgaria and Romania. In: O. Renn & B. Rohrmann (eds), Cross-Cultural Risk Perception.
Stevens, S. S. (1957). On the psychophysical law. Psychological Review 64: 153–181.
Suchman, L. & Jordan, B. (1990). Interactional troubles in face-to-face interviews. Journal of the American Statistical Association 85: 232–241.
Thurstone, L. L. (1928). Attitudes can be measured. American Journal of Sociology 33: 529–554.
Tversky, A. & Kahneman, D. (1974). Judgment under uncertainty: heuristics and biases. Science 185: 1124–1131.
Wentland, E. J. & Smith, K. W. (1993). Survey Responses. An Evaluation of Their Validity. San Diego: Academic Press.
Windschuttle, K. (1996). The Killing of History. How a Discipline is Being Murdered by Literary Critics and Social Theorists. Paddington, NSW: Macleay.
