SAMPLING ERROR AND SELECTING INTERCODER RELIABILITY SAMPLES FOR NOMINAL CONTENT CATEGORIES

By Stephen Lacy and Daniel Riffe
This study views intercoder reliability as a sampling problem. It develops a formula for generating the sample sizes needed to have valid reliability estimates, and it suggests steps for reporting reliability. The resulting sample sizes will permit a known degree of confidence that the agreement in a sample of items is representative of the pattern that would occur if all content items were coded by all coders.

Every researcher who conducts a content analysis faces the same question: How large a sample of content units should be used to assess the level of reliability? To an extent, sample size depends on the number of content units in the population and the homogeneity of the population with respect to variable coding complexity. Content can be categorized easily for some variables, but not for others. How does a researcher ensure that variations in degree of difficulty are included in the reliability assessment? As in most applications involving representativeness, the answer is probability sampling, assuring that each unit in the reliability check is selected randomly. Calculating sampling error for reliability tests is possible with probability sampling, but few content analyses address this point.

This study views intercoder reliability as a sampling problem, which requires clarification of the term "population." Content analysis typically refers to a study's "population" as all potentially codable content from which a sample is drawn and analyzed. However, this sample itself becomes a "population" of content units from which a sample of test units is randomly drawn to check reliability. This article suggests content samples need reliability estimates that represent that population. The resulting sample sizes will permit a known degree of confidence that the agreement in a sample of test units is representative of the pattern that would occur if all study units were coded by all coders.

Stephen Lacy is a professor in the Michigan State University School of Journalism, and Daniel Riffe is a professor in the E.W. Scripps School of Journalism at Ohio University. The authors thank Fred Fico for his comments and suggestions.


Background

Reproducibility reliability is the extent to which coding decisions can be replicated by different researchers. In principle, the use of multiple independent coders applying the same rules in the same way assures that categorized content does not represent the bias of one coder. Research methods texts discuss reliability in terms of measurement error resulting from problems in coding instructions, failure of coders to achieve a common frame of reference, and coder mistakes. Few texts or studies address whether the content units tested represent the population of items studied. Often, reliability samples have been selected haphazardly or based on convenience (e.g., the first 50 items to be coded might be used). Most texts do not discuss reliability in the context of probability sampling and the resulting sampling error.

Research texts vary in their approach to sampling for reliability tests. Kaid and Wadsworth suggest that "levels of reliability should be assessed initially on a subsample of the total sample to be analyzed before proceeding with the actual coding." How large a subsample? "When a very large sample is involved, a subsample of 5-7 percent of the total is probably sufficient for assessing reliability." Wimmer and Dominick urge analysts to conduct a pilot study on a sample of the "content universe" and, assuming satisfactory results, then to code the main body of data. Then a subsample, "probably between 10% and 25%," should be reanalyzed by independent coders to calculate overall intercoder reliability. Weber's only pertinent recommendation is that "The best test of the clarity of category definitions is to code a small sample of the text." Stempel concludes that reliability estimates "should be based on several samples of content from the material in the study" and that a "minimum standard would be the selection of three passages to be coded by all coders." Krippendorf argues that probability sampling to get a representative sample is not necessary. Singletary has noted that reliability checks introduce sampling error when probability samples are used.

Yet early inquiries into reliability testing did address probability sampling. An early article by Janis, Fadner, and Janowitz comparing the reliability of different coding schemes provided reliability coefficients with confidence intervals. Scott's article introducing his pi included an equation accounting for sampling error, though that component was dropped from subsequent references to pi in statistics and content analysis texts. Cohen discussed sampling error while introducing kappa, but the article concentrated on measurement error due to chance agreement. Schutz dealt with measurement error and sample size. He explored the impact of "chance agreement" on reliability measures: i.e., some coder agreements could occur by chance, though the existence of coding criteria reduces the influence chance could have. Schutz offered a formula that enabled a researcher to set a minimal acceptable level of reliability and then compute the level that must be achieved in a reliability test so that, even if chance agreement could be eliminated, the "remainder" level of agreement would exceed the acceptable level. For example, if the minimal acceptable level of agreement is 80%, the researcher might need to achieve a level as high as 83% in the reliability test in order to control for chance agreement. Schutz incorporated sampling error into his formula.

Sampling Error and Estimating Sample Size: A Formula

The goal of the following analysis is to generate a formula for estimating simple random sample sizes for reliability tests. The formula can be used to generate samples with confidence intervals that tell researchers whether the minimal acceptable reliability figure has been achieved. The formula allows the researcher to be certain that the observed sample reliability level is high enough: if the reliability coefficient must equal or exceed .80 to be acceptable, the sample used must have a confidence interval that does not dip below .80. If, in a given test, the confidence interval does dip below .80, the researcher cannot conclude that the "true" reliability of the population equals or exceeds the minimal acceptable level.

For simplicity, this analysis uses "simple agreement" (total agreements divided by total decisions) with a dichotomous decision (the coders either agree or disagree).

The reason for a one-tailed confidence interval is illustrated in Figure 1. The minimal acceptable agreement level is 80%, and the sample level of agreement is 90%. The resulting area of concern is the gray area between 90% and 80%, which involves the negative side of the confidence interval. A researcher's conclusion of acceptable reliability is not affected by whether the population agreement falls 5% above the sample level on the positive side of the interval, because acceptance is based on a minimal standard.

FIGURE 1
Why the Reliability Confidence Interval Uses a One-Tailed Test
[Figure: a continuum for the level of agreement in coding decisions running from 0% to 100%, with the minimal acceptable level marked at 80% and the sample agreement at 90%; the relevant area for determining the acceptability of the reliability test lies between them, on the negative (-5%) side of the confidence interval.]

Survey researchers use the formula for the standard error of proportion to estimate the minimal sample size necessary to infer to the population at a given level of confidence. A similar procedure is used here. We start with the equation for the standard error of proportion and add the finite population correction (FPC). The FPC is used when the sample makes up 10% or more of the population. It reduces the standard error but is often ignored because it has little impact when a sample is a small proportion of the population. The resulting formula is:

SE = sqrt( PQ / (n - 1) ) x sqrt( (N - n) / (N - 1) )   (Equation 1)

But with the radical removed and the distributive property applied, the formula becomes:

n = [ (N - 1)(SE)^2 + PQN ] / [ (N - 1)(SE)^2 + PQ ]   (Equation 2)

where N = the population size (the number of content units in the study), P = the population level of agreement, Q = (1 - P), and n = the sample size for the reliability check, which represents the number of test units.
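As a check on the arithmetic, Equation 2 can be written out in a few lines of code. The following Python sketch is illustrative only and is not part of the original article; the function name and the decision to round the result up to a whole test unit are assumptions of the sketch.

```python
import math

def test_units_needed(N, P, SE):
    """Equation 2: solve for n, the number of reliability test units.

    N  -- number of content units in the study (the "population")
    P  -- assumed level of agreement if all study units were coded
    SE -- acceptable standard error (confidence interval / Z)
    """
    Q = 1.0 - P
    numerator = (N - 1) * SE**2 + P * Q * N
    denominator = (N - 1) * SE**2 + P * Q
    return math.ceil(numerator / denominator)  # round up to whole test units
```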

In order to solve for n, the researcher must follow five steps.

Step 1. The first step is to determine N, the number of content units being studied. It usually has been determined before reaching the point of checking the reliability of the instrument.

Step 2. The researcher must set a minimal level of intercoder reliability for the test units. Content analysis texts warn that an acceptable level of intercoder reliability should reflect the nature and difficulty of the categories and content. For example, a minimum level of 80% simple agreement is often used with new coding procedures, a level consistent with the minimal requirement recommendations of Krippendorf and the analysis of Schutz, but this level is lower than recommended by others.

Step 3. The level of agreement in coding all study units (P) must be estimated. This is the level of agreement among all coders if they coded every content unit in the study. This is the most difficult step because it involves estimating the unknown population reliability figure. Two approaches are possible. The first is to estimate P based on a pretest of the coding instrument and on previous research. The second is to assume a P that exceeds the minimal acceptable reliability figure by a certain level. The second approach creates the question: How many percentage points above the minimal reliability level should P be? For this analysis, it will be assumed that the population level should be set at 5 percentage points above the minimal acceptable level of agreement. Five percentage points is useful because it is consistent with a confidence interval of 5%. For example, if the minimal acceptable reliability figure is .80, then the assumed P would be .85. If the reliability figure in the test equals or exceeds .85, chances are 95 out of 100 that the population figure (for the content units in the study) equals or exceeds .80.

Step 4. The researcher must determine the acceptable level of probability for estimating the confidence interval. We assume most content analysts will use the same levels of probability for the sampling error in intercoder reliability checks as are used with most sampling error estimates, the 95% (p = .05) and 99% (p = .01) levels of probability.

Step 5. Once the acceptable probability level is determined, the formula for confidence intervals is used to calculate the standard error (SE):

Confidence interval = Z (SE)   (Equation 3)

Z is the standardized point on the normal curve that corresponds with the acceptable level of probability. Using the normal curve, we find that the one-tailed Z-score associated with .05 is 1.64.

Once the five steps have been taken, the resulting figures are plugged into Equation 2, which allows the researcher to solve for n, and the number of units needed for the reliability test is determined.
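The five steps can be chained together. The sketch below assumes, as suggested above, that P is set 5 percentage points above the minimal acceptable level and that the confidence interval equals that 5-point gap; the one-tailed Z values (1.64 for the 95% level, 2.33 for the 99% level) are the conventional normal-curve values. It reuses the test_units_needed function from the previous sketch.

```python
def reliability_sample_size(N, minimum_agreement, probability=0.95):
    """Walk Steps 1-5 and return the number of test units to sample.

    N                 -- Step 1: number of content units in the study
    minimum_agreement -- Step 2: minimal acceptable level (e.g., 0.85)
    probability       -- Step 4: 0.95 or 0.99
    """
    P = minimum_agreement + 0.05               # Step 3: assume P is 5 points higher
    Z = {0.95: 1.64, 0.99: 2.33}[probability]  # Step 4: one-tailed Z-score
    SE = 0.05 / Z                              # Step 5: Equation 3 solved for SE
    return test_units_needed(N, P, SE)         # Equation 2, from the sketch above
```

Because the article rounds SE to .03 before squaring in the simulation that follows, its published figures (e.g., 92 test units for 1,000 study units at an 85% minimum) run slightly larger than what this unrounded calculation yields.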

A Simulation

Assume an acceptable minimal level of agreement of 85% and a P of 90% in a study using 1,000 content units (e.g., newspaper stories). The desired level of certainty is the traditional .05 level. Recall that our formula for sample size begins with the standard error (Equation 1) and, solved for n, becomes Equation 2. First we solve for SE with Equation 3. Our example confidence interval is 5% and our desired level of probability is 95%, so .05 = 1.64(SE), and the resulting SE at the p = .05 confidence level is .05/1.64 = .03, which squared is .0009. PQ = .90(.10), or .09. Now we can plug in our numbers and determine how large a random sample we will need to achieve at minimum the standard of 85% reliability agreement. With 1,000 study units and an assumed true agreement level of 90%, Equation 2 looks like:

n = [ (999)(.0009) + .09(1000) ] / [ (999)(.0009) + .09 ] = 90.899 / .989 = 91.9

In other words, if we achieve at least 90% agreement in a simple random sample of 92 test units (rounded up from 91.9) taken from the 1,000 study units, chances are 95 out of 100 that 85% or better agreement would exist if all study units were coded by all coders and reliability measured.
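The simulation's figures can be reproduced directly; this small fragment (illustrative only) uses the article's rounded standard error of .03.

```python
N, P = 1000, 0.90            # 1,000 study units, assumed 90% agreement
SE = round(0.05 / 1.64, 2)   # confidence interval .05 at 95%, rounded to .03
Q = 1 - P                    # so PQ = .09

n = ((N - 1) * SE**2 + P * Q * N) / ((N - 1) * SE**2 + P * Q)   # Equation 2
print(round(n, 1))           # 91.9 -> sample 92 test units
```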

Table 1 solves Equation 2 for n with three hypothetical levels of P (85%, 90%, and 95%) and with numbers of study units equal to 100, 250, 500, 1,000, 5,000, and 10,000. The sample sizes are based on a confidence interval with 95% probability. The table demonstrates how higher P levels and smaller numbers of study units affect the number of test units needed. The higher the assumed percentage, the smaller the sample; however, the number of test units needed decreases much faster with higher levels of P than with the decline in the number of study units. Table 2 presents the numbers of test units needed for a 99% level of probability. Table 2 assumes the same agreement levels as Table 1. The figures for a given number of study units and agreement level are higher in Table 2 because they represent the increased number of test units needed to reach the higher level of probability.

TABLE 1
Number of Content Units Needed for Reliability Test, Based on Various Population Sizes, Three Assumed Levels of Population Intercoder Agreement, and a 95% Level of Probability

Population Size        Assumed Level of Agreement in Population (Study Units)
(Study Units)              85%        90%        95%
10,000                     141        100         54
5,000                      139         99         54
1,000                      125         92         52
500                        111         84         49
250                         91         72         45
100                         59         51         36

Note: The numbers are taken from the equation for the standard error of proportions and are adjusted with the finite population correction. The equation is SE = sqrt( PQ / (n-1) ) x sqrt( (N-n) / (N-1) ), where P = the percentage of agreement in the population, Q = (1-P), N = the population size, and n = the sample size. The standard error was used to find a sample size that would have sampling error equal to or less than 5% for the assumed population level of agreement.

The main problem in determining an appropriate sample of test units is estimating the level of P. This might produce an incentive to overestimate this level because a higher estimate would reduce the amount of work in the reliability test. Assuming a study unit agreement level 5 percentage points above the minimal level will control for this incentive, because the higher the assumed level, the higher will be the minimal acceptable level of reliability.

A problem can occur if the level of agreement in the test units generates a confidence interval that dips below the minimal acceptable level of reliability. For example, if the test units' reliability level equals .86 and the confidence interval is 5 percentage points, the lower bound (.86 minus .05) dips below a minimal acceptable level of .85. This indicates that the reliability figure for the population of study units might not exceed the acceptable level of .85. Under this condition, the researcher could randomly select more content units for the reliability check or accept a lower minimal level of agreement, say .80. If the first approach is used, the larger sample size can be determined by plugging the test units' reliability level (.86) into Equation 2 as P. Additional units could be randomly selected and added to the original test units to calculate a new reliability figure and confidence interval based on the larger sample.
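The check described in the preceding paragraph, whether an observed test-unit agreement keeps the one-tailed confidence interval above the minimal acceptable level, can be sketched as follows. The function and the illustrative figures (an observed agreement of .86 in 92 test units from 1,000 study units, against a minimum of .85) are assumptions of this sketch, not figures from the article.

```python
import math

def lower_bound(p_obs, n, N, Z=1.64):
    """One-tailed lower confidence bound for observed simple agreement.

    p_obs -- agreement observed among the test units
    n     -- number of test units coded by all coders
    N     -- number of content units in the study
    Z     -- one-tailed Z (1.64 for 95% probability, 2.33 for 99%)
    """
    se = math.sqrt(p_obs * (1 - p_obs) / (n - 1)) * math.sqrt((N - n) / (N - 1))
    return p_obs - Z * se

# .86 agreement in 92 test units from 1,000 study units does not clear .85,
# so the researcher would add test units or accept a lower minimal level.
print(lower_bound(0.86, 92, 1000) >= 0.85)   # False
```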

TABLE 2
Number of Content Units Needed for Reliability Test, Based on Various Population Sizes, Three Assumed Levels of Population Intercoder Agreement, and a 99% Level of Probability

Population Size        Assumed Level of Agreement in Population (Study Units)
(Study Units)              85%        90%        95%
10,000                     271        193        104
5,000                      263        190        103
1,000                      218        165         95
500                        179        142         87
250                        132        111         75
100                         74         67         52

Note: The numbers are taken from the equation for the standard error of proportions and are adjusted with the finite population correction. The equation is SE = sqrt( PQ / (n-1) ) x sqrt( (N-n) / (N-1) ), where P = the percentage of agreement in the population, Q = (1-P), N = the population size, and n = the sample size. The standard error was used to find a sample size that would have sampling error equal to or less than 5% for the assumed population level of agreement.

Limitations of the Analysis

This analysis may seem limited because it is (a) based on a dichotomous decision, (b) based on two coders, and (c) based on a simple agreement measure of reliability. However, the first two are not limitations. Sampling error is not affected by the number of coders, who introduce measurement error after the reliability sample is selected. Neither is using a dichotomous decision a problem. Equation 2 would easily fit nominal content with more than two categories. However, the impact of more complex coding schemes might affect the representativeness of a reliability sample if some of the categories occur infrequently. These infrequent categories have less likelihood of being in the sample, which means the full range of categories has not been tested. If this is the case, the researcher should randomly stratify the test units, as discussed in note 11.

The use of simple agreement in the reliability test is not a problem either. The representativeness of a sample of test units is not dependent on the test applied. At least three other measures of reliability, besides agreement among coding pairs, are available for nominal-level data. These are Scott's pi, Krippendorf's alpha, and Cohen's kappa. These three measures were developed to deal with measurement error due to chance and not with error introduced through sampling. Several discussions of the relative advantages and disadvantages of these measures are available. Equation 2 is limited, however, to nominal data because it is based on the standard error of proportions. A parallel analysis to this one for interval and ratio level categories could be developed using the standard error of means.

Using the Tables

Some beginning researchers might struggle with the task of making assumptions and solving the equations. If this is the case, the two tables can be useful for selecting a sample of test units to establish equivalence reliability. First, the researcher should start by selecting the level of probability appropriate for the study: if 95%, use Table 1; if 99%, use Table 2. Second, if the variables are straightforward counting measures, such as the source of newspaper stories, take the assumed agreement level among study units to be 90%. If the variables involve coding meanings of content, such as the political leaning of news stories, or if the study involves both types of variables, take the assumed agreement level among study units to be 85%, or select a larger number of test units. Third, find the population size in the tables that is closest to but greater than the number of study units being analyzed, and take the number of test units from the table.
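For readers who prefer the tables to the equations, the three-step lookup just described can be written out directly. In the sketch below, the rows are transcribed from Tables 1 and 2 above, and the function simply picks the smallest listed population size that is at least as large as the study; the names and structure are assumptions of this sketch.

```python
# population size -> test units needed at assumed agreement of 85%, 90%, 95%
TABLE_95 = {10000: (141, 100, 54), 5000: (139, 99, 54), 1000: (125, 92, 52),
            500: (111, 84, 49), 250: (91, 72, 45), 100: (59, 51, 36)}
TABLE_99 = {10000: (271, 193, 104), 5000: (263, 190, 103), 1000: (218, 165, 95),
            500: (179, 142, 87), 250: (132, 111, 75), 100: (74, 67, 52)}
COLUMN = {0.85: 0, 0.90: 1, 0.95: 2}

def test_units_from_table(study_units, assumed_agreement=0.90, probability=0.95):
    """First pick the table by probability level, then the column by assumed
    agreement, then the row with the closest population size >= the study."""
    table = TABLE_95 if probability == 0.95 else TABLE_99
    population = min(size for size in table if size >= study_units)
    return table[population][COLUMN[assumed_agreement]]

# Example: 425 study units, counting variables, 95% probability -> 84 test units.
print(test_units_from_table(425))   # 84
```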

For example, a researcher studying coverage of economic news in network newscasts has 425 stories from 40 newscasts selected from the previous year. The variables involve numbers of stories devoted to various types of economic news. Accepting a confidence level of 95% and an assumed agreement level of 90% among study units, the researcher would look down the 90% level of agreement column in Table 1 until she or he came to a population size of 500 (the closest population size that is greater than 425). The number of units needed for the reliability check equals 84.

An inevitable question from graduate students conducting their first content analysis is how many items to use in the intercoder reliability test. This article has attempted to answer this question and to suggest a procedure for estimating sampling error in reliability samples. The analysis in this article is based on simple random sampling for reliability tests. However, under some circumstances, other forms of probability sampling, such as stratified random sampling, might be preferable for selecting reliability test samples; for example, if certain categories of a variable make up a small proportion of the content units being studied, the researcher might oversample these categories.

When reporting reliability levels, confidence intervals should be reported with the reliability measures. Simple agreement confidence intervals can be calculated using the standard error of proportions. The confidence intervals for Scott's pi and Cohen's kappa can be calculated by referring to the formulas presented in the original articles for these coefficients. The role of selection bias in determining reliability coefficients seems to have gotten lost since earlier explorations of reliability. This bias can only be estimated through probability sampling. The study of content needs a more rigorous way of dealing with potential selection bias. Using probability samples and confidence intervals for reliability figures would help add rigor.

NOTES

1. Of course, for sampling error to have meaning, the sample must be a probability sample. The formula used here is the unbiased estimator for simple random samples; samples based on proportion or stratification will require adjustments available in many statistics books.

2. The term reliability is used here to refer to reproducibility. Reproducibility reliability, also called equivalence reliability, differs from stability and accuracy reliability. Stability concerns the same coder testing the reliability of the same content at two points in time. Accuracy reliability involves comparing coding results with some known standard. See Klaus Krippendorf, Content Analysis: An Introduction to Its Methodology (Beverly Hills, CA: Sage, 1980), 130-32.

3. Guido H. Stempel III, "Content Analysis," in Research Methods in Mass Communication, ed. Guido H. Stempel III and Bruce H. Westley (Englewood Cliffs, NJ: Prentice-Hall, 1981), 127.

4. Stephen Lacy and Daniel Riffe, "Sins of Omission and Commission in Mass Communication Quantitative Research," Journalism Quarterly 70 (spring 1993): 126-32.

5. Lynda Lee Kaid and Anne Johnston Wadsworth, "Content Analysis," in Measurement of Communication Behavior, ed. Philip Emmert and Larry L. Barker (NY: Longman, 1989), 208.

6. Kaid and Wadsworth, "Content Analysis," 208.

7. Roger D. Wimmer and Joseph R. Dominick, Mass Media Research: An Introduction, 3d ed. (Belmont, CA: Wadsworth, 1991), 173.

8. Wimmer and Dominick, Mass Media Research, 173.

9. Robert Philip Weber, Basic Content Analysis, 2d ed. (Newbury Park, CA: Sage University Paper Series on Quantitative Applications in the Social Sciences, 07-075), 23.

10. Stempel, "Content Analysis," 128.

11. Krippendorf argues that reliability samples "need not be representative of the population characteristics" but "must be representative of all distinctions made within the sample of data at hand" (emphasis in original). He suggests purposive or stratified sampling to ensure that "all categories of analysis, all decisions specified by various forms of instructions, are indeed represented in the reliability data regardless of how frequently they may occur in the actual data" (emphasis added). See Krippendorf, Content Analysis, 146. This procedure might create problems when content has infrequent categories that are difficult to identify. No one would argue that all variables need to be tested in a reliability check, but a large number of categories within a variable (e.g., a twenty-six-category scheme for coding the variable "news topic") could create logistical problems. Just generating a stratified reliability sample that would include sufficient numbers of units for each of these categories would be time consuming and difficult. It could require quota sampling, or selecting and checking content units for these infrequent categories until the proportion of the test units equals the estimated proportion of the infrequent categories. Some would question whether the logistical problems outweigh the potential impact of such a "micro" measure of reliability on the overall validity of the data. If a researcher suspects that some variable categories will occur infrequently in a simple random sample for a reliability check, disproportionate sampling of the less frequent categories would be useful. Frequency of categories could be estimated by a pretest, and different sampling rates could be used for categories that appear less frequently. Another way of handling infrequent categories would be to increase the reliability test sample size above the minimum recommended here. Larger samples will increase the probability of including infrequent categories among the test units. If the larger sample does not include sufficient numbers of the infrequent categories, additional units can be selected. When figuring overall agreement for reliability, the results for particular categories would have to be weighted to reflect the proportions in the study units. This will, of course, lead to the coding of additional units from categories that appear frequently, but the resulting reliability figure will be more representative of the content units being studied.

12. Michael Singletary, Mass Communication Research (NY: Longman, 1994), 297.

13. Irving L. Janis, Raymond H. Fadner, and Morris Janowitz, "The Reliability of a Content Analysis Technique," Public Opinion Quarterly 7 (summer 1943): 293-96.

14. William A. Scott, "Reliability of Content Analysis: The Case of Nominal Scale Coding," Public Opinion Quarterly 19 (fall 1955): 321-25.

15. J. Cohen, "Coefficient of Agreement for Nominal Scales," Educational and Psychological Measurement 20 (1960): 37-46.

16. William C. Schutz, "Reliability, Ambiguity and Content Analysis," Psychological Review 59 (1952): 119-29.

17. In effect, these chance agreements could lead content analysts to overestimate the extent of coder agreement due to the precision of the coding instrument. Schutz sought a way to control for the effect of those chance agreements. Strictly interpreted, however, a correction for chance assumes that agreement could occur through purely random coding; but of course it can't, and just because chance could affect reliability does not mean it does. Its effect can only be acknowledged and compensated for.

18. If the reliability is checked for the total decisions made in using the coding procedure, N equals the number of coding decisions that will be made by each coder, which is the number of units analyzed multiplied by the number of categories being used. If reliability is checked separately for each coding category in the content analysis, then N equals the total number of units selected for the content analysis. This analysis assumes that each variable is checked and reported separately, which means N equals the number of content units in the population.

19. This advice, while sound, adds a bothersome vagueness to content analysis. It is a bit like a professor's response that the length of an essay should be "as long as it takes." How long is a piece of string?

20. Krippendorf, Content Analysis, recommends generally using the .8 level for intercoder reliability, although he says some data with reliability figures as low as .67 could be reported for highly speculative conclusions. It is not clear whether Krippendorf's agreement level figures are for simple agreement among coders or for some other reliability measure. Schutz's ("Reliability, Ambiguity and Content Analysis") analysis starts with the .8 level. This analysis will use .80 to remain consistent with Schutz.

21. Wimmer and Dominick (Mass Media Research, 181) report a rule of thumb of at least .75 for intercoder reliability. See Singletary (Mass Communication Research, 296), who states that a Scott's pi of .7 is the consensus value for the statistic. Under some conditions this would be consistent with a simple agreement of .8, but not always.

22. Note that this is a one-tailed test. The acceptance of a coding instrument as reliable is not affected by whether the population reliability figure exceeds the reliability test figure on the positive side of the confidence interval. Content analysis researchers are concerned that the reliability figure exceeds a minimal level, which would be on the negative side of a confidence interval.

23. Three factors affect sampling error: the size of the sample, the homogeneity of the population, and the proportion of the population in the sample. The last factor has little impact unless the proportion is large. As Table 1 shows, population size reduces the number of test units noticeably when the number of study units falls under 1,000. See Guido H. Stempel III, "Statistical Designs for Content Analysis," in Research Methods in Mass Communication, ed. Stempel and Westley.

24. Multiple-category variables differ from dichotomous variables because multiple categories are not independent of each other. However, this lack of independence is a bias in coding and not in the selection of units for a reliability test, and the "odds" of agreement through randomness change once a coding criterion is introduced and used.

25. See C. A. Moser and G. Kalton, Survey Methods in Social Investigations, 2d ed. (NY: Basic Books, 1972).

26. Scott, "Reliability of Content Analysis."

27. Krippendorf, Content Analysis.

28. Cohen, "Coefficient of Agreement for Nominal Scales."

29. For examples, see Maria Adele Hughes and Dennis F. Garrett, "Intercoder Reliability Estimation Approaches in Marketing: A Generalization Theory Framework for Quantitative Data," Journal of Marketing Research 27 (May 1990): 185-195; and Richard H. Kolbe and Melissa S. Burnett, "Content-Analysis Research: An Examination of Applications with Directives for Improving Research Reliability and Objectivity," Journal of Consumer Research 18 (September 1991): 243-250.

30. Coding simple content, such as numbers of stories, typically yields higher levels of reliability because the cues for coding are more explicit; the population agreement will be higher than with coding schemes that deal with word meanings. A lower reliability figure is an acceptable trade-off for studying categories that concern meaning.

