
Design & Planning 8: Non-Response
Peter Lynn

Overview: Non-Response
- Motivation: why non-response is important
- Definitions, types, components
- Causes
- Objectives: what can we do about non-response?
- Studying non-response: how can we find out about the nature of non-response?
- Quality indicators: response rates and other indicators
- Minimising non-contacts
- Minimising refusals
- Minimising panel attrition

Why is Non-Response Important?


- Potential bias (and increased variance)
- Cost implications
- Image problem
- Almost all surveys suffer from it - even compulsory ones!
- Some aspects of NR are common across all (or many) types of surveys; others are quite specific to certain characteristics of a survey.
- In many countries, NR is perceived to have increased in recent years. This leads to worries about the effects of NR on time series.

Definitions of Non-Response
Non-response is the failure to obtain complete measurements on the (eligible) survey sample. [eligible = in-scope = in-population]

This incompleteness can be within units (item non-response) or across units (unit non-response). [unit = sample element = sample member = case]

- Unit non-response: a sample unit does not provide any of the data required by the survey.
- Item non-response: a sample unit participates, but data for some survey items are not available for analysis.

Response Pattern
We can define the survey response pattern by a matrix R = [r_jk], where r_jk = 1 if item j is observed for unit k, and r_jk = 0 otherwise. Possible response patterns include:

Unit (k)   y1k   y2k   y3k   ...   yqk   Pattern
1           1     1     1    ...    1    full response
2           1     0     1    ...    1    item non-response
3           1     0     0    ...    1    item non-response
4           0     0     0    ...    0    unit non-response

Note that, due to item non-response, the set of units available for analysis depends on the item or items required.
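The point above can be made concrete with a small sketch of the response-pattern matrix for the four-unit example. The matrix values follow the table; the function names are illustrative, not from the source.

```python
# Response-pattern matrix R: rows are units k = 1..4, columns are
# items y1, y2, y3, yq. An entry is 1 if the item is observed for
# the unit, 0 otherwise.
R = [
    [1, 1, 1, 1],  # unit 1: full response
    [1, 0, 1, 1],  # unit 2: item non-response (y2 missing)
    [1, 0, 0, 1],  # unit 3: item non-response (y2, y3 missing)
    [0, 0, 0, 0],  # unit 4: unit non-response
]

def units_for_item(R, j):
    """Units (0-based indices) available for an analysis of item j."""
    return [k for k, row in enumerate(R) if row[j] == 1]

def is_unit_nonresponse(row):
    """A unit non-respondent provides no data at all."""
    return not any(row)

# The analysable sample differs by item: y1 is observed for three
# units, but y3 for only two - item-level analyses use different bases.
avail_y1 = units_for_item(R, 0)
avail_y3 = units_for_item(R, 2)
```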

Non-Response Error in Context


Reminder: Total Survey Error framework. Errors of non-observation have three components:
- Coverage error: error caused by the omission of some population units from the sampling frame;
- Sampling error: error caused by the omission of some sampling frame units (usually most!) from the sample;
- Non-response error: error caused by the omission of some sample units from the data.

Errors of Non-Observation
[Diagram: nested regions showing the population, the sampling frame, and the responding and non-responding units, with numbered regions:]

1: Frame over-coverage (no error, if identified)
2: Frame under-coverage (coverage error)
3: Non-sampled units (sampling error)
4: Non-responding units (non-response error)
5: Responding units (observational error)

Non-Response Error
Survey response as a deterministic vs. probabilistic process.

Deterministic model (Groves 1989):

E(\bar{y}_R - \bar{y}_T) = (\bar{Y}_R - \bar{Y}_T) = \frac{N_{\bar{R}}}{N_T}(\bar{Y}_R - \bar{Y}_{\bar{R}})

where \bar{Y}_T = \frac{N_R \bar{Y}_R + N_{\bar{R}} \bar{Y}_{\bar{R}}}{N_T}, N_T = N_R + N_{\bar{R}}, and the subscript \bar{R} denotes non-respondents.

Probabilistic model (Bethlehem 2002):

E(\bar{y}_R - \bar{Y}_T) = \frac{\mathrm{Cov}(y_i, \rho_i)}{\bar{\rho}}

where \rho_i is the participation propensity of unit i.
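Bethlehem's probabilistic result can be checked numerically. The sketch below uses an invented population in which y and the response propensity are correlated, then compares the predicted bias Cov(y, ρ)/ρ̄ with the realised bias of the respondent mean in one simulated response process. All numbers are illustrative.

```python
# Monte Carlo sketch of Bethlehem (2002): expected bias of the
# respondent mean equals Cov(y, rho) / rho_bar.
import random

random.seed(42)
N = 100_000
pop = []
for _ in range(N):
    u = random.random()
    rho = 0.3 + 0.5 * u                 # propensity between 0.3 and 0.8
    y = 50 + 20 * u + random.gauss(0, 5)  # y correlated with rho
    pop.append((y, rho))

Y_bar = sum(y for y, _ in pop) / N
rho_bar = sum(r for _, r in pop) / N
cov = sum((y - Y_bar) * (r - rho_bar) for y, r in pop) / N
predicted_bias = cov / rho_bar

# One realisation of the probabilistic response process.
resp = [y for y, r in pop if random.random() < r]
realised_bias = sum(resp) / len(resp) - Y_bar
```

With a large population the realised bias sits close to the predicted value, illustrating that respondents over-represent high-propensity (here, high-y) units.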

Non-Response Error ctd.


Under either model, the realised error due to non-response is:

(\bar{y}_r - \bar{y}_t) = \frac{n_{\bar{R}}}{n_T}(\bar{y}_R - \bar{y}_{\bar{R}})

Thus, non-response error has two elements:
- the non-response rate;
- the difference between responding and non-responding units in terms of y.

The aim of the survey researcher should be to seek effective ways to minimise both components, noting that:
- the components are not independent;
- the second component is estimate-specific.
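The two-component identity above can be verified directly: the error of the respondent mean equals the non-response rate times the respondent/non-respondent gap in y. The counts and means below are illustrative only.

```python
# Check: (ybar_r - ybar_t) = (n_nr / n_t) * (ybar_r - ybar_nr).

def realised_nr_error(n_r, n_nr, ybar_r, ybar_nr):
    """Return the error computed directly and via the decomposition."""
    n_t = n_r + n_nr
    ybar_t = (n_r * ybar_r + n_nr * ybar_nr) / n_t  # true overall mean
    direct = ybar_r - ybar_t
    decomposed = (n_nr / n_t) * (ybar_r - ybar_nr)
    return direct, decomposed

# A 30% non-response rate and a gap of 10 in y give an error of 3,
# whichever way it is computed.
direct, decomposed = realised_nr_error(n_r=700, n_nr=300,
                                       ybar_r=52.0, ybar_nr=42.0)
```

Halving either component (the rate, or the gap) halves the error, which is why both prevention and targeting of dissimilar non-respondents matter.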

Components of Non-Response Error


Non-response errors can be divided into two components:
- Errors due to unit non-response
- Errors due to item non-response

Each of these can in turn be decomposed into sub-sources, for example:

Unit non-response due to:
- Non-contact
- Refusal to respond
- Inability to respond

Item non-response due to:
- Routing (instrument) error
- Routing (interviewer) error
- Refusal to respond
- Inability to respond
- etc.

Components of Non-Response Error ctd


Non-response is not a single phenomenon, but is the result of a combination of a number of phenomena. Partitioning the effects of non-response into components can be helpful

(see Groves 1989, p.134).

Reasons for Unit Non-Response


- Failure of the data collector to locate/identify the sample unit;
- Failure to make contact with the sample unit;
- Refusal of the sample unit to participate;
- Inability of the sample unit to participate (e.g. ill health, absence, etc.);
- Inability of the data collector and sample unit to communicate (e.g. language barriers);
- Accidental loss of the data/questionnaire.

Unit non-response is defined relative to the eligible sample. In other words, if the sampling frame contains ineligible units, these do not contribute towards response/non-response.

Unit non-response is often divided into three components: non-contact, inability to respond, and lack of co-operation (refusal). Response rates can usefully be divided into these (or other) components (see later).
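The division of the response rate into components can be sketched as follows. The counts are invented; the key points are that ineligible units are excluded from every denominator, and that the overall response rate factors into a contact rate times a co-operation rate among those contacted.

```python
# Decomposing a response rate into contact and co-operation components.
issued = 1000
ineligible = 100          # excluded from all rate denominators
noncontacts = 90
refusals = 180
other_nonresponse = 30    # e.g. ill health, language barriers
interviews = issued - ineligible - noncontacts - refusals - other_nonresponse

eligible = issued - ineligible
contact_rate = (eligible - noncontacts) / eligible
cooperation_rate = interviews / (eligible - noncontacts)  # among contacted
response_rate = interviews / eligible

# response_rate == contact_rate * cooperation_rate by construction,
# so improving either component improves the overall rate.
```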

Reasons for Item Non-Response


- Refusal to provide an answer
- Inability to provide an answer
- Other failure to answer (e.g. by accident)
- Provided answer being of inadequate quality (e.g. incomplete, implausible, failing an edit/consistency check, etc.)

Item non-response can be caused by:
- the action of the sample member (e.g. refusal to answer);
- the action of an interviewer (e.g. failure to ask a question that should have been asked, or failure to record the answer adequately);
- the survey design (e.g. a poor routing instruction).

In practice, these factors interact.

Unit Non-Response: Causal Factors/Constraints


- Field period
- Survey budget
- Allocation of resources
- Experience/training of researchers and interviewers
- Study population
- Survey task(s)

Longitudinal Surveys: Specific Features


Structure of non-response:
- Complete response
- Wave non-response
- Attrition

Additional causes:
- Tracking between waves (location)
- Experience of participation (co-operation)

Note: patterns of NR depend partly on survey policy and partly on field efforts (especially tracking).

Patterns of Response: 4-Wave Panel Survey


Policy 1: Issue all eligible cases at every wave. 16 possible patterns (every combination of response/non-response across the 4 waves).

Policy 2: Issue only wave 1 responding cases at each subsequent wave. 9 possible patterns (wave 1 non-response, plus every combination across waves 2-4 for wave 1 respondents).

Policy 3: At each wave, issue only cases responding to the previous wave. 5 possible patterns (monotone attrition: respond up to some wave, then drop out).

[The original slide shows each pattern as a grid of waves (rows 1-4) by pattern number.]
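The pattern counts for the three policies can be enumerated directly. A pattern is written here as a tuple of 1 (response) / 0 (non-response) per wave; waves that are never issued count as 0. The function names are illustrative.

```python
# Counting possible response patterns for a 4-wave panel under the
# three issuing policies described above.
from itertools import product

WAVES = 4

def patterns_policy1():
    # Issue all eligible cases at every wave: any 0/1 combination.
    return set(product([0, 1], repeat=WAVES))

def patterns_policy2():
    # Issue only wave-1 respondents subsequently: either all-zero
    # (wave 1 non-response) or wave 1 fixed at 1 with any later pattern.
    pats = {(0,) * WAVES}
    pats |= {(1,) + rest for rest in product([0, 1], repeat=WAVES - 1)}
    return pats

def patterns_policy3():
    # Issue only previous-wave respondents: monotone attrition only.
    return {(1,) * k + (0,) * (WAVES - k) for k in range(WAVES + 1)}
```

The three sets are nested: every monotone pattern is achievable under policy 2, and every policy-2 pattern under policy 1.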

Example: England and Wales Youth Cohort Study


Panel with 3 waves; implements policy 1 (issue all eligible cases at every wave). 8 possible response patterns:

Pattern:        1      2      3     4     5     6     7      8
No. of cases  8,396  2,555  2,660   900   386   575   352  2,855
% of cases     44.9   13.7   14.2   4.8   2.1   3.1   1.9   15.3

[The original slide shows which waves each pattern covers.]

(From Lynn, Purdon, Hedges and McAleese, 1994)

Flow Sampling: Specific Features


- If flow (within PSUs) is uneven, epsem sampling creates variable workloads.
- Especially at times of peak flow, there may well be non-response because no interviewer is available. This source of non-response is unique to flow sampling.
- The rate of flow is typically correlated with survey measures, so this type of non-response will introduce bias.
- The choice between flow sampling designs therefore relates not only to sampling variance, but also to non-response (bias).

Survey Types and Modes


- Face-to-face surveys typically yield higher response rates than telephone surveys.
- Response for mail and web surveys is usually much lower than for other modes of data collection.
- Differences are mainly due to the level of respondent motivation possible in different modes.
- In mixed-mode strategies: motivation of respondents vs. time and costs.
- Interview surveys: the role of the interviewer is crucial.
- Telephone surveys: scope for interviewer influence is more limited.
- Mail and web surveys: the only communication is via document design and content (questionnaire, advance or covering letter, reminders).

What Can We Do?


Reduce non-response error through good design and implementation practices:
- requires attention to each possible cause of non-response
- and to the expected effect of each cause on non-response error
- likely to imply a variety of strategies

Adjust for the effects of non-response at the analysis stage:
- requires estimation of non-response error
- and appropriate adjustment techniques

Studying Non-Response
Designs for estimating characteristics of non-respondents:
- Special studies of non-respondents
- Using information on the sampling frame
- Asking others about non-respondents, or having the interviewer provide information about them (example on next page)
- Comparison of respondent characteristics by call number
- Comparison of respondent characteristics to census or other external information
- Studying persons who drop out of a panel survey after an initial interview

ALL RESIDENTIAL ADDRESSES (CONTACTS AND NON-CONTACTS, INCLUDING VACANTS)

17. Does the address have an entryphone? Yes / No

18. Which of the following are visible at the sampled address? CODE ALL THAT APPLY:
    Burglar alarm / Security gate over front door / Bars or grilles on any windows / Other security device(s) / Estate or block security lodge or guards / None of these

INTERVIEWER ASSESSMENTS:

19. Are the houses/flats in this immediate area in a good or bad physical state?
    Mainly good / Mainly fair / Mainly bad / Mainly very bad

20. Is the sampled house/flat in a better or worse condition outside than the others in this area?
    Better / Worse / About the same / Does not apply

21. Do you know or think that the occupants are probably:
    White / Black / Asian / Other: __________ / Don't know

22a. SAMPLED DWELLING IS (IF NO DWELLING SELECTED, CODE FOR ADDRESS):
    Whole house: detached / semi-detached / mid-terrace / end terrace
    Maisonette
    Flat: purpose-built / converted
    Rooms, bedsitter
    Unable to code

IF FLAT ETC. (CODES 5-7 AT a.), ANSWER b-e. OTHERS: END.

b. TYPE OF FLAT ETC.: Self-contained / Not self-contained / Don't know
c. BUILDING HAS: Fewer than 5 floors / 5 floors or more / Unable to code
d. FLOOR LEVEL OF MAIN ACCOMMODATION: Basement/semi-basement / Ground floor/street level / First floor / 2nd/3rd floor / 4th-9th floor / 10th floor or higher
e. BUILDING HAS: Common entrance, lockable / Common entrance, not lockable / No common entrance

Example: Using Interviewer Observation Data


Data were collected by interviewers, using the form on the previous page, on the 1996 British Crime Survey. The following table uses the answers to question 22a.
Dwelling type           Response rate   Selected sample %   Responding sample %
House: detached             82.6%            19.5                21.0
House: semi-detached        79.6%            30.2                31.3
House: end terrace          79.2%             7.3                 7.6
House: mid-terrace          77.7%            20.4                20.6
Maisonette                  74.9%             1.7                 1.7
Flat: converted             72.3%             2.9                 2.7
Flat: purpose-built         70.3%            11.7                10.7
Rooms/bedsit                75.6%             0.3                 0.3
Unable to code              51.2%             6.0                 4.0
Base                                        13,117              10,059

Source: Lynn P (1996) Weighting for non-response, in Survey and Statistical Computing 1996, Chesham: Association for Statistical Computing

Example: Using Sample Frame Data


These data are from the Scottish School Leavers Survey, a postal survey for which the sampling frame is a list of pupils and their school exam results.
Highest Qualification       Response rate   Selected sample %   Responding sample %
5+ Higher grades                91.1%            18.0                21.4
3-4 Higher grades               85.1%            13.0                14.5
1-2 Higher grades               81.7%            15.0                16.1
5+ Standard grades 1-3          76.4%             8.1                 8.1
3-4 Standard grades 1-3         74.1%             9.1                 8.8
1-2 Standard grades 1-3         69.1%            14.5                13.1
Standard grades 4-7 only        62.6%            14.4                11.8
No qualifications               59.6%             7.8                 6.1
Base                                             4,542               3,469

Source: Lynn P (1996)
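Frame data like these are exactly what class-based non-response weighting needs. The sketch below (not from the source) computes inverse-response-rate weights for two of the qualification groups in the table: respondents in low-responding groups get larger weights, compensating for their under-representation.

```python
# Class-based non-response weights from group response rates
# (two groups taken from the SSLS table above).
response_rates = {
    "5+ Higher grades": 0.911,
    "No qualifications": 0.596,
}

def nr_weight(group):
    """Inverse-response-rate weight for a responding unit in `group`."""
    return 1.0 / response_rates[group]

# The unqualified group responds least, so its respondents carry the
# largest weight; weighted respondents then reproduce the selected
# sample's distribution across groups.
w_high = nr_weight("5+ Higher grades")
w_none = nr_weight("No qualifications")
```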

Example: Using Geographical Data


The following data are from the Italian Multipurpose Survey carried out by ISTAT. The figures presented are regression coefficients from a model to predict propensity to refuse. It can be seen that refusals are most likely in metropolitan areas and least likely in small (rural) municipalities.
Parameter             Coefficient     s.e.
Metropolitan area        2.341      (0.147)
Met. area ring           1.287      (0.168)
Large municipality       0.915      (0.114)
Small municipality       0          (reference)

Source: Baldazzi et al (2002) Interviewers' effect on refusal risk in the Italian Multipurpose Survey: a multilevel approach, paper presented at the annual conference of the Italian Statistical Society, May 2002.
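Coefficients like those in the table translate into propensities through the logistic link. The sketch below is illustrative only: the intercept is an assumption (the paper's baseline is not reproduced in the table), so only the *ordering* of areas, not the absolute probabilities, reflects the source.

```python
# Relative refusal propensities by area type via a logistic model.
import math

coef = {  # from the table above; small municipality = reference
    "Metropolitan area": 2.341,
    "Met. area ring": 1.287,
    "Large municipality": 0.915,
    "Small municipality": 0.0,
}
INTERCEPT = -3.0  # hypothetical baseline, for illustration only

def refusal_propensity(area):
    """Logistic transform of intercept + area coefficient."""
    eta = INTERCEPT + coef[area]
    return 1.0 / (1.0 + math.exp(-eta))

p_metro = refusal_propensity("Metropolitan area")
p_small = refusal_propensity("Small municipality")
# Whatever the intercept, a positive coefficient means a higher
# refusal propensity than the reference category.
```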

Example: Using Number of Calls


These data are from the 1987 British General Election Survey. This kind of analysis can be done for any survey, even if there is no useful information from the sampling frame or from interviewer observation (e.g. postal or telephone survey).
Social Class      Interviewed    Interviewed    Interviewed    Interviewed       Non-
                  at 1st call    at 2nd call    at 3rd call    after 4+ calls    respondents
                       %              %              %               %                %
Non-manual           41.3           44.3           49.7            50.4           Unknown
Manual               46.4           44.5           39.5            36.9           Unknown
Self-employed         6.2            6.3            6.1             7.9           Unknown
Unclassifiable        6.0            4.9            4.7             4.8           Unknown
Base                  726          1,024            722           1,354            1,637

Source: Lynn P (1996)

Example: Comparing Non-Contacts, Refusals and Easy-to-get Households


Data are from the 1996 Health Survey for England. Lynn et al (2002) analysed call-record data and classified respondents as easy or difficult to contact, and as willing or reluctant. It can be seen, for example, that persons who were difficult to contact were much younger than average and much more likely to be employed; but reluctant persons (potential refusals) were slightly less likely than others to be employed.
Estimate                        Difficult to contact   Reluctant           Hard-to-get   Easy-to-get   All responding
                                (6+ calls)             (temporary refusal)               households    households
Male (%)                              46.7                 40.5                45.7          45.5           45.5
Age (mean)                            39.4                 46.5                40.7          47.9           46.7
Owner-occupier (%)                    66.8                 74.1                68.1          72.8           72.0
Employed (ILO definition) (%)         66.6                 47.7                63.3          50.9           53.0
White (%)                             92.0                 90.9                91.8          94.1           93.7

Source: Lynn P, Clarke P, Martin J and Sturgis P (2002) The effects of extended interviewer efforts on nonresponse bias, in Survey Nonresponse (eds R M Groves, D A Dillman, J L Eltinge and R J A Little), New York: Wiley.

Common Non-Response Patterns


1. Many (most) household surveys find response rates lower in urban areas.
2. Many self-completion surveys find response rates higher amongst those with more education.
3. Interview surveys often find higher contact rates amongst many-person households than amongst 1- or 2-person households.
4. Many household surveys find contact rates higher amongst persons aged 65+, but co-operation rates lower.
5. In most (but not all) European countries, response rates are usually higher for women than for men.
6. On most business surveys, response rates are higher amongst larger businesses.

But: these relationships may not be true for all countries, cultures and surveys.

Minimising Non-Contact
A conceptual model for contacting sample households (Groves & Couper 1998):
[Diagram: social environmental attributes, socio-demographic attributes, physical impediments and accessible at-home patterns feed the likelihood of contact, interacting with the number and timing of calls, which in turn reflect interviewer attributes.]

Contact likelihood is a function of three factors:
1) physical impediments that prevent visiting interviewers from alerting the household to their presence;
2) when household members are at home;
3) when, and how many times, the interviewer visits the household.

Reasons for Non-Contact


- Reduced accessibility of household respondents
- Lifestyles that lead to reduced time at home (patterns of work, and related patterns of shopping and entertainment)
- Security features or multi-unit buildings
- Other factors (Groves and Couper, 1998): rural/urban location, owner/renter status, crime rate, ethnic/cultural differences, structural disrepair, one-person households, households with no children under 5 or adults over 70
- Hidden refusals

Reasons for Non-Contact continued


Issues to do with interviewers:
- Interviewer attributes may affect the contact rate via the choice of timing and number of contacts.
- Number of call-backs: a minimum of 4, 5, 6 or 7 is commonly specified.
- Timing of call-backs:
  - When are interviewers available? Don't let interviewer preferences rule.
  - Ensure different times of day and days of the week, with at least one evening and one weekend call.
  - Weekday evenings are the best time to make contact.
  - Need to know when respondents are most likely to be agreeable.

Reasons for Non-Contact continued


Efficiency of call-backs:
- Adequacy of interviewer strategies
- Optimal call-back algorithms

But note that the optimal time for making contact will depend upon:
1. Contact mode (face-to-face or telephone; not relevant to post or web)
2. Survey population (e.g. businesses: day-time; employed persons: evenings and weekends; retired persons: any time)

Case Study: Interviewers and Non-Contact


The following conclusions are summarised from: Purdon S, Campanelli P and Sturgis P (1997) Studying interviewers' calling strategies as a way to improve response rates, Chapter 3 in Campanelli P, Sturgis P and Purdon S (1997) Can You Hear Me Knocking: An Investigation into the Impact of Interviewers on Survey Response Rates, London: NatCen.

Objectives of this study:
- When is the best time to call at addresses?
- When do interviewers actually call?
- What impact do interviewers' calling patterns have on non-contact rates and final response rates?
- To identify efficient calling strategies for personal-visit interviews.

Conclusions Regarding Interviewers and Contact


1. At least 7 calls have to be made before accepting a non-contact if the non-contact rate is to be reduced to around 4% (for the UK; similar findings for the USA, Scandinavia and the Netherlands).
2. As the number of calls to an address increases, the chance of finding somebody at home on the next call decreases.
3. The time and day of calling affect the likelihood of finding somebody at home. (Sunday and Monday evenings are the best times to call, followed by other weekday evenings.)
4. Calling on a weekday evening is almost always the optimal strategy, irrespective of the time of previous calls:
                                       Time of Second Call
Time of First Call              Weekend   Weekday morning    Weekday evening
                                          or afternoon
Weekend                          0.42          0.34               0.49
Weekday morning or afternoon     0.43          0.38               0.53
Weekday evening                  0.40          0.32               0.47
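The table above is effectively a decision rule: given when the first call was made, pick the second-call slot with the highest contact probability. A minimal sketch (slot names are mine, probabilities from the table):

```python
# Choosing the best second-call slot from the contact-probability table.
P_CONTACT = {
    # first call -> {second call -> P(contact at second call)}
    "weekend":         {"weekend": 0.42, "weekday_day": 0.34, "weekday_evening": 0.49},
    "weekday_day":     {"weekend": 0.43, "weekday_day": 0.38, "weekday_evening": 0.53},
    "weekday_evening": {"weekend": 0.40, "weekday_day": 0.32, "weekday_evening": 0.47},
}

def best_second_call(first_call):
    """Slot maximising contact probability, given the first call's timing."""
    options = P_CONTACT[first_call]
    return max(options, key=options.get)

# As the conclusions note, a weekday evening is optimal whatever the
# timing of the first call.
choices = {first: best_second_call(first) for first in P_CONTACT}
```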

5. The timing of first contact is related to the outcome of that call. For example:
- an immediate interview is most likely if contact is made on a weekday morning or afternoon;
- an appointment is most likely if contact is made on a weekday or Sunday evening.
However, there is no evidence that the time of day or day of week affects the likelihood of the final outcome of the case (i.e. certain call combinations do not lead to a higher refusal rate).
6. A variety of different calling strategies can be used by interviewers, as long as they increase the number of calls when a sub-optimal strategy is adopted. Four interviewer groups were created based on their percentage of weekday evening calls:
                                  Interviewer Group
Time of First Contact (%)       AA     AB     BA     BB
Weekend                         18     22     15     17
Weekday morning or afternoon    45     51     64     68
Weekday evening                 37     27     21     14

Number of calls:

Interviewer   % of calls made on    Average number of    Final non-contact
Group         weekday evenings      calls per address    rate achieved (%)
AA                   34                   2.06                 4.3
AB                   24                   2.20                 3.4
BA                   19                   2.31                 4.2
BB                   14                   2.36                 4.8

Response rates by interviewer group:

Interviewer Group    AA     AB     BA     BB
Response Rate       70%    71%    70%    71%

Minimising Non-Contacts on Telephone Surveys


Some kind of call-scheduling system is needed.

1. Timing of repeat calls. Studies of calling patterns on telephone surveys have generally found that:
- If the number is busy, the best time to try again is 10-30 minutes later; if that is not possible, another good time is the same time the next day.
- If there is no reply (business number), the best time to try again is the next day.
- If there is no reply (private number, daytime), the best time to try again is 2 to 6 hours later.
- If there is no reply (private number, evening), the best time to try again is the following evening.

2. Overall pattern of calls. The system should ensure that, by the end of the fieldwork period, all sample cases have been attempted a sufficient number of times, with an appropriate spread over times of day and days of the week (and weeks).
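The retry heuristics above can be sketched as a small scheduler. The function name, the 18:00 evening cut-off and the exact offsets within the stated ranges are assumptions for illustration.

```python
# A retry scheduler implementing the telephone call-back heuristics.
from datetime import datetime, timedelta

def next_attempt(outcome, number_type, last_call):
    """Suggest when to redial, following the rules of thumb above."""
    if outcome == "busy":
        return last_call + timedelta(minutes=20)      # 10-30 min later
    if outcome == "no_reply":
        if number_type == "business":
            return last_call + timedelta(days=1)      # next day
        if last_call.hour < 18:                       # private, daytime
            return last_call + timedelta(hours=4)     # 2-6 hours later
        return last_call + timedelta(days=1)          # private: evening
    raise ValueError(f"unknown outcome: {outcome}")

t0 = datetime(2024, 5, 13, 10, 0)   # a Monday morning, for illustration
t_busy = next_attempt("busy", "private", t0)
t_day = next_attempt("no_reply", "private", t0)
t_eve = next_attempt("no_reply", "private", datetime(2024, 5, 13, 19, 30))
```

A full scheduler would also enforce the overall pattern in point 2: a minimum number of attempts per case, spread across daytimes, evenings and weekends.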

Minimising Refusals: Conceptual Model


Factors influencing household survey participation (Groves & Couper, 1998):
[Diagram: the social environment, the survey design, the household and the interviewer all feed into the household-interviewer interaction, which produces the decision to co-operate or refuse.]

Refusals: Relevant Survey Design Features


Mode of the initial contact. This affects:
- the number of channels of communication between interviewer and respondent (Groves, 1978);
- the selection of persuasion strategies to employ, and the effectiveness of alternative strategies (Groves, Cialdini and Couper, 1992).

- Length of the interview being requested: a basic indicator of the response burden.
- Topic of the survey: helps to determine the respondent's level of interest in, and knowledge of, the survey.
- Use of respondent incentives: to motivate, and to invoke the norm of reciprocity.
- Design and content of advance letters / covering letters / questionnaires.

Refusals: Role of Interviewers


- Observable socio-demographic characteristics of the interviewer may affect the script evoked in the respondent's mind at first contact (Groves, Cialdini and Couper, 1992).
- No research has yet found strong links between stable interviewer personality characteristics and success in gaining co-operation. Reasons might be: interviewers are relatively homogeneous; tailoring; social skills; other adaptive behaviours (Groves & Couper, 1998; Sinibaldi et al, 2009).
- Morton-Williams (1993) argues that social skills can be taught, and offers an outline of such training.
- Those with greater interviewing experience tend to achieve higher rates of co-operation. It is still unclear whether this is a selection effect (less successful interviewers terminate their employment earlier), a training effect (due to the benefits of coping over time with diverse situations in recruiting respondents), or both (Groves and Couper, 1998).
- Interviewers who, prior to the survey, are confident about their ability to elicit co-operation tend to achieve higher co-operation rates (Lehtonen 1996; Groves and Couper, 1998).
- Tailoring of communication and tactics by interviewers increases the chances of barriers to participation being overcome (Groves, Cialdini and Couper, 1992).

Psychological Concept of Compliance


People often decide whether to act upon a requested activity on the basis of the attractiveness of inherent features of the activity itself (e.g. the interest value and personal relevance of the activity, as well as the cost in time, energy and resources required to perform it). Cialdini (1988) has argued that, in addition, other social or psychological factors play a powerful role in determining whether individuals will agree to perform the activity. Cialdini specifies six such compliance principles that people regularly use to decide when to yield to a request. These principles serve as heuristic rules for compliance and can be manipulated in the survey situation:

1. Reciprocation. People feel obliged to respond to positive behaviour received with positive behaviour in return (Cialdini et al. 1975; Regan 1971).
2. Commitment and consistency. Most people want to be consistent. After committing oneself to a position, one is more willing to comply with requests for behaviours consistent with that position. This is a likely explanation for the foot-in-the-door effect in surveys (Freedman and Fraser 1966; Groves and Couper 1998).

3. Social validation. One is more likely to comply with a request to the degree that one believes that similar others would comply (Groves, Cialdini and Couper 1992).
4. Liking. One should be more willing to comply with the requests of liked others. A variety of factors (e.g. similarity of attitude, background or dress, and praise) have been shown to increase the liking of strangers, and these cues may be used to guide the decision when evaluating the interviewer's request (Groves and Couper 1998).
5. Authority. People are more likely to comply with a request if it comes from a properly constituted authority: someone who is sanctioned by society to make such requests and to expect compliance (Groves and Couper 1998).
6. Scarcity. One should be more willing to comply with requests to secure opportunities that are scarce.

Consideration of all Survey Stages


Example: the survey participation process in the British Crime Survey. The British Crime Survey (BCS) is a survey on which there was a determined effort to minimise non-response bias and to find suitable and cost-effective ways to reduce it. It can be useful to identify the logical stages of the survey participation process; this was done for the BCS. It can be seen that there are many stages at which either a non-contact or a refusal could occur (see the diagram on the next page, from Laiho and Lynn, 2000). This illustrates the heterogeneity of the survey non-response phenomenon. Survey design and implementation features should be used to address each of these possible non-response outcomes.

Survey participation process in BCS (Laiho & Lynn 2000)


1. Advance letter to the sampled address.
2. Contact attempt. Possible outcomes: office refusal; ineligible (insufficient address, not traced, not yet built, vacant/derelict/demolished, empty, business/industrial only, other); or contact made.
3. Dwelling unit (DU) identification. If multiple DUs: list in systematic order by flat number, then random selection of one dwelling unit.
4. Listing of all adult (16+) members of the DU. Possible non-response: information about the number of persons aged 16+ refused; or no contact with a responsible adult in the selected DU (after 5+ attempts).
5. Random selection of the respondent.
6. Contact attempt with the selected respondent. Possible non-response: no contact with the selected person after 5+ attempts.
7. Respondent contacted. Outcomes:
   - Completed interview.
   - Refusal: personal refusal by the respondent; broken appointment, no recontact; proxy refusal on behalf of the selected respondent.
   - Other reason for non-response: ill at home; away/in hospital during the survey period; senile/incapacitated; inadequate English; other.

Respondent Incentives
Used most commonly on:
- market surveys
- surveys where extensive burden is involved (e.g. surveys involving diary-keeping; longitudinal surveys)

- Ethical issues, especially regarding differential incentives for resistant sample members
- Effect depends on the amount
- Prepaid incentives are more effective than incentives conditional upon participation
- Money or money-equivalents are more effective than gifts or lottery entries
- Budget issues

Example: Cost-effectiveness of Incentives


Incentives can sometimes pay for themselves. This study showed that the average number of calls to a sample address is reduced significantly if incentives are used (as well as refusal rate being reduced).
                                       No incentive   £3 incentive   £5 incentive
Refusal rate                              28.1%          24.2%          23.8%
Response rate                             56.0%          60.4%          63.3%
Total visits to eligible addresses       13,269          6,640          6,379
Mean visits per eligible address           5.07           4.98           4.81
Mean visits per completed interview        9.05           8.25           7.59
Base (eligible addresses)                 2,617          1,333          1,326

Source: Lynn P, Thomson K and Brook L (1998) An experiment with incentives on the British Social Attitudes Survey, Survey Methods Newsletter 18:2, 12-14.
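The "pays for itself" claim can be sketched with a break-even calculation using the mean-visits figures above. The cost per interviewer visit is an assumption for illustration; the incentive amounts follow the table.

```python
# Does an incentive pay for itself in reduced fieldwork effort?
COST_PER_VISIT = 10.0  # assumed cost of one interviewer call (illustrative)

def cost_per_interview(mean_visits, incentive=0.0):
    """Fieldwork plus incentive cost for one completed interview."""
    return mean_visits * COST_PER_VISIT + incentive

c_none = cost_per_interview(9.05)                  # no incentive
c_low = cost_per_interview(8.25, incentive=3.0)    # smaller incentive
c_high = cost_per_interview(7.59, incentive=5.0)   # larger incentive

# Under this assumed visit cost, both incentive conditions cost less
# per completed interview than no incentive, despite the outlay -
# and they also achieve a higher response rate.
```

The conclusion is sensitive to the assumed visit cost: with very cheap calls, the incentive outlay would dominate.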

Re-issues
The idea is to re-issue some sample cases that have been returned as non-response, usually to a different (more experienced) interviewer. This includes refusal-conversion attempts and attempts to make contact with sample members previously classified as non-contacts.
- May add 2 to 5% to the response rate
- Typically around 1 in 4 re-issued cases are converted
- BHPS had the best conversion rate among respondents whose concern had to do with the survey rather than a personal issue
- Expensive (especially for face-to-face; less so for telephone)
- Time-consuming
- Consider alternative strategies (prevention)?
- The effect of conversion can be long-lasting for panel surveys
See: Burton, J., Laurie, H. & Lynn, P. (2006) 'The long-term effectiveness of refusal conversion procedures on longitudinal surveys'. Journal of the Royal Statistical Society Series A (Statistics in Society), 169(3): 459-478.

Postal Surveys: Reminder Mailings


- At least 2, if the response rate is to be respectable; 3 or 4 is common.
- Vary the form and wording of the reminders.
- Often, a postcard is used as the first reminder (sometimes as the final reminder).
- At least one of the reminders should include another copy of the questionnaire.
- An advance letter? A telephone reminder?
- The interval between mailings is important: too short, and you will be sending unnecessary reminders; too long, and sample members will have forgotten/lost the earlier mailing and perceive a lack of urgency.
- 10-12 working days is a common interval, but it should depend on: the nature of the population and task; the time needed to prepare each mailing; the class of postage.
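The working-day interval can be turned into a concrete mailing schedule. The sketch below (function names and dates are illustrative) spaces three reminders 11 working days apart, within the 10-12 day range suggested above.

```python
# Scheduling reminder mailings at a fixed working-day interval.
from datetime import date, timedelta

def add_working_days(start, n):
    """Date n working days (Mon-Fri) after `start`."""
    d = start
    while n > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:   # Monday=0 .. Friday=4
            n -= 1
    return d

def mailing_schedule(first_mailout, reminders, gap=11):
    """Dates of the initial mailout plus `reminders` follow-ups."""
    dates = [first_mailout]
    for _ in range(reminders):
        dates.append(add_working_days(dates[-1], gap))
    return dates

sched = mailing_schedule(date(2024, 9, 2), reminders=3)  # a Monday
```

In practice the gap might also be adjusted for the class of postage and for the time needed to identify non-responders before each reminder run.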

Effect of Postal Survey Reminder Mailings


SSLS 1994: Cumulative response
[Chart: cumulative responses (0 to 4,000) against working days after the initial mailout.]

On this survey, three reminder mailings were sent. Each can be seen to have boosted the response rate.
Source: Lynn P (1996). Quality and Error in Self-Completion Surveys, Survey Methods Centre Newsletter, 16(2), 4-9.
