Selections from http://www.pewresearch.org/about/
The typical Pew Research Center for the People & the Press national survey selects a random digit sample
of both landline and cell phone numbers in all 50 U.S. states and the District of Columbia. As the
proportion of Americans who rely solely or mostly on cell phones for their telephone service continues to
grow, sampling both landline and cell phone numbers helps to ensure that our surveys represent all adults
who have access to either (only about 2% of households in the U.S. do not have access to any phone). We
sample landline and cell phone numbers to yield a combined sample with approximately 40% of the
interviews conducted by landline and 60% by cell phone. This ratio is based on an analysis that attempts
to balance cost and fieldwork considerations as well as to improve the overall demographic composition of
the sample (in terms of age, race/ethnicity and education). This ratio also ensures an adequate number of interviews with adults who can be reached only by cell phone.
The design of the landline sample ensures representation of both listed and unlisted numbers (including
those not yet listed) by using random digit dialing. This method uses random generation of the last two
digits of telephone numbers selected on the basis of the area code, telephone exchange, and bank number.
A bank is defined as 100 contiguous telephone numbers, for example 800-555-1200 to 800-555-1299. The
telephone exchanges are selected to be proportionally stratified by county and by telephone exchange
within the county. That is, the number of telephone numbers randomly sampled from within a given
county is proportional to that county’s share of telephone numbers in the U.S. Only banks of telephone numbers that include at least one listed residential number are selected for the sample.
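The bank-based random-digit generation described above can be sketched in a few lines of Python. This is an illustration only: the bank prefix and sample size are hypothetical, not Pew's actual frame.

```python
import random

def rdd_sample_from_bank(bank_prefix, n, seed=None):
    """Draw n distinct numbers from a 100-number bank by generating
    the last two digits at random (list-assisted RDD style)."""
    rng = random.Random(seed)
    last_two = rng.sample(range(100), n)  # without replacement
    return [f"{bank_prefix}{d:02d}" for d in last_two]

# Hypothetical bank covering 800-555-1200 .. 800-555-1299:
numbers = rdd_sample_from_bank("800-555-12", 5, seed=1)
```

Sampling without replacement within the bank ensures no number is dialed twice from the same draw.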
The cell phone sample is drawn through systematic sampling from dedicated wireless banks of 100
contiguous numbers and shared service banks with no directory-listed landline numbers (to ensure that
the cell phone sample does not include banks that are also included in the landline sample). The sample is
designed to be representative both geographically and by large and small wireless carriers (also see cell phone surveys for more information).
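As a rough illustration of the systematic sampling used for the cell frame (the frame of wireless banks here is hypothetical), the method takes a random start and then every k-th unit from a sorted frame, which spreads the sample across carriers and regions:

```python
import random

def systematic_sample(frame, n, seed=None):
    """Take a random start in [0, k) and then every k-th unit,
    where k = len(frame) / n is the sampling interval."""
    rng = random.Random(seed)
    k = len(frame) / n
    start = rng.uniform(0, k)
    return [frame[int(start + i * k)] for i in range(n)]

# Hypothetical frame of dedicated wireless banks, sorted by carrier/region:
banks = [f"bank-{i:04d}" for i in range(1000)]
sample = systematic_sample(banks, 50, seed=7)
```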
Both the landline and cell samples are released for interviewing in replicates, which are small random
samples of each larger sample. Using replicates to control the release of telephone numbers ensures that
the complete call procedures are followed for all numbers dialed. The use of replicates also improves the
overall representativeness of the survey by helping to ensure that the regional distribution of numbers
called is appropriate.
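The release of numbers in replicates can be sketched as a shuffle-and-chunk of the full sample (the replicate size of 100 is arbitrary, chosen for illustration):

```python
import random

def make_replicates(sample, replicate_size, seed=None):
    """Shuffle the full sample, then split it into consecutive
    chunks; each chunk is a small random subsample (replicate)."""
    rng = random.Random(seed)
    shuffled = list(sample)
    rng.shuffle(shuffled)
    return [shuffled[i:i + replicate_size]
            for i in range(0, len(shuffled), replicate_size)]

numbers = list(range(1000))  # stand-in for sampled telephone numbers
replicates = make_replicates(numbers, 100, seed=3)
# Release replicates[0] for dialing first; release the next replicate
# only after the full call procedure has been applied to every number.
```

Because each replicate is itself a random subsample, interviewing can stop after any completed replicate and still yield a representative set of numbers.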
When interviewers reach someone on a landline phone, they randomly ask half the sample if they could
speak with “the youngest male, 18 years of age or older, who is now at home” and the other half of the
sample to speak with “the youngest female, 18 years of age or older, who is now at home.” If there is no
eligible person of the requested gender currently at home, interviewers ask to speak with the youngest
adult of the opposite gender who is now at home. This method of selecting respondents within each
household improves participation among young people, who are often more difficult to interview than older adults.
Unlike a landline phone, a cell phone is assumed in Pew Research polls to be a personal device.
Interviewers ask if the person who answers the cell phone is 18 years of age or older to determine if the
person is eligible to complete the survey (also see cell phone surveys for more information). This means
that, for those in the cell sample, no effort is made to give other household members a chance to be
interviewed. Although some people share cell phones, it is still uncertain whether the benefits of sampling among the users of a shared phone would outweigh the added complexity.
Sampling error results from collecting data from some, rather than all, members of the population. For
each of our surveys, we report a margin of sampling error for the total sample and usually for key
subgroups analyzed in the report (e.g., registered voters, Democrats, Republicans, etc.). For example, the
sampling error for a typical Pew Research Center for the People & the Press national survey of 1,500
completed interviews is plus or minus 2.9 percentage points at the 95% confidence level. This means
that in 95 out of every 100 samples of the same size and type, the results we obtain would vary by no more
than plus or minus 2.9 percentage points from the result we would get if we could interview every member
of the population. Thus, the chances are very high (95 out of 100) that any sample we draw will be within
3 points of the true population value. The sampling errors we report also take into account the effect of weighting.
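The 2.9-point figure can be approximated with the standard margin-of-error formula plus an inflation factor for weighting. The design-effect value of 1.3 below is an assumption chosen to illustrate that adjustment, not a number Pew reports:

```python
import math

def margin_of_error(n, p=0.5, z=1.96, deff=1.0):
    """95% margin of sampling error in percentage points.
    p=0.5 is the most conservative proportion; deff > 1 inflates
    the error to reflect the variance added by weighting."""
    return 100 * z * math.sqrt(deff * p * (1 - p) / n)

simple = margin_of_error(1500)              # about 2.5 points, unweighted
weighted = margin_of_error(1500, deff=1.3)  # about 2.9 points with assumed deff
```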
At least 7 attempts are made to complete an interview at every sampled telephone number. The calls are
staggered over times of day and days of the week (including at least one daytime call) to maximize the
chances of making contact with a potential respondent. Interviewing is also spread as evenly as possible
across the field period. An effort is made to recontact most interview breakoffs and refusals to attempt to complete the interview.
Response rates for Pew Research polls typically range from 5% to 15%; these response rates are
comparable to those for other major opinion polls. The response rate is the percentage of known or
assumed residential households for which a completed interview was obtained. The response rate we
report is computed using the American Association for Public Opinion Research’s (AAPOR) Response
Rate 3 (RR3) method (for a full discussion of response rates, see AAPOR’s Standard Definitions).
Fortunately, low response rates are not necessarily an indication of nonresponse bias, as we discuss in the section on declining response rates.
In addition to the response rate, we sometimes report the contact rate, cooperation rate, or the
completion rate for a survey. The contact rate is the proportion of working numbers where a request for
an interview was made. The cooperation rate is the proportion of contacted numbers where someone gave
initial consent to be interviewed. The completion rate is the proportion of initially cooperating and eligible respondents who go on to complete the interview.
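These outcome rates can be sketched as follows. This is a simplified illustration rather than AAPOR's full Standard Definitions formulas; the disposition counts and the eligibility factor `e` are hypothetical:

```python
def outcome_rates(interviews, refusals, noncontacts, unknown, e=0.5):
    """Simplified survey outcome rates.  `e` is the estimated share of
    unknown-eligibility numbers that are actually eligible households;
    AAPOR's RR3 requires such an estimate."""
    eligible = interviews + refusals + noncontacts + e * unknown
    contacted = interviews + refusals  # numbers where a request was made
    return {
        "response": interviews / eligible,
        "contact": contacted / eligible,
        "cooperation": interviews / contacted,
    }

# Hypothetical call dispositions:
rates = outcome_rates(interviews=1500, refusals=4000,
                      noncontacts=6000, unknown=9000, e=0.4)
```

With these made-up counts the response rate comes out near 10%, inside the 5% to 15% range quoted above.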
Data weighting
Nonresponse in telephone interview surveys can produce biases in survey-derived estimates. Survey
participation tends to vary for different subgroups of the population, and these subgroups are likely to
also vary on questions of substantive interest. To compensate for these known biases, the sample data are weighted in the analysis.
The landline sample is first weighted by household size to account for the fact that people in larger
households have a lower probability of being selected. In addition, the combined landline and cell phone
sample is weighted to account for the fact that respondents with both a landline and cell phone have a greater probability of being included in the combined sample.
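A simplified sketch of these probability-of-selection adjustments, assuming one adult is interviewed per landline household (real base weights involve additional terms, such as the number of voice lines):

```python
def base_weight(adults_in_household, has_landline, has_cell):
    """Base weight proportional to 1 / probability of selection.
    One adult is picked per landline household, so that person's
    selection probability is 1/k in a k-adult household; adults
    reachable on both frames have roughly twice the chance of
    selection and are therefore downweighted."""
    household_factor = adults_in_household if has_landline else 1
    frames = int(has_landline) + int(has_cell)  # 1 or 2 chances
    return household_factor / frames

# A dual user in a 2-adult household: 2 / 2 = 1.0
w_dual = base_weight(2, has_landline=True, has_cell=True)
# A cell-only adult: 1 / 1 = 1.0
w_cell_only = base_weight(1, has_landline=False, has_cell=True)
```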
The sample is then weighted using population parameters from the U.S. Census Bureau for adults 18 years
of age or older. The population parameters used for weighting are: gender by age, gender by education, age by education, region, race and Hispanic origin (with a break for Hispanics by whether or not they were born in the U.S.), population density, and, among non-Hispanic whites, age, education and region.
and region. The parameters for these variables are from the Census Bureau’s 2012 American Community
Survey (excluding those in institutionalized group quarters), except for the parameter for population
density which is from the 2010 Census. These population parameters are compared with the sample
characteristics to construct the weights. In addition to the demographic parameters, the sample is also
weighted to match current patterns of telephone status (landline only, cell phone only or both landline
and cell phone), based on extrapolations from the 2013 National Health Interview Survey. The final
weights are derived using an iterative technique that simultaneously balances the distributions of all
weighting parameters. You can view the demographic and phone usage questions we use to compare the sample with these population parameters.
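The iterative technique described above is commonly known as raking (iterative proportional fitting). A minimal sketch, with hypothetical variables and targets:

```python
def rake(weights, respondents, targets, iterations=25):
    """Iterative proportional fitting: cycle over the weighting
    variables, each pass scaling weights so one marginal matches
    its population target, until all marginals agree at once."""
    w = list(weights)
    total = sum(w)
    for _ in range(iterations):
        for var, shares in targets.items():
            for category, share in shares.items():
                idx = [i for i, r in enumerate(respondents) if r[var] == category]
                current = sum(w[i] for i in idx)
                if current > 0:
                    factor = share * total / current
                    for i in idx:
                        w[i] *= factor
    return w

# Hypothetical sample that underrepresents women:
respondents = [{"gender": "f"}, {"gender": "f"}, {"gender": "m"},
               {"gender": "m"}, {"gender": "m"}]
weights = rake([1] * 5, respondents, {"gender": {"f": 0.52, "m": 0.48}})
```

With several weighting variables the marginals interact, which is why the procedure must cycle repeatedly until the adjustments stabilize.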
Weighting cannot eliminate every source of nonresponse bias. Nonetheless, properly conducted public
opinion polls have a good record of achieving unbiased samples. In particular, election polling – where a
comparison of the polls with the actual election results provides an opportunity to validate the survey
results – has been very accurate over the years (see the National Council on Public Polls’ Evaluations of Election Polls).
Each Pew Research survey report includes a “topline questionnaire” with all of the questions from that
survey with the exact question wording and response options as they were read to respondents. This
topline provides the results from the current survey for each question, as well as results from previous surveys in which the question was asked.
For discussion of the results in reports and commentaries, differences among groups are reported when
we have determined that the relationship is statistically significant and therefore is unlikely to occur by
chance. Statistical tests of significance take into account the effect of weighting. In addition, to support
any causal relationships discussed, more advanced multivariate statistical modeling techniques are often
employed to test whether these connections exist, although the results of these models may or may not be reported in detail.

Collecting survey data

People can be contacted and surveyed using several different modes: by an interviewer in person or on the telephone (either a
landline or cell phone), via the internet or by paper questionnaires (delivered in person or in the mail).
The choice of mode can affect who can be interviewed in the survey, the availability of an effective way to
sample people in the population, how people can be contacted and selected to be respondents, and who
responds to the survey. In addition, factors related to the mode, such as the presence of an interviewer
and whether information is communicated aurally or visually, can influence how people respond.
Surveyors are increasingly conducting mixed-mode surveys in which respondents are contacted and interviewed through more than one mode.
Survey response rates can vary for each mode and are affected by aspects of the survey design (e.g.,
number of calls/contacts, length of field period, use of incentives, survey length, etc.). In recent years
surveyors have been faced with declining response rates for most surveys, which we discuss in more detail in the section on declining response rates.
In addition to landline and cell phone surveys, the Pew Research Center for the People & the Press also
conducts web surveys and mixed-mode surveys, where people can be surveyed by more than one mode. We
discuss these types of surveys in the following sections and provide examples from polls that used each
method. In addition, some of our surveys involve reinterviewing people we have previously surveyed to see
if their attitudes or behaviors have changed. For example, in presidential election years we often recontact
voters first surveyed earlier in the fall and interview them again after the election to understand how
their opinions may have changed.
Questionnaire Design
Perhaps the most important part of the survey process is the creation of questions that accurately measure
the opinions, experiences and behaviors of the public. Accurate random sampling and high response rates
will be wasted if the information gathered is built on a shaky foundation of ambiguous or biased
questions. Creating good measures involves both writing good questions and organizing them to form the
questionnaire.
Questionnaire design is a multiple-stage process that requires attention to many details at once.
Designing the questionnaire is complicated because surveys can ask about topics in varying degrees of
detail, questions can be asked in different ways, and questions asked earlier in a survey may influence how
people respond to later questions. Researchers also are often interested in measuring change over time
and therefore must be attentive to how opinions or behaviors have been measured in prior surveys.
Surveyors may conduct pilot tests or focus groups in the early stages of questionnaire development in
order to better understand how people think about an issue or comprehend a question. Pretesting a
survey is an essential step in the questionnaire design process to evaluate how people respond to the overall questionnaire and to specific questions.
For many years, surveyors approached questionnaire design as an art, but substantial research over the
past thirty years has demonstrated that there is a lot of science involved in crafting a good survey
questionnaire.
Oversamples
For some surveys, it is important to ensure that there are enough members of a certain subgroup in the
population so that more reliable estimates can be reported for that group. To do this, we oversample
members of the subgroup by selecting more people from this group than would typically be done if
everyone in the sample had an equal chance of being selected. Because the margin of sampling error is
related to the size of the sample, increasing the sample size for a particular subgroup through the use of
oversampling allows for estimates to be made with a smaller margin of error. A survey that includes an
oversample weights the results so that members in the oversampled group are weighted to their actual
proportion in the population; this allows the overall survey results to represent both the national population and the oversampled subgroup.
For example, African Americans make up 13.6% of the total U.S. population, according to the U.S. Census.
A survey with a sample size of 1,000 would only include approximately 136 African Americans. The
margin of sampling error for African Americans then would be around 10.5 percentage points, resulting in
estimates that could fall within a 21-point range, which is often too imprecise for the detailed analyses
surveyors want to perform. In contrast, oversampling African Americans so that there are roughly 500
interviews completed with people in this group reduces the margin of sampling error to about 5.5
percentage points and improves the reliability of estimates that can be made. Unless a listed sample is
available or people can be selected from prior surveys, oversampling a particular group usually involves
incurring the additional costs associated with screening for eligible respondents.
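The margins of error quoted in this example follow from the standard formula, assuming a design effect of roughly 1.55 (a value inferred to reproduce the figures in the text, not one stated in it):

```python
import math

def moe_points(n, deff=1.55, z=1.96, p=0.5):
    # Margin of error in percentage points; deff=1.55 is an assumed
    # design effect chosen so the results match the figures quoted.
    return 100 * z * math.sqrt(deff * p * (1 - p) / n)

without_oversample = moe_points(136)  # roughly 10.5 points
with_oversample = moe_points(500)     # roughly 5.5 points
```

Because the error shrinks with the square root of the sample size, nearly quadrupling the subgroup interviews roughly halves the margin of error.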
An alternative to oversampling certain groups is to increase the overall sample size for the survey. This
option is especially desirable if there are multiple groups of interest that would need to be oversampled.
However, this approach often increases costs because the overall number of completed interviews needs to grow substantially to yield enough interviews with each group of interest.