
Mainstreet Research and the 2017 Calgary Election:
How it went wrong and how it can improve

Joseph Angolano, PhD
Vice President, Analytics
Mainstreet Research
Mistakes are the portals of discovery.
- James Joyce
Summary

The purpose of this report is to examine Mainstreet Research's methodology and research practices and identify causes of what went wrong with its Calgary polling. While there were some shortcomings that could have been avoided, there are some conditions unique to Calgary itself - we can call it the Calgary Effect - that contributed to Mainstreet's polling error.

If these conditions were to repeat themselves in other jurisdictions, Mainstreet's IVR methodology will likely generate errors again. While the Calgary Effect is likely an anomaly, steps should be taken to mitigate against these circumstances.

This report makes several recommendations that should be implemented immediately by Mainstreet Research. Specifically: random digit dialling should be incorporated into Mainstreet's IVR; longer questionnaires should be used when polling during elections to discern truthful voter intentions; polling should be conducted over multiple days and times; and AAPOR standards should be incorporated, including reporting weighted and unweighted frequencies. Furthermore, Mainstreet should release its raw data for its public polls every three months, or annually in the case of polls released during an election writ period. Mainstreet should also begin investigating online technologies to eventually transition away from IVR. While IVR is still very much part of the present of polling, it may not be part of its future.

I want to take this opportunity to acknowledge and thank Dr. Kimble Ainslie of Nordex Research, Professor Claire Durand of the Université de Montréal, and Professor Bryan Breguet of Langara College for their useful feedback, suggestions, comments, and criticisms. I'd also like to acknowledge Philippe Fournier and Professor Jeremy Rosenthal for helping to guide the direction of this report.

As a final note, although I am an employee of Mainstreet Research, no member of Mainstreet Research management or staff placed any restrictions on my report, or attempted to influence this report in any way. I was given full latitude and independence to complete this report.
Mainstreet Research has had a good track record of providing polls that closely mirrored election results, most notably being the only public opinion research company to predict a Liberal majority in the last federal election. In recent years, Mainstreet has provided externally valid polls1 for the British Columbia general election of May 2017 and the Nova Scotia general election of 2017, the Saskatchewan and Manitoba elections of 2016, and numerous byelections in Canada and the United States. Mainstreet also accurately predicted an NDP majority in the 2015 Alberta election, a Progressive Conservative majority in the 2012 Alberta election, a BC Liberal majority in the 2013 British Columbia election, and the outcome of the 2014 Toronto mayoral election.

However, Mainstreet Research's polling for the recent Calgary mayoral election on October 16th, 2017 did not come close to approximating the election result. Incumbent Naheed Nenshi won re-election as mayor of Calgary, beating Bill Smith by a margin of 7.63%. Mainstreet's final poll, released on October 13th, 2017, had Smith leading Nenshi by 13 percentage points among decided and leaning voters. While Mainstreet polling has been inaccurate at times - no public opinion research company has a 100% track record - this represents the largest error that Mainstreet has ever had.

The purpose of this report is to examine our methodology and research practices and identify causes of what went wrong with our Calgary polling. While there were some shortcomings in the lead-up to the election that could have been avoided, there are some caveats that have to be noted.

First, the Calgary election was an anomaly compared to other municipal elections given its high voter turnout. Moreover, Calgary is one of the youngest cities in Canada, which means that a spike in voter turnout among younger voters will have a greater impact on the results than it might in other jurisdictions. Second, our post-election polling found that a significant number of Nenshi voters did not answer previous attempts to contact them. Adding random digit dialling to Mainstreet's methodology would have reached more cell-phone-only respondents and respondents under the age of 35, which might have created a more reliable sample and a more accurate poll.

A key reason why Mainstreet polling has closely aligned with election results in the past is that respondents who complete its Interactive Voice Response (IVR) polls also vote in most circumstances, which means that Mainstreet has never had to apply a likely voter model to improve its accuracy. IVR has had trouble reaching younger voters without the use of quota sampling, especially in the United States, where cell phones cannot be called with IVR. Also, older voters are more likely to vote than younger voters in most elections. However, when elections do not conform to this pattern, as happened in Calgary, IVR polling will face the challenges described in this report. It is important to note that while we cannot say future polls will closely mirror election results, we can mitigate all these factors and still use IVR.
1 As a form of shorthand, I use the term "accurate prediction" as a stand-in for the traditional concept of external validity. That is to say, a survey of voting intentions should be considered externally valid when it mirrors election results. See Matthew Mendelsohn and Jason Brent, Understanding Polling Methodology, http://www.queensu.ca/cora/_files/mendlesohn_e.pdf (accessed December 6, 2017).
That being said, Mainstreet staff members took the company's past track record for granted and took what turned out to be anomalies in the Calgary polling at face value, especially in the second poll released on October 7th. However, this is a matter of hindsight being 20/20; it is very easy to second-guess things after the fact. Nonetheless, more steps should be taken to ensure that Mainstreet polls are accurately signalling voter intention and not measuring a fragment of likely voters, as happened in the Calgary election. Changes in research procedure, questionnaire scripting, and methodology must be implemented to attempt to prevent a massive polling failure like this from happening again. Specifically, I recommend that Mainstreet incorporate random digit dialling (RDD) in every Mainstreet poll going forward, along with other recommendations that will be more fully elaborated at the end of this report.

However, it must be said that there was no collusion between Mainstreet Research and Postmedia to release fabricated polls that showed Bill Smith leading Naheed Nenshi. Why anyone would seriously think that is the case is beyond my imagination. The three polls that showed Bill Smith leading Naheed Nenshi used the same method to collect samples and weight the data that was used in hundreds of Mainstreet polls in the past - polls that had both left-of-centre and right-of-centre candidates winning. I accept that Mainstreet's methodology can be improved; even before October 16th I would have said the same thing. No polling company worth its salt would seriously claim that its research methods are perfect. However, it is impossible to accept that Quito Maggi or any other Mainstreet Research employee would collude with one of its clients or a campaign to deliberately rig a poll to show a particular result and risk destroying the company's reputation. I find these criticisms to be wholly baseless and malicious. I have obtained the raw data from four dials - three of which were published during the election - from Mr. Maggi and have recommended that they be made public as part of this report.

THE THREE ELECTION DIALS


We first begin by looking at the three polls that were released in the lead-up to the election, along with a poll fielded on October 14th that was never publicly released. Each poll used IVR to call from a frame generated at random from Mainstreet's telephone directory, which is assembled from various sources. That directory is made up of roughly 70% landlines and 30% cell phones.

The first poll, released on September 30th, had a sample size of 1000 and was weighted to age and gender totals from the 2016 Canadian Census. This sampling and weighting procedure is what Mainstreet Research has followed for all of its polls limited to a single municipality or riding, except when otherwise noted in its methodology statement. The poll showed Bill Smith leading Naheed Nenshi by almost nine points (41.7% to 32.9%), which amounted to a 10.3% lead among decided voters. The full breakout is provided in Figure 1.
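The age-and-gender weighting procedure described above can be sketched as straightforward post-stratification: each respondent in an age-by-gender cell receives a weight equal to the cell's census share divided by its sample share. The cell counts and census shares below are invented for illustration; this is a sketch of the general technique, not Mainstreet's actual code or figures.

```python
# Illustrative post-stratification weighting by age and gender.
# All numbers below are hypothetical, chosen to show the mechanics.

def poststrat_weights(sample_counts, population_shares):
    """Weight per cell: population share divided by sample share."""
    n = sum(sample_counts.values())
    return {cell: population_shares[cell] / (count / n)
            for cell, count in sample_counts.items()}

sample_counts = {          # respondents per (age, gender) cell
    ("18-34", "F"): 60,  ("18-34", "M"): 55,
    ("35-54", "F"): 180, ("35-54", "M"): 170,
    ("55+", "F"): 280,   ("55+", "M"): 255,
}
population_shares = {      # hypothetical census proportions (sum to 1)
    ("18-34", "F"): 0.16, ("18-34", "M"): 0.17,
    ("35-54", "F"): 0.19, ("35-54", "M"): 0.19,
    ("55+", "F"): 0.15,   ("55+", "M"): 0.14,
}

weights = poststrat_weights(sample_counts, population_shares)
# Young cells get weights well above 1, older cells below 1,
# compensating for IVR's under-coverage of younger respondents.
```

Note that weighting can only rescale the young respondents who did answer; if those respondents are unrepresentative of young voters generally, the weighted estimate inherits their bias, which is the problem the rest of this report turns on.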

As far as I can tell, there were no methodological or weighting errors with this poll. There were criticisms of the September 30th poll once it was released, with much of the focus on the unexpected Smith lead among females. The team was not all that troubled by that figure, due to Mainstreet's past track record, along with the fact that past Mainstreet polling had produced somewhat unusual age and/or gender breakouts and ended up being correct on election day. Mainstreet had faced similar criticism over its numbers in the 18-34 age group during the British Columbia election, and Mainstreet's final poll was very close to the final election results. Finally, what the poll was showing did seem relatively plausible. After all, the proposition that Nenshi could have been losing among females was not that outlandish. For example, no one seriously thought that Donald Trump would win the state of Pennsylvania in the 2016 U.S. presidential election, and a poll showing this would have faced some criticism. However, Trump ended up winning Pennsylvania on election day. A poll cannot be dismissed just because it goes against conventional wisdom.

Figure 1: If the election for Mayor of Calgary were held today which candidate would you support?
(broken out by gender and age September 30th poll)

Since the team knew that it would be releasing at least two additional polls before the election, the decision was taken to alter its methodology in light of some of the criticisms made. To be clear, Mainstreet's plan throughout the election cycle was to publish at least two more polls through Postmedia.

It was suggested that perhaps the September 30th poll did not sample enough from the downtown wards, which is where Nenshi's support was thought to be concentrated. To address this concern, a sample of 100 from each of Calgary's fourteen wards was taken. This meant that the second poll had a sample of 1500 compared to 1000 in the September 30th poll. Moreover, the September 30th poll asked several issue-based questions, while the second poll asked only voter intention. Also, oversamples of some wards were taken and checked for consistency against the new sample. Another change was that the sample was weighted by age, gender, and ward. Also, an online sample was gathered and screened for age, for observational purposes only. The samples were gathered between October 4th and 5th, and the poll was released on October 7th without the online sample. The results of this poll are presented in Figure 2.

The October 7th poll showed a 16.6% lead for Smith. However, the criticism of this poll focused on the large lead that Smith had over Nenshi in the 18-34 age group (61% to 25%). This poll was the only one of the three that had a methodological error: Mainstreet did not ask respondents which ward they lived in. There was a good reason for this: knowing which municipal ward one lives in requires a level of political knowledge that not every voter has, and asking would have badly skewed the sample. Instead, the team matched the postal codes in its directory to the corresponding ward, and sampled from that directory. In doing so, some phone numbers were stripped because they could not be assigned to a specific ward.

As a result, the nearly 30% of Calgary phone numbers that did not have a postal code were not present in the frame, and these phone numbers likely contained a higher ratio of cellular phone numbers. While a Mainstreet sampling frame is typically made up of 30% cell phones, this frame was likely composed of 20% cell phones or less. Normally this would not affect things, but it would in a city like Calgary, where a third of the population is under 35 years old. The second poll likely oversampled Smith supporters in the 18-34 age cohort, and building the stratified ward model also created a poor frame. Another potential problem with moving to a stratified ward model was that turnout was very different in each ward, which led to the poll having some bias towards the wards that had less turnout, since each ward counted equally in the model.
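The equal-ward bias described above can be illustrated with a toy example. All numbers here are invented; the point is only the mechanism: when each ward contributes the same weight regardless of how many votes it actually casts, high-turnout wards are underrepresented in the topline.

```python
# Illustration (invented numbers): why weighting each ward equally
# biases a poll when turnout varies sharply by ward.

wards = [
    # (voters who turned out, share supporting Nenshi) - hypothetical
    (40_000, 0.60),   # high-turnout inner-city ward
    (10_000, 0.40),   # low-turnout suburban ward
]

# Equal-ward model: each ward contributes the same weight.
equal_ward = sum(share for _, share in wards) / len(wards)   # 0.50

# Turnout-weighted truth: wards count in proportion to votes cast.
total_votes = sum(turnout for turnout, _ in wards)
turnout_weighted = sum(t * s for t, s in wards) / total_votes  # about 0.56

# The equal-ward estimate understates Nenshi by roughly six points here,
# because the low-turnout ward is given four times its real influence.
```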

Figure 2: If the election for Mayor of Calgary were held today which candidate would you support?
(broken out by gender and age October 7th poll weighted by gender, age, and ward)

Mr. Maggi's statement addresses the question of why the online sample was not added. The answer is simply that it is not Mainstreet Research policy to publish any polls in Canada that contain an online sample, either by itself or blended with IVR samples. Mr. Maggi rightly cites the Fields Institute study commissioned by Mainstreet, which showed that blending online samples with IVR samples decreased accuracy; this study was the reason why the online survey results were not published. I will discuss survey modes later in this report, but I cannot find fault with the decision not to blend the online polling with the October 4-5th sample, even though it would have been the most accurate poll of the bunch2.
Mr. Maggi is on the record saying that the future of telephone surveys may be short, and Mainstreet has invested resources and time in finding ways to transition to other modes of polling. However, the studies that Mainstreet had commissioned recommend against blending online and telephone polling samples, so it had no reason to blend the samples.
2 Quito Maggi, Statement on Calgary Municipal Election Polling, Mainstreet Research, October 19, 2017, https://www.mainstreetresearch.ca/statement-calgary-polling/ (accessed on November 19, 2017).

The third poll also used the ward model, and the breakout tables are presented in Figure 3. This poll showed Smith leading by 11.7%, and by 13% among decided and leaning voters, where the margin was 52% to 39% in Smith's favour. It should be noted that Smith's lead was shrinking.

Figure 3: If the election for Mayor of Calgary were held today which candidate would you support?
(broken out by gender and age October 13th poll weighted by gender, age, and ward)

The team also decided to go into the field once again on October 14th, two days before the election, to see if any significant shifts had occurred in the last week. Again, the criticism led the team to conduct another oversample, this time of voters under 35. Mainstreet would usually soldier on, because it had faced similar criticisms in the past and in the end had gotten the final numbers correct. In this case, however, the criticism was so intense that some doubts emerged amongst the team. Specifically, some team members felt that the sample of respondents in the 18 to 34 age group was not representative of likely voters in that age cohort, so a separate sample of respondents aged 18-34 was collected. Quota sampling was not used. The sample size was 568. This time the results were weighted by age and gender. The results are in Figures 4 and 5.

Figure 4: If the election for Mayor of Calgary were held today which candidate would you support?
(broken out by gender and age October 14th poll weighted by gender, age - includes leaning voters)

Figure 5: If the election for Mayor of Calgary were held today which candidate would you support?
(broken out by gender and age October 14th poll weighted by gender, age - includes leaning voters
and excludes under 35 oversample)

Both tables above include leaning voters. Among all voters, Smith had his narrowest lead yet, 7.9%, which translates into 8.1% among decided and leaning voters. Without the oversample, Smith leads by 9.6% among decided and leaning voters. This is a remarkable slide for Smith between the second and the fourth poll, which is not surprising, as during this time the Smith campaign had suffered a week of negative news stories. Smith's lead of nearly 10% was an insignificant shift from the third poll, so a new report was not published using the weekend data. It was only after weighting in the youth oversample during the review that the more significant movement to just an 8.1% lead was revealed. Either number, regardless, would not have changed the prediction of a significant Smith victory.

To sum up, only the second and third of the four polls presented contained any real methodological error. I take issue with the use of a ward model, as it likely ended up stripping cellular phone respondents from the frame, leaving it with more landlines than a typical frame would have. However, the team did try to verify its results in the wake of criticisms and, in my opinion, went down a road that might have created more measurement error. That being said, if Mainstreet had not tweaked its methodology, it likely never would have found Nenshi winning, at least around the time of the release of the first and second polls.

THE POST DIAL


I commissioned Town Hall Strategies to conduct an IVR dial in the aftermath of the Calgary election. The data was collected between October 25th and October 28th. The methodology is almost the same as that of previous Mainstreet polls. The questionnaire is found at the end of the report. A total of 3090 respondents were surveyed, and the results were weighted by age and gender.

Interestingly enough, the overall results mirror the election result. These results are in Figure 6. Among decided respondents, 50.5% said that they voted for Nenshi while 40.7% said that they voted for Smith, which is close to the actual result of 51.4% for Nenshi and 43.7% for Smith. Three changes were made to the methodology. The first was the addition of an RDD sample. Secondly, respondents had the option of taking the survey in Punjabi, Mandarin, or Cantonese. Only 2.6% of the respondents took the survey in these languages; this sample is so small that no solid conclusions could be drawn from it. Finally, the dials occurred over three days, while Mainstreet's pre-election polls were conducted over two days for the last two published polls, and just a single day for the first poll. This change is in line with recommendations that IVR polling should occur over multiple days to avoid nonresponse bias3.
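For readers unfamiliar with the technique, random digit dialling can be sketched as generating phone numbers at random within known area-code and exchange blocks, so that unlisted numbers and cell phones absent from any directory can still be reached. The exchange list below is a placeholder invented for illustration, not a set of real working Calgary blocks.

```python
# Minimal sketch of random digit dialling (RDD). Numbers are drawn at
# random within area-code/exchange blocks, so the frame is not limited
# to numbers a directory has already seen. Exchanges here are invented.

import random

AREA_CODES = ["403", "587"]          # Calgary-area codes
EXCHANGES = ["555", "244", "301"]    # hypothetical exchange blocks

def rdd_sample(n, seed=0):
    """Generate n distinct 10-digit numbers within the listed blocks."""
    rng = random.Random(seed)        # seeded for reproducibility
    numbers = set()
    while len(numbers) < n:
        numbers.add(rng.choice(AREA_CODES)
                    + rng.choice(EXCHANGES)
                    + f"{rng.randrange(10_000):04d}")
    return sorted(numbers)

frame = rdd_sample(300)
# Unlike a directory frame, this can include cell-phone and unlisted
# numbers that no compiled list contains.
```

In practice, production RDD systems also screen out non-working and business blocks before dialling; that step is omitted here for brevity.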

Figure 6: Who did you vote for in the mayoral election?


(broken out by gender and age October 25th-28th poll weighted by gender and age)

There are differences between the RDD dial and the directory dial that should be noted (see Figure 7). The first is the difference in support for Nenshi between the two samples. Looking at the directory dial alone (n=2783), Nenshi leads Smith by a margin of nearly four points among decided respondents (47.4% to 43.5%), but in the RDD sample (n=307), he leads by nearly forty points (66.3% to 26.5%). It should be said that the RDD sample is much smaller than the directory sample.
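As a rough check on how these two sub-samples combine, a simple size-weighted blend of the figures just quoted lands between the directory and RDD numbers. This sketch deliberately ignores the age and gender post-stratification weights that were actually applied, so it does not exactly reproduce the published 50.5% / 40.7% topline; it only shows how the small RDD sample pulls the combined estimate toward Nenshi.

```python
# Size-weighted blend of the two sub-samples, using the decided-voter
# figures quoted above. Post-stratification weights are ignored here,
# so this is a rough check, not the published weighted topline.

subsamples = [
    # (n, Nenshi %, Smith %) among decided respondents
    (2783, 47.4, 43.5),   # directory dial
    (307,  66.3, 26.5),   # RDD dial
]

total_n = sum(n for n, _, _ in subsamples)
nenshi = sum(n * p for n, p, _ in subsamples) / total_n
smith = sum(n * p for n, _, p in subsamples) / total_n

print(round(nenshi, 1), round(smith, 1))  # prints: 49.3 41.8
```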

Figure 7: Who did you vote for in the mayoral election?


(October 25th-28th poll differences between directory and RDD dials)

The second difference is that respondents were not screened for age before taking the survey. This is a departure from recent Mainstreet methodology. I took this decision because I had noticed that a component of the fourth dial screened for respondents aged 18 to 34 by asking respondents for their age both at the beginning and the end of the survey. What was interesting was that nearly half of the respondents who said they were between the ages of 18 and 34 at the beginning of the survey said that they were not in that age bracket when asked the same question at the end. I was curious to see whether reverting to Mainstreet's pre-2015 methodology of asking respondents their age at the end of the survey made any difference. The third difference was in the introduction. All pre-election surveys mentioned in the introduction that the poll was being conducted by Mainstreet Research on behalf of Postmedia, while the post-election dial said the survey was conducted by Town Hall Strategies.
3 Nate Silver, The Uncanny Accuracy of Polling Averages, Part IV: Are the Polls Getting Worse?, New York Times, October 4, 2010, https://fivethirtyeight.blogs.nytimes.com/2010/10/04/the-uncanny-accuracy-of-polling-averages-part-iv-are-the-polls-getting-worse/ (accessed on November 20, 2017).

The RDD sample had far more respondents aged 18 to 34 than the directory sample: 4.7% of the directory sample said they were between the ages of 18 and 34, while 23.4% of the RDD sample said the same. Mainstreet polling has had a low number of respondents aged 18-34 in the past, which would be adjusted with post-stratification weighting, but this had not affected its accuracy. In fact, having fewer respondents aged 18-34 might have helped Mainstreet's accuracy, because it was probably excluding unlikely voters from the sample. This issue will be discussed further below, but we know that younger people usually do not come out and vote, especially in municipal elections. However, as Calgary's population heavily skews young, it is likely that Mainstreet's usual methodology would never have produced a sample representative of the voting population in Calgary. Moreover, when we look at the age breakouts, we can see that the specific type of younger voter who usually does not vote in municipal elections did vote in this election, and voted differently than traditional voters to boot. The large difference in Nenshi support between the directory and RDD samples is evidence of this.

There was something about the directory - which is used to build the frames for all of Mainstreet's Canadian polling - that did not allow Mainstreet's Smart IVR or Chimera IVR to capture enough of Nenshi's support to put it closer to the election results. One point of evidence that demonstrates this is the sheer number of Nenshi supporters in the post-election dial who did not respond to any of Mainstreet's attempts to call them. Mind you, several respondents who said they supported Smith were also never contacted during the municipal writ period. However, the number of attempted but uncontacted Nenshi supporters in the post-election poll is much higher than the number of Smith supporters who meet the same criteria: Mainstreet polling had attempted to contact 35.8% of the Nenshi supporters in the post-election poll, compared with 30.1% of Smith supporters.

Moreover, Nenshi supporters themselves tell us that they were contacted less often than Smith supporters (see Figure 9). We asked respondents how often they were contacted by a robocall, either by a campaign or a polling company. 54.5% of respondents said they were contacted fewer than five times. Among these respondents, 44.5% said they supported Nenshi, and 28.3% said they voted for Smith. Among the respondents who said they were contacted more than five times, Bill Smith leads Nenshi by seven points (40.7% to 33.6%). Including an RDD component in the Calgary dials certainly would have helped find Nenshi support, but this also shows that Mainstreet was attempting to reach Nenshi voters, who for whatever reason were opting not to participate.

Figure 8: How often did you receive a robocall during the election campaign, either from a campaign
or a polling company? (October 25th-28th poll differences between directory and RDD dials)

Figure 9: Mayoral support crosstabbed with how often the respondent received a robocall (October 25th-28th poll)

Also, the post-election poll shows evidence of respondents either switching their voting preferences or misrepresenting their true voter preferences to favour Nenshi. 24.1% of respondents (n=744) answered both the post-election survey and at least one of the four pre-election surveys or the poll Mainstreet conducted in Calgary on August 28th. 26.9% of these respondents gave a different response in the post-election survey than they did in the previous survey. A full breakout is provided in Figure 10.

Figure 10: Movement of respondents from Mainstreet pre-election polls to post-election poll

Among these respondents, Nenshi had a net gain of 20% while Smith had a net gain of 9%. This gives Nenshi a total net gain of 11% among the respondents who switched their vote. Let us assume that the discrepancies among these respondents between the pre-election and post-election dials are attributable entirely to vote switching and not to misrepresentation of preferences. If this subsample is representative of the whole population, this 11% switch would have meant a 2.8% gain for Nenshi overall on election day.


However, we also know that Nenshi's support grew by more than 2.8% as the election wore on. Figure 11 shows what Nenshi, Smith, Chabot, and the other candidates received in both the advance polls and on election day. Nenshi had a 4% lead in advance polls, which had grown to 8.6% by election day. This equals a 4.6% gain for Nenshi between the close of advance polls and election day.

The truth of the matter is that the Calgary mayoral election was always a competitive race, and contrary to some claims made after the election, Nenshi was likely losing at some point in the race. Although Mainstreet polling in the election was completely off the mark with its overall numbers, it did capture a downward trendline for Smith that closely mirrored his actual decline from the advance polls to election day. The third Mainstreet poll used data collected between October 10th and 11th; the advance polls closed on October 11th. That poll had Smith leading by 13% among decided and leaning voters. The unreleased fourth poll, which was in the field on October 14th, had Smith leading by 8.1% among decided and leaning voters (including the 18-34 oversample). This difference amounts to a decline of 4.9% for Smith. Thus there is only a 0.3% difference between the actual Smith decline from advance polls to election day and the downward trend depicted by Mainstreet polling.

If we look back at the second poll, which was in the field on October 3-4, we find Smith leading Nenshi by 19% among decided respondents. If the trend line detected by Mainstreet polling is accurate, and the Smith decline from advance polls to election day suggests that it is, by and large, then Smith was leading Nenshi around the time that advance polls opened, or at least was only slightly behind him. If we add 6% (the amount of decline from the second poll to the third) to Smith's advance poll total of 45.4%, we get a total of 51.4%. Accounting for the 0.3% difference between Mainstreet's downward trend line for Smith and his actual trend line, Smith would have had 51.1% support.
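This back-of-the-envelope arithmetic can be reproduced directly from the figures quoted in this report; the sketch below simply restates those numbers and the additions and subtractions performed on them, with no new data.

```python
# Reproducing the report's trend arithmetic from its own quoted figures.

poll2_lead = 19.0    # Smith lead, 2nd poll (decided), fielded Oct 3-4
poll3_lead = 13.0    # Smith lead, 3rd poll (decided + leaning), Oct 10-11
poll4_lead = 8.1     # Smith lead, unreleased 4th poll, Oct 14
advance_lead = 4.0   # Nenshi lead in the advance vote
eday_lead = 8.6      # Nenshi lead on election day

# Smith's decline as depicted by Mainstreet vs. the actual ballots:
mainstreet_decline = poll3_lead - poll4_lead   # 4.9 points
actual_decline = eday_lead - advance_lead      # 4.6 points
gap = mainstreet_decline - actual_decline      # 0.3 points

# Projecting Smith's support back to when advance polls opened:
smith_advance_total = 45.4                     # Smith's advance-vote share
smith_at_open = smith_advance_total + (poll2_lead - poll3_lead)  # 51.4
adjusted = smith_at_open - gap                 # about 51.1
```

The fragility of this projection is worth noting: it treats the second poll's decided-only lead and the later decided-plus-leaning leads as directly comparable, as the report itself does.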

Figure 11: Nenshi and Smith Advance Vote and Election Day Percentages

This is a far more plausible theory than the notion that Nenshi had always enjoyed a comfortable lead throughout the election. The only two other polls published during the writ period both showed large leads for Nenshi. The Asking Canadians poll was in the field between October 7th and October 10th and showed Nenshi leading by a margin of 20.8%. The CMES poll, on the other hand, had an unusually long fielding period of September 28th to October 12th4 and also had Nenshi leading by a margin of 20.8%. The polls had Smith at 36.1% and 37.8% respectively. Both polls indicated a strong Nenshi lead throughout the campaign that had shrunk by election day.

This conclusion seems counterintuitive. Bill Smith had numerous negative stories released about him after
the field dates of the Asking Canadians poll and during the fielding of the CMES poll. On October 4th,
Smith was criticized for his statement about City Council needing to rethink the building of the Green Line5. On October 10th, news broke that a civil enforcement company was prepared to seize more than $24,000 in property from Smith's business6. On October 11th, it was revealed that a company that retained Smith as a lawyer sued him in 2010 over what it believed to be a failure to exercise the care and skill to be expected of a reasonably competent solicitor7. All of these news stories would have had a negative impact on Smith's campaign and would have caused his support to drop or, at best, had no impact on his support. They should not have caused Smith's support to increase. And yet Smith garnered 43.73% of the vote. A week of bad headlines does not usually make a candidate's support go up. It is more believable that Smith fell to 43.7% than that he increased to that number as the Asking Canadians/LRT and CMES polls indicated. Mainstreet's polling showed a downward trend for Smith, as do the vote totals in the advance polls and on election day. If Smith dropped to 43.73%, the truth of the matter is that the mayoral election was quite competitive - even though no poll properly caught it - and Smith was likely winning at some point earlier in the election.
4 My guess is that the fielding period was so long due to their methodology. CMES had hired Forum Research to do an IVR call to recruit respondents to their online panel, and then asked the panelists their voting intentions. My guess is that the recruitment started on September 28th and the voter intention survey ended on October 12th. The dates when the panelists were actually asked who they would vote for in the mayoral election are unknown.

Figure 12: Timeline of Mainstreet polling and Calgary election day events
September 30 poll (Poll 1) Smith +10.3%
October 4 - advance polls open
October 7 poll (Poll 2 - fielded on October 3-4) Smith +19%
October 11 - advance polls close
October 13 poll (Poll 3 - fielded on October 10-11) Smith +13%
October 14 poll (Poll 4 - unreleased) Smith +8.1%

Unfortunately, any public discussion of Smith's downward trend was completely lost. Justin Ling's report on Mainstreet's communications strategy will discuss this further, and I am not saying that it was Mainstreet's fault for not including a discussion of trend lines in its reports or on social media. But this shows the utility of publishing several polls during a writ period, because it is useful to establish trend lines. While the overall numbers might end up being wrong, showing trend lines is useful for public discourse. This justifies the release of multiple polls during an election, as it will help neutral observers figure out which polls are outliers. I recognize that building a reliable sampling frame in a municipal election is tough for pollsters, and that might be the biggest reason why many pollsters did not release any polling, but I think pollsters might consider trusting the intelligence of the public, releasing their numbers, and explaining the polls' potential shortcomings.
5 Helen Pike, Bill Smith wants to axe Calgary Green Line tunnel in favour of longer line, Metro News, October 4, 2017,
http://www.metronews.ca/news/calgary/2017/10/05/bill-smith-wants-to-axe-calgary-green-line-tunnel-in-favour-of-longer-line.html
(accessed on November 22, 2017).
6 Drew Anderson, Mayoral candidate Bill Smith embarrassed by unexecuted warrant to seize property, CBC News Calgary,
October 10, 2017, http://www.cbc.ca/news/canada/calgary/bill-smith-mayoral-candidate-embarrassed-1.4329008 (accessed on
November 22, 2017).
7 Drew Anderson, Settled $2.2M lawsuit from 2010 alleged Bill Smith failed in duty as reasonably competent solicitor, CBC
News Calgary, October 11, 2017 http://www.cbc.ca/news/canada/calgary/calgary-bill-smith-lawsuit-real-estate-deal-1.4349741
(accessed on November 22, 2017).
In fact, this is exactly why polls should be released during elections: without them, voters
who want to know how the candidates are doing are at the mercy of talking heads basing their guesses on
personal hunches or the campaigns' spin. This unfortunately did not happen in Calgary. Mainstreet
deserves a share of credit here because it released multiple polls, as do Asking Canadians and CMES for at
least releasing a single poll and contributing to the discourse. Sadly, the conversation degenerated from
there, but that topic is beyond the scope of this report.

The task of this report, rather, is to identify and explain what happened with Mainstreet's polling in this
election, not to give an account of why other polls were off the mark. This section points to some of
the reasons why Mainstreet's polls did not capture enough of Nenshi's support. Nenshi voters were not
answering their phones, which is confirmed by matching respondents in the post-election poll to polling
during the writ. Also, a good chunk of Nenshi voters were seldom contacted (not by Mainstreet or anyone
else, for that matter), which we can infer from the fact that Nenshi led Smith by more than 20%
among respondents who were contacted fewer than five times.
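As an illustration of the matching exercise described above, here is a minimal sketch - with entirely made-up phone numbers, call counts, and votes, not Mainstreet's actual data - of how post-election respondents can be matched back to writ-period call records to reveal differential nonresponse:

```python
# Sketch: estimating differential nonresponse by matching post-election
# respondents back to writ-period call records. All data below is illustrative.

def response_rate_by_vote(call_records, post_election):
    """call_records: {phone: number of writ-period contact attempts that connected}
    post_election: {phone: reported vote}
    Returns the share of each candidate's voters who were ever reached."""
    rates = {}
    for phone, vote in post_election.items():
        attempts = call_records.get(phone, 0)
        stats = rates.setdefault(vote, {"contacted": 0, "total": 0})
        stats["total"] += 1
        if attempts > 0:
            stats["contacted"] += 1
    return {v: s["contacted"] / s["total"] for v, s in rates.items()}

calls = {"403-555-0001": 3, "403-555-0002": 0, "403-555-0003": 2, "403-555-0004": 0}
votes = {"403-555-0001": "Smith", "403-555-0002": "Nenshi",
         "403-555-0003": "Smith", "403-555-0004": "Nenshi"}
print(response_rate_by_vote(calls, votes))  # Nenshi voters reached far less often
```

A large gap between the two rates is the signature of the problem the report describes: one candidate's supporters simply were not picking up.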

THE CALGARY EFFECT: WHAT MAKES CALGARY SO SPECIAL?


The surprising thing is that this is not the first time a Calgary election has seen massive polling error.
Below is a summary of the polls from the 2010 mayoral election. Turnout was 53% - an increase from the
previous election.

Figure 13: Polls released before the 2010 Calgary Mayoral Election and their total deviation

As we can see, the variation from these polls was far greater - and thus represents a larger polling failure -
than what had occurred in 2017. Below is the same table for the 2017 election.

Figure 14: Polls released before the 2017 Calgary Mayoral Election and their total deviation

Only two polls, from Leger Marketing and Insights West, were released in the 2013 election, and the
average deviation of these two polls was significantly lower than that of the 2010 and 2017 polls.

Figure 15: Polls released before the 2013 Calgary Mayoral Election and their total deviation

It is worth noting that both the 2010 and 2017 elections had over 50% turnout. It is not typical for voter
turnout to surpass 50% in a municipal election in any Canadian city, and voter turnout in Calgary for this
election was the second highest in a Canadian city over the last two election cycles.

Calgary mayoral polls have generated significant polling failures before, and all of the polls released in this
cycle deviated significantly more from the actual results than the most accurate poll in 2013 (released
by Insights West on October 18, 2013).
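One plausible reading of the "total deviation" measure used in the figures above is the sum of absolute differences between a poll's shares and the final result across candidates. A minimal sketch, with hypothetical poll numbers rather than the actual figures:

```python
# Sketch: "total deviation" read as the sum, across candidates, of the absolute
# differences between a poll's shares and the final result. Numbers illustrative.

def total_deviation(poll, result):
    """poll, result: {candidate: vote share in percent}."""
    return sum(abs(poll[c] - result[c]) for c in result)

final = {"Nenshi": 51.4, "Smith": 43.7}          # illustrative final shares
poll = {"Nenshi": 38.0, "Smith": 51.0}           # a hypothetical pre-election poll
print(total_deviation(poll, final))              # 20.7 points of total error
```

Under this reading, comparing deviations across 2010, 2013, and 2017 is just a matter of applying the same function to each poll and ranking the results.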

Calgary presents some unique polling challenges. In fact, polling in smaller jurisdictions is difficult in
general - at least in terms of getting a reliable sample of the population as a whole, or of voters specifically. Mainstreet
uses IVR to dial its directory, and while cell phones make up a percentage of that directory, it likely did
not contain enough cell phones to build a reliable frame. Although there is no consensus regarding the
proportion of cell phones that should be included in a frame8, it is fair to say that reducing the number of
cell phones in a frame can create problems. Calgary is one of the youngest cities in Canada: voters
aged 18 to 35 make up 32.6% of the voting population, and the ratio of voters aged 18 to 34 to
voters over the age of 65 is 2.284, the second highest among all major Canadian cities.

When a population skews that young, it stands to reason that a methodology that relies heavily
on landlines might have trouble mirroring the actual election results if a solid share of that younger
demographic decides to come out and vote. There will be more on younger voters below,
but a key reason why Mainstreet's polling was so far off the mark, along with the clear
evidence that Nenshi voters were responding to our surveys in lower numbers, is that a significant number
of voters aged 18 to 34 who usually do not vote came to the polls this time, and
they were primarily responsible for the spike in overall turnout. On top of this, I suspect that this
cohort voted differently than those who usually vote. The fact that there is far more support for Nenshi in
8 American Association for Public Opinion Research Ad Hoc Committee on 2016 Election Polling, An Evaluation of 2016
Election Polls in the U.S., American Association for Public Opinion Research http://www.aapor.org/Education-Resources/Reports/
An-Evaluation-of-2016-Election-Polls-in-the-U-S.aspx (accessed on November 20, 2017).
the RDD dial, which likely had more cellular phones than the directory dial and had more respondents
aged 18 to 34, is evidence of this.

Figure 16: Municipal Elections in major Canadian cities ranked by turnout


(turnout over 50% in bold)

The strange thing is that Mainstreet had a warning about this phenomenon in its
previous polling. It got the Saskatoon mayoral election wrong as well, although the error was
not as great as in Calgary. Mainstreet had forecast that incumbent Don Atchison would defeat
challenger Charlie Clark by 5%9, but Clark ended up defeating Atchison by 3.81%. Like Calgary,
Saskatoon's ratio of voters aged 18 to 35 to voters over the age of 65 is over 2, and like Calgary, the
winner of the election was a progressive candidate who fought a hotly contested election in which voter
turnout increased. Once again, hindsight is 20-20, and it is easy to stand at this vantage point and say
that Mainstreet should have noticed the similarity between the demographics of Calgary and Saskatoon
and proceeded with more caution. The Mainstreet team did attempt to verify its original poll and tried to
find ways to improve its methodology in reaction to the criticism it had received. What is important
to note is that unlike Calgary, the Saskatoon mayoral election online panel polls also showed a lead for
incumbent Don Atchison and in fact had Charlie Clark in third place in their final published poll. Upon
immediate reflection, the conclusion in Saskatoon a year earlier was that the vote had shifted rapidly in
the closing days, causing Clark to surge past Atchison. In hindsight, factors similar to those in Calgary
9 Mainstreet Research, Atchison Leads as Campaign Heads to Finish Line, October 24, 2016, https://www.mainstreetresearch.ca/atchison-leads-campaign-heads-finish-line/ (accessed on November 23, 2017).
may have been at play in the Saskatoon election last year. This is important because the demographics of
more urban centres and the population as a whole may resemble Calgary and Saskatoon in the years ahead.

Figure 17: Ratio of 18 to 34 age group to 65+ age group of twelve major Canadian cities

Adding an RDD component would have helped contact younger voters who generally do not vote, which
is especially critical in a city like Calgary where the ratio between the youngest group of voters and the
oldest cohort is so great. To sum up, Nenshi supporters were less likely than Smith supporters to answer
calls. We infer this from matching those contacted in the four pre-election polls against the post-election poll,
and from the fact that Nenshi had a significant lead among respondents who said they were contacted fewer
than five times, either by a campaign or a polling company. Also, RDD found much more Nenshi support
than the directory dial. The RDD sample likely contained more cell-phone-only respondents, which suggests that
non-traditional voters came out and voted for Nenshi, while traditional voters voted more for Smith. All
of these points, together with the fact that Calgary is one of the youngest cities in Canada, created
a perfect storm that left Mainstreet's polling nowhere near the actual election results in
Calgary. Call it the Calgary Effect because this perfect storm happened in Calgary first, but Mainstreet
and other pollsters need to be cognizant of situations like these, proceed with caution, and include all
necessary caveats in their reports.

MODE EFFECTS
In the discussion of our polling in the lead-up to the Calgary election, much was made of the fact that
Mainstreet uses IVR, with critics claiming that IVR was structurally unable to generate valid measures of public opinion or
to draw a representative sample of the voting population. Critics pointed to other survey modes,
such as online panels and live agent calling, and claimed that these were superior to IVR.

Each mode has its strengths and weaknesses. While Mainstreet has used live calls for some
clients at their request, we prefer to conduct our polling with IVR for a few reasons. The cost of conducting
an IVR poll is far less than that of live calls, and respondents can complete surveys in a far shorter period
of time. IVR respondents answer questions asked by a pre-recorded message, which ensures a level of
consistency in how the questions are asked.

Also, there is evidence suggesting that IVR is less susceptible to social desirability bias than live agent
polling10. That is to say, respondents might be more willing to reveal socially undesirable opinions on an
IVR poll than on live calls. Social desirability bias is something pollsters are looking at more closely given
President Donald Trump's win in the 2016 U.S. Presidential Election. Many commentators, including
Mr. Maggi himself, speculated that there was a hidden Trump vote that polls were not finding. This was
likely because respondents might have been embarrassed to admit that they were voting for President
Trump, who has admitted to socially undesirable (to put it mildly) behaviour in the past11.

Perhaps the reason why IVR polls show less social desirability bias is that respondents do not have to tell a
stranger, for example, that they lie on their taxes, are in a lower income bracket, or are voting for an
admitted sex offender. They can register their opinions and facts about themselves simply by pushing a
button on their telephone pad. That IVR has less social desirability bias than live calls is a reason
why it is Mainstreet's preferred methodology. Voting preference may be less socially
desirable to reveal to others in this age of hyper-partisan politics and frequent shaming on social
media, so a methodology in which respondents are comfortable registering their political opinions, whatever
they might be, is important.

These reasons, along with IVR's good performance in predicting the 2016 elections in the United States12
and Mainstreet's good record (at least before the Calgary election), are why IVR is Mainstreet's preferred
methodology. IVR is notorious for its low response rate13, and many studies rightly point out that low
response rates can indicate nonresponse bias, where the population of non-respondents is
significantly different from those who do respond14. However, Nate Silver suggests that the reason why
10 Frauke Kreuter, Stanley Presser, and Roger Tourangeau, Social Desirability Bias in CATI, IVR, and Web Surveys: The
Effects of Mode and Question Sensitivity, Public Opinion Quarterly 72, Issue 5 (2008), 847865.
11 Quito Maggi, Are shy Trump voters for real? iPolitics November 3, 2016 https://ipolitics.ca/2016/11/03/are-shy-trump-
voters-for-real/ (accessed November 15, 2016), also see Peter K, Enns, Julius Lagodny & Jonathon Schuldt, Understanding the
2016 US Presidential Polls: The Importance of Hidden Trump Supporters. Statistics, Politics and Policy 8, Issue 1 (2017), 41-63.
12 American Association for Public Opinion Research Ad Hoc Committee on 2016 Election Polling, An Evaluation of 2016
Election Polls in the U.S., American Association for Public Opinion Research http://www.aapor.org/Education-Resources/Reports/
An-Evaluation-of-2016-Election-Polls-in-the-U-S.aspx (accessed on November 20, 2017).
13 A key reason why IVR polling has low response rates is that IVR can dial tens of thousands of phone numbers in
the span of a few hours - much faster than a call centre with 20 or so agents. IVR pollsters can often set and hit their completion
quotas in one night of dialling, sometimes in a few hours.
14 Ronald R. Rindfuss, Minja K. Choe, Noriko O. Tsuya, Larry L. Bumpass, & Emi Tamaki, Do low survey response rates
bias results? Evidence from Japan, Demographic Research 32 (2013), 797-828.
IVR polls might have an easier time predicting elections is precisely because of this nonresponse
bias. As he explains, "the response bias that may be present for pollsters with low response rates may act as
a de facto likely voter model, which means that automated firms may have less work to do in pruning out
unlikely voters later on."15

Other studies show that online panels have less social desirability bias than both IVR and live calls16.
Holbrook and Krosnick argue that this is because online surveys are self-administered, but that this might
only apply to the overreporting of socially desirable traits. That is to say, online panels may not have the
problem of too many people claiming that they voted (a socially desirable thing to say), but they may suffer
from underreporting of socially undesirable traits17.

Online panels are very much works in progress, and the fact that they rely on non-probability sampling
is a prime concern18. As the AAPOR report on online panels states, "without a universal frame of email
addresses with known links to individual population elements, some panel practices will ignore the frame
development step. Without a well-defined sampling frame, the coverage error of resulting estimates is
unknowable."19 On this point, both AAPOR and MRIA recommend that polling companies not release
a margin of error with reports on online panels. The Fields Institute20 study commissioned by Mainstreet
Research showed that blending responses from online polling with IVR responses would introduce more
error. This is probably because online panels attract respondents who are politically engaged,
interested in the subject matter, and likely high-information voters21. As we know, low-information
15 Silver 2010.
16 See Roger Tourangeau and Yan Ting. Sensitive Questions in Surveys, Psychological Bulletin 133 (2007), 859-83 and Allyson
L. Holbrook and Jon A. Krosnick, Social Desirability Bias in Voter Turnout Reports: Tests Using the Item Count Technique, Public
Opinion Quarterly 74, Issue 1 (2010), 3767.
17 Takahiro Tsuchiya, Yoko Hirai, and Shigeru Ono, Study of the Properties of the Item Count Technique, Public Opinion
Quarterly 71 (2007), 25372. Holbrook and Krosnick also suggest that the sensitivity of the question may also create distortion
based on social desirability bias.
18 (T)he problems with the opt-in panel is that it clearly overestimates the proportion of those who are politically engaged.
This is likely to be the result of nonobservation error, which has the potential to be more severe with opt-in online panels using
nonprobability sampling Jeffrey Karp and Maarja Luhiste, Explaining Political Engagement with Online Panels: Comparing the
British and American Election Studies, Public Opinion Quarterly 80, Issue 3 (2016), p. 686
19 American Association for Public Opinion Resarch Executive Council by a Task Force, Report on Online Panels, June 2010
American Association for Public Opinion Research http://www.aapor.org/Education-Resources/Reports/Report-on-Online-Panels.
aspx (accessed on November 20, 2017).
20 The Fields Institute is a centre for mathematical research activity housed at the University of Toronto - a place where
mathematicians from Canada and abroad, from academia, business, industry and financial institutions, can come together to carry
out research and formulate problems of mutual interest, taken from the Fields Institute website http://www.fields.utoronto.ca/about
(accessed on December 8, 2017).
21 Helen Cheyne, Andrew Day, Nathan Gold, Neal Madras, Tom Salisbury, Tyler Wilson, Mainstreet Research: Improving
Polling Results 2016, Industrial Problem Solving Workshop, Fields Institute, 2016 http://www.fields.utoronto.ca/sites/default/
files/IPSW%20Mainstreet%20Report.pdf (accessed on November 1, 2016). Also De Leeuw weighs in on this topic; Hardly any
theoretical or empirical knowledge is available on how to design optimal questionnaires for mixed-mode data collection (e.g.,
unimode and generalized mode design). Empirical research is needed to estimate what constitutes the same stimulus across different
citizens vote too (likely due to campaign mobilization), and thus online panels will have a degree of
selection bias when surveying for voting behaviour if they do not have a good amount of low-information
voters in their sample22.

The preceding section was not meant as an editorial for the superiority of IVR. There are bad IVR polls
as well as good online panel polls, and it is possible to minimize any perceived mode effect. With work,
any mode can closely mirror election results, although IVR has a base advantage in its built-in
voter model of sorts, whereas companies that use other modes must build likely voter models to
improve accuracy. Nonetheless, the perceived advantages of IVR are very much short-term ones. IVR is still
part of the present of polling. But the Internet is the future of polling, and that future is likely
coming sooner rather than later.

IVR and voter turnout


IVR works when it is polling an election where the voter population fits the traditional voter model.
That is to say, we know that younger people in Canada are less likely to vote than older voters, and the
generational divide between voters and non-voters is increasing as time goes on23. Also, younger voters
generally live in cellphone-only or cellphone-mostly households. As Silver explains, "It is now very difficult,
for instance, to get young people on the phone when using a landline-only sample. About half of all adults
from age 25-29, for instance, are cellphone-only, and two-thirds are either cellphone-only or cellphone-
mostly. (The numbers are actually slightly better for adults aged 18-24, who are more likely to be living in a
college dormitory, or still to be living at home, where a landline will usually be available.) Couple this with
the fact that young people have grown up in a call-screening culture, and their response rates are often
completely inadequate."24 Mainstreet uses a mostly landline sample, and while it has used quota dialling to
get a sufficient number of respondents aged 18-34, most of these respondents were reached via landline.
Silver goes on to note that there are attitudinal differences between cellphone-only households and
landline-using households. As he explains, "They tend to be younger, poorer, more urban, less white,
and more Internet-savvy. All of these characteristics are correlated with political viewpoints and voting
behaviour."25 To this point, evidence in the United States suggests that those living in cellphone-only
households hold more progressive views and tend to vote Democrat if they come out to vote26. The
modes, and especially how new media and new graphical tools will influence this, Edith de Leeuw, To Mix or Not to Mix Data
Collection Mode in Surveys, Journal of Official Statistics 21, Issue 5 (2005), p. 249-250.
22 Karp and Luhiste (2016), p. 666.
23 See Elisabeth Gidengil, Neil Nevitte, Andre Blais, and Richard Nadeau, Turned off or tuned out? Youth participation
in politics Electoral Insight 5, Issue 2 (2003), 914; Richard Johnston, J. Scott Matthews, Amanda Bittner, Turnout and the
party system in Canada, 19882004, Electoral Studies 26, Issue 4 (2007), p. 735-745; and Jon H. Pammett and Lawrence Leduc,
Confronting the problem of declining voter turnout among youth, Electoral Insight 5, Issue 2, (2003), 38.
24 Nate Silver, Study: Excluding Cellphones Introduces Statistically Significant Bias in Polls, FiveThirtyEight, May 24th 2010,
https://fivethirtyeight.com/features/study-excluding-cellphones-introduces/ (accessed on November 22, 2017).
25 Silver 2010.
26 Leah Christian, Scott Keeter, Kristen Purcell and Aaron Smith, Assessing the Cell Phone Challenge, Pew Research Centre,
situation is likely similar in Canada, especially in larger urban centres.

So to sum up, there may be substantial differences between landline and cellphone households in
political attitudes and age. However, what we do know is that younger voters, who are more likely to live
in cellphone-only households, are less likely to vote. Moreover, their difference in political values from older
voters accounts for their abstention from voting, specifically in whether they view voting as a civic duty or
not27.

In situations like these, which we see in just about every election, IVR polling will usually be effective. More
to the point, even when the younger cell phone population does come out and vote but votes along similar
lines as more habitual voters, IVR polling will still generally be correct, although it might somewhat
underestimate the winner's total vote share.

However, when this cohort of younger voters turns out and votes differently from more habitual voters,
IVR will run into trouble, especially in areas where residents aged 18 to 34 outnumber those over the
age of 65 by a factor of two. Both of these criteria were met in the Calgary mayoral election. We know that
younger voters came out and voted for Nenshi because the RDD sample had both a higher ratio of
respondents aged 18 to 34 and a much larger Nenshi lead over Smith. We also know that Nenshi
supporters were not contacted as much during the writ period as Smith supporters, and thus many Nenshi
supporters were hidden from Mainstreet's polling and turned out to vote.

Thus the reason why Mainstreet's polling has by and large mirrored final election results is also the reason
why it failed in Calgary. IVR respondents are also traditional voters. However, the Calgary electorate did
not follow the traditional voter model: younger people voted, and voted differently from traditional
voters to boot. To exacerbate the problem, Calgary is one of the youngest cities in Canada, with
residents aged 18 to 34 making up nearly a third of the total population, so this younger cohort's voting
impact is especially amplified in Calgary as opposed to older cities such as Toronto or Montreal.
Including an RDD sample would likely have captured enough Nenshi support to correct
most of the polling error. The question that remains is whether the voting behaviour
shown in Calgary is an anomaly or the prototype for future voting behaviour.

Nonresponse bias in IVR is real. I accept that the population of nonrespondents to an IVR survey is
potentially significantly different from IVR respondents. I would speculate that IVR non-respondents are
likely younger, more liberal or progressive in their views, and less likely to vote. Because they were less
likely to vote, Mainstreet Research surveys tended to be externally valid: they were representative of
the population that actually votes, though not of the general voting-age population.
May 20, 2010, http://www.pewresearch.org/2010/05/20/assessing-the-cell-phone-challenge/ (accessed on November 22, 2017).
27 Andre Blais and Daniel Rubenson, The Source of Turnout Decline: New Values or New Contexts?, Comparative Political
Studies 46, Issue 1 (2013), 95-117.
IVR works well when the traditional nonrespondents do not vote. But this was not the case in Calgary.
The impact of the youth vote is compounded when residents aged 18 to 34 far outnumber those
over the age of 65.

RECOMMENDATIONS
The Calgary effect was likely an anomaly and that the factors described above contributed to this anomaly
occurring. However, part of the polling companys job is to forecast when these anomalies occur. While
Mainstreet has made election forecasts that went against conventional wisdom in the past, it was likely due
to habitual voters making choices that ran against conventional wisdom. In the time that Mainstreet has
been polling elections, the Calgary election was likely at most the second time when non-habitual voters
came out and voted differently than traditional voters, while the Saskatoon mayoral election last year was
likely the first.

It is not an act of false humility to admit that Mainstreet will have polling failures again. No polling company
gets it right all the time. However, there are steps that Mainstreet can and should take to reduce the
likelihood of a failure like Calgary ever happening again.

In this section, I make recommendations that will not only help prevent polling error in the
future but also make Mainstreet's polling more transparent and accessible to the average Canadian.

Recommendation 1: Implement Random Digit Dialing for all Mainstreet polling


The post-election RDD dial captured a significant number of respondents under the age of 35 and was
able to reach phone numbers - and thus unique respondents - that were not available through directory
dialling. I therefore recommend that every Mainstreet poll incorporate an RDD component to accompany its
directory dialling. I leave it to the Mainstreet team to determine the appropriate ratio of RDD completes
to directory completes. On that note, I would also recommend a careful comparison between
RDD and directory samples to identify any notable differences between them, as such differences
might indicate whether habitual non-voters will vote on election day. Including RDD should
reduce nonresponse bias without sacrificing accuracy. It should also make
it easier for Mainstreet to reach its targets for respondents aged 18 to 35 without the need for
quota sampling.
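The mechanics of RDD are straightforward to sketch: seed the dialler with area-code/exchange (NPA-NXX) prefixes known to be in use, then randomize the remaining four digits, so unlisted and cell numbers have a chance of being reached. The exchanges below are placeholders, not real Calgary exchanges:

```python
import random

# Sketch of random digit dialling (RDD): combine known 'NPA-NXX' prefixes
# with random four-digit suffixes. Exchange prefixes here are made up.

def rdd_sample(exchanges, n, rng=None):
    """Generate n candidate phone numbers from a list of 'NPA-NXX' prefixes."""
    rng = rng or random.Random()
    return [f"{rng.choice(exchanges)}-{rng.randrange(10000):04d}" for _ in range(n)]

calgary_exchanges = ["403-555", "587-555", "825-555"]   # placeholder prefixes
numbers = rdd_sample(calgary_exchanges, 5, random.Random(42))
print(numbers)
```

In practice the generated numbers are screened against do-not-call lists and invalid blocks before dialling; the point is simply that the frame is no longer limited to what appears in a directory.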

Recommendation 2: Include Longer Questionnaires When Polling Horserace Numbers


Mr. Maggi has discussed the issue of shy Trump voters not truthfully revealing their voting intention to pollsters.
Enns, Lagodny & Schuldt confirm that shy Trump voters were real: these were voters who were embarrassed
to admit that they were voting for a candidate who has admitted to sexual assaults and has a long history
of crass behaviour. Mainstreet employees know that respondents at times will, for whatever reason, not
be truthful about their voting intention, and that asking other questions - about economic optimism, or
about specific policy planks of a candidate or party - might reveal their true preference.
This technique is known to Mainstreet, but for some reason it was not used in the Calgary polling. Some
additional questions were asked, but they were not intended to verify voting intentions.

I strongly recommend that these types of questions return to all Mainstreet polls released during a writ
period. Respondents may be shy to reveal their voting preference, or they may simply decide not to tell the
truth. Further questions that indirectly ask respondents what they think about the candidates will help
reveal shy voters. Enns, Lagodny & Schuldt give a good example that they used in the 2016 U.S. election:
"If you had to choose, how truthful do you find each candidate?", asked without giving respondents the option
to choose "Not Sure." After all, voters cannot vote for "Not Sure" in the voting booth and must choose a
candidate (very few Canadian voters cast blank votes). Mainstreet has used questions about economic
pessimism to reveal shy voters in the past. I would like to see these questions return for polls conducted
during a writ period.
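To make the idea concrete, here is a minimal sketch of using an indirect question to infer the leaning of respondents who refuse to state a vote intention. The allocation rule (assign the candidate the respondent rated more truthful) and all data are illustrative, not Mainstreet's actual method:

```python
# Sketch: inferring shy voters' leanings from an indirect question.
# The rule and the sample data below are purely illustrative.

def impute_leaning(respondents):
    """respondents: list of dicts with 'vote' (a name or None) and 'truthful',
    a per-candidate rating. Returns vote counts after imputing the shy voters."""
    counts = {}
    for r in respondents:
        vote = r["vote"]
        if vote is None:  # shy or undecided: fall back to the indirect question
            ratings = r["truthful"]
            vote = max(ratings, key=ratings.get)
        counts[vote] = counts.get(vote, 0) + 1
    return counts

sample = [
    {"vote": "Smith", "truthful": {"Smith": 8, "Nenshi": 4}},
    {"vote": None, "truthful": {"Smith": 3, "Nenshi": 9}},
    {"vote": None, "truthful": {"Smith": 2, "Nenshi": 7}},
]
print(impute_leaning(sample))  # {'Smith': 1, 'Nenshi': 2}
```

A real implementation would validate the indirect question against stated intentions among the non-shy respondents before trusting it for imputation.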

Recommendation 3: Polls are to be conducted over multiple days and times


Often Mainstreet will collect samples in only one night, and more often than not, will collect a very large
sample in that one night. While this has not affected Mainstreet's accuracy in the past, it does increase
nonresponse bias. In an age when fewer people work traditional nine-to-five jobs, it is
becoming less likely that a pollster can truly capture a representative sample from respondents
who are either home or able to answer their cell phone in a three-hour window. I therefore recommend
that Mainstreet stay in the field over multiple days (a minimum of two) and at different times. Usually,
Mainstreet polls between 6 and 9 p.m., but adding a dial between noon and 4 p.m. the
next day might capture a more representative sample.

Recommendation 4: Use AAPOR Reporting Standards


The American Association for Public Opinion Research (AAPOR) is the oldest professional public opinion
research association in the world and has developed some of the most rigorous reporting standards
currently available. I strongly urge Mainstreet to implement them immediately. Doing so would increase
transparency and remove any doubt about how Mainstreet conducts its polling. I suspect Mr. Justin Ling
will address how Mainstreet deals with the media and comment on its communications practices,
and I hope that his recommendations will be compatible with AAPOR reporting standards.

I submit that Mainstreet has a duty to Canadians to fully explain its polling, irrespective of how any third
party might choose to report or interpret its findings. Specifically, I ask that all employees who work on
Mainstreet polls review AAPOR's Best Practices for Survey Research (although Mainstreet already meets
nearly all of these suggestions). I also recommend that a more detailed methodology statement
be included in every report, one that meets all of the requirements set out by AAPOR's Survey Disclosure
Checklist. Finally, I ask that any Mainstreet employee who either works on a poll or discusses our findings

in the media read, understand, and adhere to the AAPOR Code of Professional Ethics and Standards.
All of these standards should be met insofar as they do not contradict Canadian federal or provincial
laws and regulations.

In addition to these standards, I ask that all polls that ask undecided respondents which way they might be
leaning include a breakout table with the true undecided number. The way Mainstreet reports decided
and leaning voters is valid, but can be confusing to some. Perhaps including a brief primer, perhaps even
a small video, about how Mainstreet calculates its decided and leaning voters numbers would be helpful.
Secondly, I recommend that Mainstreet develop a probabilistic model that can be used to calculate the
probability of each candidate or party winning the election based on our polling. Probabilities are easier
to explain to a layperson than margins of error or what constitutes a significant difference between polls. A good
example came when Mainstreet was polling the Conservative Party of Canada leadership race. Looking
only at the horserace numbers, one would have thought that our polling gave Andrew Scheer no chance
of winning, but our simulations showed that Scheer had a 15% chance of winning. According to
our data, Scheer's odds of winning were close to those of rolling a six with a die. One might bet against rolling a
six, but rolling a six can't be considered that remote a possibility. Framing results this way helps laypeople
understand that our polling does not state with absolute certainty that an election result will happen, despite
our good track record in the past.
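One way such a model might work, as a minimal sketch rather than a production system, is a Monte Carlo simulation that draws plausible "true" support levels around the polled shares and counts how often each candidate finishes first. The candidate names, shares, and margin of error below are hypothetical:

```python
import random

def win_probabilities(shares, moe, n_sims=10_000, seed=0):
    """Estimate each candidate's chance of winning by simulating many
    elections in which true support is drawn from a normal distribution
    centred on the polled share, with the spread derived from the MOE."""
    rng = random.Random(seed)
    sd = moe / 1.96  # convert a 95% margin of error to a standard deviation
    wins = {name: 0 for name in shares}
    for _ in range(n_sims):
        draw = {name: rng.gauss(share, sd) for name, share in shares.items()}
        wins[max(draw, key=draw.get)] += 1
    return {name: wins[name] / n_sims for name in shares}

# Hypothetical two-candidate race: 52% vs 48% with a 3-point margin of error.
probs = win_probabilities({"A": 52.0, "B": 48.0}, moe=3.0)
```

A race inside the margin of error, which a layperson might read as a "statistical tie," still translates into a concrete win probability for each candidate, which is the communication advantage being argued for here.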

Finally, I recommend that Mainstreet include both weighted and unweighted frequencies in its breakout
tables going forward, rather than only the unweighted frequencies, as has been done since Mainstreet
began.
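To illustrate what reporting both would look like, here is a minimal sketch of producing the two frequency types from respondent-level data; the respondents and demographic weights are invented:

```python
from collections import defaultdict

# Hypothetical respondent-level data: (vote choice, demographic weight).
respondents = [
    ("Nenshi", 1.4), ("Smith", 0.8), ("Nenshi", 1.1),
    ("Smith", 0.9), ("Nenshi", 1.3), ("Smith", 0.7),
]

unweighted = defaultdict(int)
weighted = defaultdict(float)
for choice, weight in respondents:
    unweighted[choice] += 1     # raw respondent counts
    weighted[choice] += weight  # counts after demographic weighting

n = len(respondents)
w_total = sum(w for _, w in respondents)
report = {
    choice: {
        "unweighted %": round(100 * unweighted[choice] / n, 1),
        "weighted %": round(100 * weighted[choice] / w_total, 1),
    }
    for choice in unweighted
}
```

Publishing both columns lets readers see how much of a reported result comes from the raw sample and how much from the weighting scheme.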

Recommendation 6: Release raw data


I recommend that from this point forward Mainstreet release the raw data of its publicly
released polls every three months, preferably housing it with a department of statistics, sociology, political science,
economics, or a similar discipline at a Canadian university. The only exception should be polls released
during an election writ period, which I advise should be released on a yearly basis. This recommendation
would add an unprecedented layer of transparency to our polling, as it would allow independent
researchers, academics, and students to try to replicate our results and learn from our raw data.

Recommendation 7: Investigate online polling methodologies


I have made much of Mainstreet's past track record, and what happened in Calgary is the exception and
not the norm. However, landlines are in decline, and while RDD will help Mainstreet capture a more
reliable sample, completion rates will still be low, and worries about nonresponse bias will remain and
grow stronger as the years go on. I have concerns about the lack of probability sampling and the opt-in
bias of online panels, concerns shared by many on the Mainstreet team. Because of
these issues with online responses, I also agree that blending online and IVR responses may not be a good
option yet.
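For reference, the basic mechanics of the RDD approach mentioned above can be sketched as follows. Real RDD requires a curated list of valid area codes and working exchanges; the Calgary-style area code and exchanges below are invented for illustration:

```python
import random

def rdd_sample(area_code, exchanges, n, seed=0):
    """Generate n distinct phone numbers by appending random four-digit
    suffixes to known-valid exchanges, so unlisted and cellular numbers
    have the same chance of selection as directory-listed ones."""
    rng = random.Random(seed)
    numbers = set()
    while len(numbers) < n:
        exchange = rng.choice(exchanges)
        suffix = rng.randrange(10_000)
        numbers.add(f"{area_code}-{exchange}-{suffix:04d}")
    return sorted(numbers)

# Hypothetical frame: area code 403 with made-up exchanges.
sample = rdd_sample("403", ["555", "556", "557"], n=5)
```

This is why RDD reaches respondents a directory dial cannot, and also why its completion rates are lower: many generated numbers are not in service or go unanswered.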

The Internet is the future of polling, and we may be starting to witness the beginning of the end for
telephone polling. The end might come in three years or five; we don't know. However, it is
important that Mainstreet not be left behind, and that it get ahead of the curve. I therefore recommend
that Mainstreet investigate other polling methods that use the Internet and social media as a sampling frame.
Hundreds of thousands of Canadians go on social media every day and voluntarily register their
political opinions, while polling companies either spend a great deal of money trying to contact Canadians by
phone or curate respondents via an online panel and offer them compensation to complete a poll. Both
methods have their problems. Mainstreet should investigate and develop a viable polling methodology that
can harness social media while still involving probability sampling.

CONCLUSION
This report finds that the polling failure in Calgary can be attributed to several factors. First, Calgary has a
very young population, which makes it difficult to poll. Second, evidence from the post-election poll
shows that significantly more Nenshi supporters than Smith supporters did not answer previous calls. Also, those
who voted for Nenshi say they were contacted by telephone far less often than Smith supporters,
and those who were contacted via RDD supported Nenshi far more than respondents in the directory dial.
The RDD dial likely included more cellular phone respondents than the directory dial, as well as more
respondents under the age of 35. Unlike in other elections, the younger voters who would usually abstain
came out and voted in the Calgary election in significant numbers, and along different lines
from traditional voters. Because the traditional voter model did not hold in Calgary, Mainstreet's polling
turned out to be incorrect. By contrast, Mainstreet has successfully predicted elections in the past because
enough of these non-habitual voters stayed home and did not have an impact on the result. The only
externally valid element of Mainstreet's polling in the Calgary election was that it correctly showed a
downward trend line for Smith that mirrored his decline from the advance polls to election day, which leads
to the conclusion that the mayoral race was competitive, contrary to the conventional wisdom that
Nenshi was leading substantially throughout.

I find no evidence that Mainstreet made up any numbers or doctored the data to generate any given result.
Mainstreet staff did make tweaks to the methodology to try to verify or falsify its original poll; however,
these tweaks did not succeed in finding what turned out to be hidden Nenshi support. I have recommended
strategies that should improve Mainstreet polling in the future, and some of them have already been implemented
in the post-election dial. These changes should reduce nonresponse bias and improve Mainstreet's
accuracy in elections. Moreover, I recommend that Mainstreet's reports meet AAPOR
reporting standards and that any Mainstreet employee who works on any future polling review and adhere
to AAPOR's code of ethics. I also recommend that all reports include weighted and unweighted
tables and a breakout table with true undecideds when leaning questions are asked, and that Mainstreet
explore building probabilistic models that can generate the likelihood of each candidate winning an election. Finally, I
recommend that Mainstreet investigate online polling methodologies that can address some of the
concerns that have been raised in the past.

APPENDIX: POST-ELECTION DIAL QUESTIONNAIRE

Q1: Did you vote in the recent municipal election in Calgary?
Press 1 for Yes [REDIRECT TO Q2]
Press 2 for No [REDIRECT TO Q4]

Q2: How long did you wait in line to vote?
Press 1 for under 30 minutes
Press 2 for between 30 minutes and an hour
Press 3 for between one hour and two hours
Press 4 for over two hours

Q3: Did your polling station run out of ballots?
Press 1 for Yes
Press 2 for No
Press 3 for Not Sure [REDIRECT TO Q5]

Q4: Why didn't you vote?
Press 1 if you were waiting too long in line at the polling station
Press 2 if you felt that none of the candidates deserved your vote
Press 3 if you are not interested in municipal politics
Press 4 if you felt the voting process was too difficult
Press 5 if you didn't have the time to vote
Press 6 if you thought the polling station was too difficult to get to
Press 7 if you are not sure

Q5: All things considered, did you think it was easier to vote compared to the last municipal election in 2013?
Press 1 for Yes
Press 2 for No
Press 3 for Not Sure

Q6: How often did you receive a robocall during the election campaign, either from a campaign or a polling company?
Press 1 for under 5 times
Press 2 for between 5 and 10 times
Press 3 for between 10 and 20 times
Press 4 for over 20 times
Press 5 for Not Sure

Q7: Who did you vote for in the mayoral election?
Press 1 for Naheed Nenshi
Press 2 for Bill Smith
Press 3 for Someone Else
Press 4 for Not Sure

Q8: What is your gender?
Press 1 for Male
Press 2 for Female

Q9: What age group do you belong to?
Press 1 for 18 to 34 years of age
Press 2 for 35 to 49 years of age
Press 3 for 50 to 64 years of age
Press 4 for 65 years or older
