Quantitative Research – National Electors Study on the 43rd Canadian Federal General Election – Methodological Report

The survey component of the NES was conducted by telephone and online with eligible electors (i.e. Canadian citizens at least 18 years of age on election day) and involved three waves of surveys conducted before, during, and after the election period. The table below presents technical information about each wave of surveying:

Wave | Sample | Mode of Data Collection | Field Period | Sample Size
W1 | Longitudinal | Online, telephone | Pre-election: June 12 to July 14, 2019 | 49,993
W2 | Longitudinal | Online | Election period: September 3 to October 20, 2019 | 23,880
W3a | Longitudinal | Online, telephone | Post-election: October 23 to December 9, 2019 | 19,435
W3b | Discrete | Telephone | Post-election: October 22 to November 12, 2019 | 2,000

Detailed information is provided below.

Sampling

The NES survey component included both longitudinal and discrete samples. The longitudinal sample was recruited for the pre-election survey (W1) in June 2019 using both probability sampling (random-digit dial telephone recruitment) and non-probability sampling (Web panel). The discrete sample was recruited by random-digit dial probability sampling for the post-election survey wave, in order to offset attrition in the longitudinal sample.

Longitudinal Sample

Electors were recruited in proportion to the population by province, age, and gender. To ensure sufficient final sample sizes, the recruitment targets took into consideration expected attrition across each sample source. The table below presents the target and actual number of completes per wave by sample type. Two-thirds of initial respondents (W1) across all modes were obtained via probability sampling; the remainder were sourced from an online panel of volunteer participants. Respondents to the subsequent W2 election period survey and W3a post-election survey were drawn solely from the initial sample of W1 respondents. Respondents did not need to answer W2 to be invited to respond to W3a.

Type of Sample | W1 Targeted | W1 Completed | W2 [1] Targeted | W2 [1] Completed | W3a Targeted | W3a Completed
Probability Web | 18,000 | 29,462 | 10,000 | 14,266 | 5,000 | 8,521
Probability telephone | 3,000 | 3,063 | 0 [2] | 0 | 2,000 | 1,744
Panel | 17,200 | 17,468 | 12,040 | 9,614 | 9,030 | 9,170
Total | 38,200 | 49,993 | 22,040 | 23,880 | 16,030 | 19,435

Probability sample

A dual sample frame of both landline and wireless phone numbers was used to maximize coverage and ensure a representative sampling of electors. The landline sample was supplied by ASDE and the cellphone sample was supplied by Advanis.

The same random-digit selection process was used to generate both the landline and cellphone samples. For the landline sample, interviewers asked to speak to the person in the household with the most recent birthday who would be at least 18 years of age and a Canadian citizen by the time of the October election; if the person who answered the telephone was not that individual, interviewers asked to speak to the eligible respondent. No within-household selection procedures were used for the cellphone sample. Once an appropriate adult was reached, the interviewer verified voter eligibility.

Phone numbers selected for the longitudinal sample were assigned exclusively to either the telephone survey or Web survey for the duration of the study. Those in the telephone sample were contacted by live interviewers who proceeded to administer the survey through a Computer Assisted Telephone Interviewing (CATI) system. Those in the probability Web sample were initially contacted by live interviewers via telephone for recruitment. Some members of the Web sample were screened for study eligibility during the recruitment call and others were screened via an online questionnaire distributed via SMS message. Consenting participants who met the eligibility criteria then had the choice of receiving an SMS or an email invitation to complete the surveys via a Computer Assisted Web Interviewing (CAWI) system. The fieldwork protocols are outlined later in this report.

Non-probability sample

A key objective of the sampling design was to provide sufficient cases from subpopulations that have a historically lower propensity to vote in Canadian federal elections and that, in past surveys of electors, have proven harder to reach using purely random sampling approaches. Previous surveys of electors relied on oversampling targets for groups such as youth and Indigenous electors to obtain sufficient case numbers for analysis. The number of calls and length of field time required by such an approach were not feasible for the 2019 NES, given the larger sample sizes that needed to be recruited in a short time for the W1 survey. Instead, the sampling design incorporated a non-probability panel, with the rationale that these additional respondents could ensure sufficient representation of subpopulations throughout the entire study.

Dynata's Web panel was used for the non-probability sampling. Dynata recruits panelists through several sources: partnerships with major brands' loyalty programs, open recruitment via messaging on websites, mobile app panels, and targeted online communities. All Dynata panelists are required to double opt-in, and survey participation is limited to avoid “professional” panelists. Panelists have unique ID numbers used to track and store their activity, including past survey participation, and to verify their identity. Members of the longitudinal sample recruited through Dynata's Web panel received email invitations to complete the surveys via a CAWI system. The fieldwork protocols are outlined below.

Discrete Sample

As with the longitudinal probability sample, random-digit dialing with a dual wireless and landline overlapping frame was used for the representative sample of 2,000 electors. Please refer to the description above (Longitudinal Sample: Probability Sample) for information about the sample frame construction and respondent selection.

Subpopulations

The sampling strategy took into consideration the need to obtain sufficient final sample sizes for selected subpopulations while remaining broadly proportional to the Canadian elector population. Demographic subpopulations were selected because they are historically more difficult to reach using surveys and have lower voter turnout rates than the population at large, or because they represent new electors eligible to vote for the first time in a federal election.

With the exception of non-voters, members of these subpopulations were identified through a screening process that started in advance of W1 and continued through the W1 pre-election survey.

Particular subpopulations were identified based on the following definitions:

The table below presents the target and final number of completes (W3a, W3b) by sample type and subpopulation.

Subpopulation | Longitudinal Probability Targeted | Longitudinal Probability Completed | Non-probability Targeted | Non-probability Completed | Discrete Probability Targeted | Discrete Probability Completed | W3 Total Targeted | W3 Total Completed
Electors with a disability | 1,850 | 3,057 | 1,760 | 3,216 | 440 | 421 | 4,050 | 6,694
Indigenous electors | 596 | 887 | 272 | 300 | 68 | 90 | 936 | 1,277
New Canadians | 154 | 206 | 176 | 139 | 30 | 56 | 360 | 401
Youth ages 18–24 | 1,020 | 953 | 696 | 264 | 174 | 144 | 1,890 | 1,361
PSE students | 770 | 1,190 | 464 | 635 | 116 | 75 | 1,350 | 1,900
NEET youth | 144 | 230 | 144 | 157 | 36 | 32 | 324 | 419
Non-voters | -- | 648 | -- | 1,150 | -- | 211 | 4,070 [3] | 2,009

Two groups in particular proved more challenging to both reach and retain over the duration of the study: the final target for youth ages 18 to 24 and a specific target for First Nations electors who live on a reserve were not met (although the target for Indigenous electors as a whole was exceeded). [4]

For both groups, missed targets originated in W1, where initial targets for the probability Web sample specifically were not met, despite the respective targets being met for both the non-probability sample and the probability phone sample. Due to the larger scale of the probability Web sample, this resulted in an overall shortfall for both groups at W1. This issue was then exacerbated, as both groups were observed to have higher attrition rates between survey waves than other respondents within the same sample, although in this case the non-probability sample experienced higher attrition than the probability samples.

Since youth and First Nations electors tend to have a lower propensity to vote, the missed targets for these two groups likely help explain at least some of the shortfall in the final number of non-voters obtained.

Incentives

An incentive structure was put in place for the longitudinal sample; no incentives were offered to the discrete random sample recruited for the post-election survey wave.

By necessity, separate incentive strategies were used for the two sample sources: the probability sample and the non-probability sample. For the probability sample, the proposed incentive strategy was twofold:

  1. a prize draw for everyone who agreed to participate in the study (five prizes of $200), and
  2. a guaranteed post-paid incentive for harder-to-reach respondents.

In practice, overall recruitment and attrition rates were healthy and did not warrant a prize draw; therefore, all incentive resources were directed toward increasing retention rates among specific subpopulations with noticeably higher attrition rates: NEET youth and First Nations electors who live on a reserve. Both were offered an incentive of $20. Dynata's panelists were rewarded for taking part in the surveys per the panel's incentive program, which is structured to reflect the length of survey and the nature of the sample.

Questionnaires

Elections Canada provided questionnaires based on its previous post-election surveys in order to facilitate the tracking of the agency's core measures over time. These included the Survey of Electors Following the 42nd General Election, the Evaluation of the Electoral Reminder Program for the 42nd Canadian Federal Election, the National Youth Survey, and EC questions that in previous years had been placed in the Canadian Election Study. Split samples were employed on some core measures to test the comparability of different question scales, offering the possibility of moving to a new scale in future iterations while preserving the ability to track against previous ones.

Four questionnaire instruments were developed in total: one questionnaire for each wave of the longitudinal sample (W1, W2, and W3a) and one questionnaire for the post-election survey of the discrete sample (W3b). The W1 and W3a questionnaires were designed to be administered in both telephone and online modes, with survey questions modified as needed for each mode of administration. Efforts were taken to ensure comparability of results across modes. For example, telephone survey questions with a “do not read list” of response options were treated as open-ended questions in the online questionnaires. In some cases, questions were assigned to only one mode where the other mode was not suitable. For example, a question that asked respondents to review or rank a long list of items could be unwieldy to administer over the phone and would, therefore, be included only in the Web survey.

The questionnaires administered to the longitudinal sample were designed to: minimize respondent burden (demographic questions, for example, were only asked during recruitment); allow for tracking of electors' activities, such as registering to vote, as well as knowledge and attitudes towards voting; and enable comparisons of electors' expectations of voting versus voters' actual experience. In order to achieve this, skip logic in the W2 and W3a surveys depended on certain responses being imported from previous surveys. For example, only respondents who identified as having a disability in the W1 survey were asked if they found voting to be accessible in the W3a survey.
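To picture how this worked, the sketch below shows cross-wave skip logic as a function of the merged W1 record. This is a minimal illustration only: the question and field names (e.g. "w1_has_disability") are hypothetical, not the actual NES variable names.

```python
# Hypothetical sketch of cross-wave skip logic driven by imported W1 answers.

def w3a_questions_for(respondent):
    """Build the W3a question list for one respondent, using answers
    imported from their W1 record to drive the skip logic."""
    questions = ["q_voted", "q_voting_method"]  # asked of everyone

    # Only respondents who identified as having a disability at W1
    # are asked whether they found voting accessible at W3a.
    if respondent.get("w1_has_disability") == "yes":
        questions.append("q_voting_accessible")

    return questions

# Example: one W1 record merged into the W3a sample file
print(w3a_questions_for({"id": 101, "w1_has_disability": "yes"}))
# ['q_voted', 'q_voting_method', 'q_voting_accessible']
```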

The W2 and W3a surveys were, in part, used to measure recall of Elections Canada's voter information campaign through the inclusion of questions from the Government of Canada Advertising Campaign Evaluation Tool (ACET). Aided measures of recall were limited to the W2 survey, where Web respondents were presented a selection of image, audio, and video ads that varied based on when the survey was taken. See Appendix 3 for an overview of the advertising materials tested in each phase of the W2 survey.

Overall, there were five phases of the W2 questionnaire timed to coincide with election period milestones as well as the five phases of the voter information campaign, as follows:

Questions specific to a particular phase were programmed to appear based on the date that respondents accessed the survey invitation.

The W3b questionnaire included the set of questions considered core to the W3a questionnaire; others were excluded to allow space to collect from the discrete sample the socio-demographic information that had already been collected from the longitudinal sample at W1.

Questionnaires varied in length from 15 to 20 minutes, and the online questionnaires were mobile-friendly.

Pretest

Following survey best practices, the questionnaires were pretested in advance of the fieldwork. Overall, the questionnaires worked well, although the W1 and W3a questionnaires were too long and required edits to reduce their length. The solutions were split samples, edits to the syntax and/or wording of specific questions, and the removal of questions, either altogether or from the phone survey only, since the telephone mode tended to take longer to administer than a Web survey with the same number of questions. Beyond questionnaire length, there were no significant problems in terms of design or respondents' comprehension of the questions. As a result, only minor changes to the questionnaires and programming instructions were made.

Separate testing procedures were used for the interviewer-assisted telephone surveys and for the self-administered online surveys. To pretest the questionnaires administered by interviewers over the telephone (W1, W3a, W3b), respondents were first administered the survey in the official language of their choice and then asked a series of short follow-up questions. [5] The debriefing following the survey provided an opportunity for respondents to offer feedback on the questionnaire. The pretest interviews conducted by telephone were digitally recorded and the anonymized recordings were reviewed by team members and Elections Canada officials.

The online questionnaires (W1, W2, and W3a) were thoroughly tested by team members and election officers in advance of the fieldwork. Following this internal testing, the surveys were deployed in the form of a soft launch. Invitations to complete the surveys were sent to a small number of respondents in the longitudinal sample. After at least 20 surveys were completed, the results were reviewed to assess data quality and general functioning of the questionnaire. Once the reviews were completed for each wave, the online questionnaires were launched in full.

Fieldwork

Fielding Procedures

The fieldwork was conducted by Advanis. All respondents were informed that their participation was voluntary and that the information collected is protected under the authority of the Privacy Act. The following specifications applied to the CATI surveys (W1, W3a, and W3b):

The following specifications applied to the CAWI surveys (W1, W2, and W3a):

The fieldwork was conducted in accordance with the Government of Canada's Standards for the Conduct of Government of Canada Public Opinion Research for telephone surveys and online surveys, the standards set out by the Canadian Research Insights Council (CRIC), and applicable federal legislation, including the Personal Information Protection and Electronic Documents Act (PIPEDA), Canada's private sector privacy law.

Election Period Rolling Cross-Section

The W2 election period survey was unique in being fielded as a rolling cross-section, designed to collect a steady, continuous stream of responses each day for the duration of the election period. Although the survey questions evolved across five broad phases depending on the survey date, W1 respondents were invited to participate in the online questionnaire only once. A controlled number of invitations to the W2 survey was sent each day to a random selection of W1 respondents; thus, each day of data collection could be analyzed independently as a representative sample of views on that day, or combined in a time series to measure trends over the course of the election period. Given the time-sensitive nature of the survey, the questions and choices presented to respondents were determined by the date they accessed the survey rather than the invitation date.

The rate of daily invitations was designed to escalate over the course of the election period to ensure that the later phases would obtain sufficient sample sizes despite their shorter duration. Invitations were rationed so that all possible invitations were exhausted by October 17. Only reminders were used to generate responses from October 18 to 20. This included a “last chance” reminder that was sent to all non-respondents to date in addition to the standard reminders.
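As an illustration of this design, the sketch below randomly assigns W1 respondents to escalating daily invitation batches. The daily quotas and function names are assumptions for illustration, not the actual NES schedule.

```python
import random

# Illustrative sketch of a rolling cross-section invitation schedule.
# The actual NES quotas escalated so that all invitations were
# exhausted by October 17; the quotas below are made up.

def schedule_invitations(w1_ids, daily_quotas):
    """Randomly assign each W1 respondent to at most one invitation day."""
    pool = list(w1_ids)
    random.shuffle(pool)
    schedule = {}
    for day, quota in enumerate(daily_quotas, start=1):
        schedule[day], pool = pool[:quota], pool[quota:]
    return schedule

w1_ids = range(1, 29341)                        # 29,340 invitations in total
quotas = [400] * 15 + [700] * 20 + [900] * 10   # escalating, illustrative
schedule = schedule_invitations(w1_ids, quotas)
print(len(schedule[1]), len(schedule[45]))      # 400 900
```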

The first workday following September 1 was chosen as the start of fielding. By law, that was the earliest possible start date for an election period based on an October 21, 2019, election day. In actuality, the election period began on September 11, such that the early election phase includes one week of pre-election surveys.

The table below shows the number of completes per phase by sample source as well as the average number of completes per day per phase.

Phase | Target Completes | Probability Web Completes | Panel Completes | Total Completes | Field Days | Daily Average
W2a | 3,071 | 1,584 | 1,528 | 3,112 | 15 | 207
W2b | 6,888 | 3,933 | 3,306 | 7,239 | 14 | 517
W2c | 4,638 | 2,305 | 2,269 | 4,574 | 7 | 653
W2d | 4,900 | 2,882 | 1,925 | 4,807 | 7 | 689
W2e | 4,684 | 3,562 | 586 | 4,148 | 5 | 830

Outcome Rates

Probability Samples

Longitudinal sample

The following tables provide the initial response rate for the longitudinal probability sample (Web and phone) at W1, followed by the rates at which longitudinal respondents were retained at each subsequent survey wave, relative to W1 completes.

Wave 1 Phone Response Rate | Longitudinal Phone
Total phone numbers attempted = I + U + IS + R | 29,992
Out-of-scope – invalid (I) | 4,075
Unresolved (U) | 11,183
  No answer/answering machine/busy | 11,183
In-scope – non-responding (IS) | 10,173
  Language problem, illness, incapable | 70
  Selected respondent not available | 20
  Household refusal | 6,469
  Respondent refusal | 3,369
  Qualified respondent break-off/partial complete | 245
In-scope – responding units (R) | 4,561
  Language disqualification | 302
  Terminate, does not qualify (determined at introduction) | 52
  Terminate, under 18 years old by election day | 1,109
  Terminate, not a Canadian citizen | 33
  Terminate, lives outside of Canada | 2
  Completed the W1 survey | 3,063
Wave 1 phone response rate = R / (U + IS + R) | 17.6%
Wave 1 Web Response Rate | Longitudinal Web
Total survey invitations sent = U + IS + R | 55,439
  Email invites sent | 2,111
  SMS invites sent | 53,328
Unresolved (U) | 104
  Undeliverable email/SMS invites | 104
In-scope – non-responding (IS) | 25,873
  Non-response from email/SMS invites | 25,873
In-scope – responding units (R) | 29,462
  Completed the W1 survey | 29,462
Wave 1 Web response rate = R / (U + IS + R) | 53.1%
Wave 2 Retention Rate | Longitudinal Web
Total survey invitations sent | 29,340
  Email invites sent | 1,275
  SMS invites sent | 28,065
Invite attrition (A) | 122
  Invalid invites (undeliverable, unsubscribes, failed quality checks) | 122
Non-response from email/SMS invites (IS) | 15,074
Completed the W2 survey (C) | 14,266
Wave 2 retention rate = C / (C + IS + A) | 48.4%
Wave 3a Retention Rate | Longitudinal Web | Longitudinal Phone
Total numbers attempted | - | 3,063
Total survey invitations sent | 29,244 | -
  Email invites sent | 1,275 | -
  SMS invites sent | 27,969 | -
Invite attrition (A) [6] | 218 | 520
  Invalid numbers/invites | 218 | 185
  No answer/answering machine/busy | - | 303
  Language disqualification | - | 8
  Terminate, does not qualify (determined at introduction) | - | 24
In-scope – non-responding (IS) | 20,723 | 799
  Language problem, illness, incapable | - | 12
  Selected respondent not available | - | 14
  Household refusal | - | 199
  Respondent refusal | - | 574
  Non-response from email/SMS invites | 20,723 | -
Completed the W3a survey (C) | 8,521 | 1,744
Wave 3a retention rate = C / (C + IS + A) | 28.9% | 56.9%
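For reference, the outcome rates in the tables above follow directly from the disposition counts. A short sketch, using the W1 phone and W3a Web figures:

```python
# Outcome-rate formulas from the tables above, applied to the
# W1 phone and W3a Web disposition counts.

def response_rate(R, IS, U):
    """Response rate = R / (U + IS + R)."""
    return R / (U + IS + R)

def retention_rate(C, IS, A):
    """Retention rate relative to W1 completes = C / (C + IS + A)."""
    return C / (C + IS + A)

# Wave 1 phone: R = 4,561; IS = 10,173; U = 11,183
print(f"{response_rate(4561, 10173, 11183):.1%}")  # 17.6%

# Wave 3a Web: C = 8,521; IS = 20,723; A = 218
print(f"{retention_rate(8521, 20723, 218):.1%}")   # 28.9%
```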

Discrete sample

The response rate for the W3b discrete probability phone sample was 15.4%.

Wave 3b Response Rate | Discrete Phone
Total phone numbers attempted = I + U + IS + R | 23,447
Out-of-scope – invalid (I) | 3,513
Unresolved (U) | 9,007
  No answer/answering machine/busy | 9,007
In-scope – non-responding (IS) | 7,857
  Language problem, illness, incapable | 37
  Selected respondent not available | 17
  Household refusal | 4,118
  Respondent refusal | 3,019
  Qualified respondent break-off/partial complete | 666
In-scope – responding units (R) | 3,070
  Language disqualification | 139
  Terminate, does not qualify (determined at introduction) | 183
  Terminate, under 18 years old | 710
  Terminate, not a Canadian citizen | 28
  Terminate, not 18 years old by election day | 7
  Terminate, lives outside of Canada | 3
  Completed the W3b survey | 2,000
Wave 3b response rate = R / (U + IS + R) | 15.4%

Non-probability Sample

The following table provides the initial W1 participation rate for the non-probability Web panel, along with the retention rates for the W2 and W3a surveys relative to W1.

Participation and Retention Rates (Non-probability Sample) | Wave 1 | Wave 2 | Wave 3a
Email invites sent (S) | 29,000 | 17,468 | 17,468
Completed the survey (C) | 17,468 | 9,614 | 9,170
Participation/retention rate = C / S | 60.2% | 55.0% | 52.5%

The participation rate at W1 was 60.2%, and the subsequent retention rates were 55.0% (W2) and 52.5% (W3a).

Potential for Non-Response Bias

The final survey sample over-represented voters in the 43rd GE. Among survey respondents, self-reported turnout was 90%, while the actual turnout rate among registered voters was 67%. Two factors may be responsible for the over-representation of voters: 1) people who vote may be more likely than non-voters to participate in a study about voting, particularly across multiple survey waves (response bias); and 2) people who did not vote may report having voted in order to present themselves in a more positive light (social desirability bias). Readers and researchers should be aware of this potential for bias resulting from non-response (including from attrition) when interpreting the results.

Margin of Error

Since the NES survey sample included samples generated through both probability and non-probability sampling techniques, no estimate of sampling error can be calculated for the entire survey sample, and the overall survey results are not statistically projectable to the entire population of eligible electors. A margin of sampling error and statistical estimations can, however, be obtained if the panel is excluded and only the random samples are considered. In that case, all samples are of a size such that overall results across all waves would have a margin of sampling error of less than ±1%, 19 times out of 20, as detailed in the table below.

Wave | Total Respondents | Longitudinal Non-Probability Panel | Longitudinal Probability Web | Longitudinal Probability Telephone | Discrete Probability Telephone | Total Probability Sample | Overall Margin of Error at 50%
W1 | 49,993 | 17,468 | 29,462 | 3,063 | - | 32,525 | ±0.543%
W2 | 23,880 | 9,614 | 14,266 | - | - | 14,266 | ±0.82%
W3 | 21,435 | 9,170 | 8,521 | 1,744 | 2,000 | 12,265 | ±0.885%
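These margins correspond to the standard formula z·sqrt(p(1−p)/n) with p = 0.5 and z = 1.96 (95% confidence), under a simple random sampling assumption; design effects from weighting are not reflected. A worked sketch:

```python
from math import sqrt

# Margin of error at p = 0.50, 95% confidence (19 times out of 20),
# for the total probability-sample sizes in the table above.

def moe_at_50(n, z=1.96):
    return z * sqrt(0.25 / n)

for wave, n in [("W1", 32525), ("W2", 14266), ("W3", 12265)]:
    print(wave, f"±{moe_at_50(n):.3%}")
# W1 ±0.543%   W2 ±0.820%   W3 ±0.885%
```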

Data Production

Quality Control

Following the fieldwork, the data were cleaned using SPSS syntax. The review assessed response ranges to identify respondents who “straight-lined” responses (provided the same answer for every item of a tabular question) and examined the length of time taken to complete the surveys to flag “speeders” (respondents who took an unreasonably short time to answer the survey). Any cases flagged for data quality were replaced prior to the weighting and tabulation of the data.
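A minimal sketch of these two checks is shown below; the grid columns and the speeder threshold are assumptions for illustration, not the actual NES rules.

```python
import pandas as pd

GRID_COLS = ["q10_a", "q10_b", "q10_c", "q10_d"]  # one hypothetical grid question
MIN_MINUTES = 4                                    # hypothetical speeder cut-off

def flag_quality(df):
    # Straight-liners: identical answers across every item of the grid
    df["straight_liner"] = df[GRID_COLS].nunique(axis=1).eq(1)
    # Speeders: unreasonably short completion time
    df["speeder"] = df["duration_minutes"] < MIN_MINUTES
    return df

df = pd.DataFrame({
    "q10_a": [1, 3], "q10_b": [1, 4], "q10_c": [1, 2], "q10_d": [1, 5],
    "duration_minutes": [2.5, 16.0],
})
print(flag_quality(df)[["straight_liner", "speeder"]])
```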

Quality control measures were performed after each wave of data collection, including the production and review of interim datasets, banner tables, and topline reports to allow improvements to be made in the design and conduct of subsequent survey waves.

Coding

Verbatim responses provided in “other (specify)” categories were reviewed for possible coding where these represented more than 10% of responses to a question. Priority was given to minimizing the proportion of “other” responses, first by cleaning mis-specified responses and then by creating new categories where numbers warranted.

A selection of fully open-ended questions was coded through machine and/or manual human coding.

Text responses collected via the Web for W1 question 17 and W3a question 22 were coded into categories by an algorithm developed by Elections Canada, using a coding dictionary derived from manually coded text responses for the same question in the 2015 ERP Evaluation. The objective of the questions was to measure the top-of-mind organization that is a source of information on the voting process, so the algorithm only needed to code a single response for each unique string. Where strings contained multiple possible responses, the algorithm prioritized certain categories before others, with Elections Canada being the highest priority. Where no priority category was identified, the first mentioned category took precedence over categories identified later in the string. Inspection of the coded results indicated that the algorithm correctly coded unique strings representing over 95% of cases. Incorrectly coded strings were then recoded manually into the correct category. Altogether, over 64,000 text responses were coded using this approach.
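A simplified sketch of this priority-based dictionary coding follows. The dictionary entries are illustrative; only the Elections Canada priority rule and the first-mention tiebreak are taken from the description above.

```python
# Sketch of priority-based dictionary coding of open-ended responses.

CODING_DICT = {
    "elections canada": "Elections Canada",
    "news": "Media",
    "google": "Internet search",
}
PRIORITY = ["Elections Canada"]  # coded ahead of any other category

def code_response(text):
    text = text.lower()
    hits = [(text.find(k), cat) for k, cat in CODING_DICT.items() if k in text]
    if not hits:
        return None  # left for manual coding
    for cat in PRIORITY:                 # priority categories first
        if cat in {c for _, c in hits}:
            return cat
    return min(hits)[1]                  # else first mention wins

print(code_response("the news and elections canada"))  # Elections Canada
print(code_response("google then the news"))           # Internet search
```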

Other key open-ended questions were coded manually, as the responses were more complex in nature and a coding dictionary approach would not have been feasible. Given the large number of open-ended Web responses collected across multiple waves, for practical reasons, only random samples of text responses to key evaluation questions were coded for reporting purposes. In these cases, random samples of cases from each wave were selected for coding. The size of the random samples was determined based on the desired maximum margin of error for the sample versus the total number of responses. Prior to drawing these random samples, the open-ended responses were cleaned of “don't know,” missing, or invalid responses. This way, only valid cases remained for coding purposes.
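The sample-size calculation can be sketched as the infinite-population size for a desired margin of error at p = 0.5, corrected against the finite number of valid responses. The ±5% default below is an assumption for illustration; the report does not state the target used.

```python
from math import ceil

def coding_sample_size(N, e=0.05, z=1.96):
    """Coding-sample size for N valid responses, margin e, confidence z."""
    n0 = (z ** 2) * 0.25 / (e ** 2)        # infinite-population size
    return ceil(n0 / (1 + (n0 - 1) / N))   # finite population correction

print(coding_sample_size(10000))  # 370
print(coding_sample_size(1500))   # 306
```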

Derived variables were coded and used in lieu of raw question variables where required to produce the final survey results. For example, W2 respondents who indicated they had already voted early in the election period were asked a selection of questions about their voting experience at W2, rather than being asked the same questions at W3. Therefore, to produce a final measure covering all those who voted in the election, the responses of W3 respondents who had already answered these questions at W2 needed to be merged with the responses of those who answered them only at W3.
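A minimal sketch of this kind of merge, assuming hypothetical column names (early voters answered at W2; everyone else at W3):

```python
import pandas as pd

w2 = pd.DataFrame({"id": [1, 2], "voting_experience": ["easy", "difficult"]})
w3 = pd.DataFrame({"id": [1, 2, 3, 4],
                   "voting_experience": [None, None, "easy", "easy"]})

merged = w3.merge(w2, on="id", how="left", suffixes=("_w3", "_w2"))
# Prefer the W3 answer; fall back to the W2 answer for early voters
merged["voting_experience"] = (
    merged["voting_experience_w3"].fillna(merged["voting_experience_w2"])
)
print(merged[["id", "voting_experience"]])
```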

Weighting

The survey sample was weighted so that the results were representative and/or could be generalized to the population of electors. Separate weights were calculated so that data could be weighted on a per-wave, per-mode basis, with an option to use probability samples only or the entire sample. Weight values sum to the size of the total sample available for the respective wave and mode.

In the analysis, data were weighted according to the modes used on a per-question basis. For example, frequencies for a question asked at W3 on the Web only were weighted to sum to the total of all Web respondents at W3. A question asked using both modes would be weighted such that the frequencies sum to the total of all respondents across both phone and Web modes.

W1

Starting with W1 and the longitudinal sample, the weighting was done in two stages:

The following adjustments were calculated during the first stage of the weighting process:

These four adjustment factors were multiplied together for each W1 record and used to calculate the starting sample proportions for the second stage of weighting.

The second stage of the weighting process involved the following steps:

The following adjustments were considered but not incorporated into the W1 weighting structure:

W2

W2 weighting included an attrition adjustment factor to correct for uneven response rates to the W2 survey between different segments of W1 respondents. The adjustment started with the W1 weight factor, which was then multiplied by an attrition factor derived from ratios of W1 respondents to W2 respondents for each segment. Attrition adjustments were based on segments constructed by gender (male, female, non-binary/transgender), age (18–24, 25–34, 35–54, 55–64, 65+), and region (BC, AB, SK/MB/Territories, ON, QC, Atlantic).
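A sketch of this attrition adjustment follows, with illustrative column names. Note that the adjusted weights for W2 respondents sum back to the W1 weight total within each segment.

```python
import pandas as pd

def attrition_adjusted_weights(w1, w2_ids):
    """Multiply each W1 weight by the segment-level ratio of
    W1 respondents to W2 respondents."""
    seg = ["gender", "age_group", "region"]
    w1 = w1.copy()
    w1["in_w2"] = w1["id"].isin(w2_ids)
    counts = w1.groupby(seg)["in_w2"].agg(n_w1="size", n_w2="sum")
    factor = (counts["n_w1"] / counts["n_w2"]).rename("attrition_factor")
    w1 = w1.join(factor, on=seg)
    w1["w2_weight"] = w1["w1_weight"] * w1["attrition_factor"]
    return w1.loc[w1["in_w2"], ["id", "w2_weight"]]

w1 = pd.DataFrame({
    "id": [1, 2, 3, 4],
    "gender": ["f", "f", "m", "m"],
    "age_group": ["18-24"] * 4,
    "region": ["ON"] * 4,
    "w1_weight": [1.0, 1.0, 1.0, 1.0],
})
# Respondent 2 attrited: respondent 1's weight doubles to compensate
print(attrition_adjusted_weights(w1, w2_ids={1, 3, 4}))
```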

W3

The W3 longitudinal (W3a) and discrete (W3b) samples were weighted independently. For the W3a sample, the W1 weights were again used as the starting point and, like W2, an attrition factor was applied based on uneven attrition from W1 to W3a between segments. Weighting of W3b was done in three steps: 1) a household size adjustment for landline records was applied; 2) cases were weighted by age (18–24, 25–34, 35–54, 55–64, 65+), gender (male, female, non-binary/transgender), and region (BC, AB, SK/MB/Territories, ON, QC, Atlantic); and 3) weights were adjusted so that the weighted sample size equaled the unweighted sample size. The weighted samples for W3a and W3b were then combined to produce a final integrated W3 sample. The application of additional correction factors (e.g. benchmarking, post-stratification) to the combined weighted samples was explored but found unnecessary.
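The report does not detail the weighting calculations; the following sketch illustrates the three W3b steps using simple cell weighting, with placeholder population shares and column names.

```python
import pandas as pd

def weight_w3b(df, pop_shares):
    df = df.copy()
    # 1) Household-size adjustment for landline records (one adult is
    #    selected per household, so adults in larger households are
    #    under-covered); cellphone records keep a weight of 1.
    df["w"] = df["n_adults"].where(df["frame"] == "landline", 1.0)
    # 2) Adjust cells so weighted age x gender x region proportions
    #    match the population shares.
    cell_sums = df.groupby(["age_group", "gender", "region"])["w"].transform("sum")
    share = df[["age_group", "gender", "region"]].apply(tuple, axis=1).map(pop_shares)
    df["w"] = df["w"] * share * df["w"].sum() / cell_sums
    # 3) Rescale so the weighted sample size equals the unweighted size.
    df["w"] *= len(df) / df["w"].sum()
    return df
```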

Integration of Probability and Non-probability Samples

Consideration was given to whether probability and non-probability samples could be integrated into the final results and whether to integrate the non-probability sample at a lower weight determined from known benchmarks such as population demographic characteristics or official voter turnout figures.

A comparison of unweighted results collected at W1 from the probability and non-probability samples showed that including the non-probability sample at its full weight produced a socio-demographic profile that, before weighting, was overall more in proportion with known population figures. The effect was most pronounced for the proportions of gender and voter turnout in the 42nd GE in 2015: Including the non-probability sample improved representation of women to 48% of W1 respondents, up from 45% if using only the probability sample; the proportion of non-voters in 2015 improved to 17% when using all available sample, up from 10% in the probability samples. This trend was borne out through later survey waves, where the non-probability sample obtained the highest proportion of non-voters in the 2019 GE at W3 compared to any other sample (accounting for over half of all non-voter respondents despite representing one-third of all W3 respondents).

The decision to integrate all samples was further supported by other results comparisons, which found either that there was no large impact on the results from using the entire sample versus only the probability sample, or that differences between the entire sample and the probability sample tended to move in expected directions given the higher proportion of non-voters in the entire sample (for example, the entire sample had lower intention to vote in the 2019 GE than the probability sample did).

Based on these findings, the full weight of the non-probability sample was integrated into the results with all other samples as a means of mitigating the over-representation of voters in the probability samples. Similar considerations with the W3b survey resulted in its integration with the W3a survey.



Footnote 1 For more detailed information on W2 targets and completes, please see the section titled “Election Period Rolling Cross-Section”.

Footnote 2 No telephone fieldwork was permitted during the election campaign.

Footnote 3 The target for non-voters was set retroactively. Given historical over-reporting of voter turnout in surveys of electors, the goal across all samples was to keep the proportion of non-voters within 15 percentage points of the official turnout rate. The official turnout rate was 67%; therefore, the minimum target for non-voters equated to 19% of W3 respondents across all sample sources.

Footnote 4 The targets for First Nations and First Nations living on a reserve were variable targets that aimed to have 50% to 70% of Indigenous electors be First Nations and then to have 50% of First Nations respondents be electors who live on a reserve. Based on the overall W3 target of 936 Indigenous electors, the absolute minimum target for First Nations electors was 468 cases (versus an actual 464 cases collected at W3) and 234 for First Nations who live on a reserve (versus an actual 95 cases collected at W3).

Footnote 5 The follow-up questions were:

Footnote 6 For the purposes of calculating phone respondent retention at W3a, units that would normally be considered out-of-scope (e.g. invalid) or responding (e.g. disqualified respondents) are instead considered eligible responding units that were lost to attrition at the invite stage rather than from non-response, since the intention of calls made at W3a was to re-contact the specific qualified individual who responded to W1 at that phone number.