
October - 2003

Technical Evaluation Report

23. Best Practices in Online Polling

Jim Klaas
Masters of Distance Education Program
Athabasca University - Canada's Open University

Abstract

This report summarizes major polling design principles and practices, with particular emphasis on those affecting the integrity of online polls in distance education (DE). Specific consideration is given to the statement of polling objectives, the design of good questions and response options, online poll format, motivation of the respondents, and poll pre-testing.

Adopting Best Practices

The previous report in this series (Report XXII) recommended the use of the term “online polling” to refer generally to “questionnaires, quizzing, survey and assessment products,” and further defined online polling as an asynchronous or real-time process of information gathering, obtained via responses to question(s) mediated by Web-based formats. Until recently, the major users of polling methods have been the advertising and political research industries; currently, online polling methods are becoming recognised as useful in the development of interactive group learning approaches in distance education (DE). Report XXII outlined the advantages and problems of using online polling as a collaborative tool in DE, discussed the careful selection of appropriate polling software, and stressed the need to develop appropriate user skills. The current report discusses online polling “best practices.”

Witmer, Colman, and Katzman (1999) recommend that researchers explore the online medium’s potential before blindly applying paper-and-pencil approaches to their online polling methodologies. The current literature contains numerous recommendations for online polling design, covering the statement of a study’s objectives, the design of its questions and response options, the poll format, incentives for participation, and adequate pre-testing.

  1. Statement of Objectives: To ensure that the information gathered will be usable, clear articulation of the poll’s topic and purpose is of fundamental importance (Dillon, 2001; McNamara, 2003). The poll’s objectives should be specific, clear-cut, and unambiguous in order for the study to yield valid and reliable statistical information, as opposed to serving as a mere ruse in, for example, marketing, fund-raising, or vote-influencing activities (Best, 2002).

  2. Posing Good Questions: The formulation of appropriate questions is crucial, and the need for every question should be justified. The poll designer should avoid posing “every conceivable question that might be asked with respect to the general topic of concern…resulting in annoyance and frustration” (Frary, 2003). Questions that prompt recall of details that may never have been committed to memory, or that demand more than a fifth-grade reading level, should be avoided (Stinson, 1999), as should slang, culture-specific and technical words, and pejorative and emotionally laden words. The conjunction “and” and the potentially double-negative “not” may be indicators of a poorly formed question (Dillman and Christian, 2002; McNamara, 2003). The initial questions in the poll should be comfortable and generic, in order to suggest to respondents that the survey will be easy to complete, and should avoid advanced features such as drop-down lists and long scrolling demands (Dillon, 2001).

  3. Wording the Response Options: Frary (2003) cautions against excessive detail in the design of polling items. Instructions such as “check all that apply” should be used sparingly to avoid “category proliferation,” and a five-point scale is sufficient for most polling needs, avoiding “scale-point proliferation.” These precautions help to avoid the pitfall of “satisficing” – i.e., respondents considering a poll item only until they believe that a satisfactory answer has been given (Dillman and Christian, 2002). The poll’s designer should also be aware of the possibility of order bias (Rose and Gallup, 2002) – i.e., the effects of the order in which questions and response options are presented upon the responses themselves. Poll items commonly contain the response option “other”; if the range of response options is adequate for the purposes of the study, however, the “other” option can be a design flaw (Dillon, 2001), since it offers an easy escape to respondents who answer carelessly or lazily, or who find the item difficult to read and are reluctant to answer it. Frary (2003) recommends the alternative use of “no basis for judgment” or “prefer not to answer” options. Dillman and Christian (2002) recommend giving respondents the option to leave a question blank where viable, and McNamara (2003) advocates including item(s) evaluating the questionnaire itself. (A brief illustrative sketch, given after the final item in this list, shows one way several of these recommendations might be represented.)

  4. Designing the Poll Format: An attractive and easy-to-read format can improve response rates (Solomon, 2001). Dillman, Tortora, and Bowker (1998) believe that a good poll design will “reduce the occurrence of sample errors through improvement of the motivational aspects of responding as well as the technical interface between computer and respondent.” Conn (2003) recommends using the visual message design principles of contrast, alignment, repetition, proximity, and “sufficient open space,” so respondents can easily distinguish “between directions and actual questions, between individual questions, between sections of a questionnaire, or between responses for a question.” Dillman and Christian (2002) point out that the visual design of questions “has a significant impact on respondent behaviour,” and make the following format recommendations:

    • Poll design is aided by the judicious use of symbolic, numerical, and graphical conventions (e.g., bullets and arrows)

    • Providing a larger space for open-ended responses can elicit answers that are longer and contain more themes

    • Double- and triple-column formats should be avoided, since they may be read out of sequence (vertically or horizontally)

    • A space should be provided after each question, and response options should be separated by equal distances

    • A “progress bar” is useful to indicate how much of the survey remains to be completed

    • Common Web formatting errors (e.g., reduced spacing, centering, and omission of item numbering) should be avoided

  5. Motivating Respondents: Many of the above principles are aimed at encouraging respondents to complete the poll. The promise of feedback and summary statistics can also provide an incentive for participation and completion (Witmer, Colman, and Katzman, 1999; Yun and Trumbo, 2000; Dillon, 2001; Sax, Gilmartin, and Bryant, 2003). Moss and Hendry (2002) indicate that, in a course evaluation context, online polls should be infrequent, short, simply designed, and free from password access, and that results should be displayed to students on completion of each poll without revealing the respondents’ identities. Dillman and Christian (2002) indicate that the “welcome screen” should motivate participants by emphasizing the ease of responding, the time required, and the nature of the online response tasks, and by giving sufficient technical instruction without excessive detail. Further motivational tips include the use of “give-aways” such as movie tickets and gift certificates (Handwerk, Carson, and Blackwell, 2000). Rosenblatt (1999) believes that incentives do not greatly increase the number of respondents in a poll, but do increase the probability that individual respondents will complete it.

  6. Pre-testing the Online Poll: As far as possible, the poll items and response options should be pre-tested for accuracy (Stinson, 1999). The polling instrument should be reviewed and tested on a variety of computer browsers and platforms (Pitkow and Recker, 1995; Best, 2002; Conn, 2002), although Carbonaro, Bainbridge, and Wolodko (2002) suggest that pre-testing should be limited to the most viable combinations of software and hardware, since it is usually impracticable to test the complete range (the second sketch below illustrates this narrowing of the test matrix). Bowker and Dillman (2000) recommend that pre-tests of a poll’s HTML coding should apply the “least compliant browser” principle. Conn (2002) recommends that pre-tests should ensure that a minimum of computer skills is required to complete the poll, and that the instrument’s design should be sufficiently simple to allow for rapid downloading; simpler questionnaires also demand less of the computer’s random access memory (RAM) (Dillman and Christian, 2002). Carbonaro, Bainbridge, and Wolodko (2002) recommend that pilot respondents should use a “think aloud” procedure, allowing their verbal reactions to be audiotaped.
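
To make the preceding recommendations concrete, the following minimal sketch (written in Python purely for illustration; the class, field, and function names are assumptions of this report, not features of any polling product) shows one way to represent a poll item offering a five-point scale plus a “prefer not to answer” option, to permit an item to be left blank, and to compute the value displayed by a progress bar:

    # Illustrative sketch only: a five-point scale with a "Prefer not to
    # answer" option (Frary, 2003), blank answers permitted (Dillman and
    # Christian, 2002), and the calculation behind a progress bar.
    # All names here are hypothetical.
    from dataclasses import dataclass, field
    from typing import List, Optional

    # Five points suffice for most polling needs and avoid
    # "scale-point proliferation."
    FIVE_POINT_SCALE = ["Strongly disagree", "Disagree", "Neutral",
                        "Agree", "Strongly agree"]

    @dataclass
    class PollItem:
        text: str
        # "Prefer not to answer" replaces a catch-all "other" option.
        options: List[str] = field(
            default_factory=lambda: FIVE_POINT_SCALE + ["Prefer not to answer"])
        answer: Optional[str] = None  # None means the item was left blank

    def progress(items: List[PollItem]) -> float:
        """Fraction of items answered, for display in a progress bar."""
        answered = sum(1 for item in items if item.answer is not None)
        return answered / len(items) if items else 1.0

    if __name__ == "__main__":
        poll = [PollItem("The course materials were easy to follow."),
                PollItem("The online discussions supported my learning.")]
        poll[0].answer = "Agree"
        print(f"{progress(poll):.0%} complete")  # prints: 50% complete

Leaving an answer as None, rather than forcing a selection, reflects Dillman and Christian’s (2002) suggestion that respondents be permitted to leave a question blank where viable.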

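Similarly, the advice of Carbonaro, Bainbridge, and Wolodko (2002) to pre-test only the most viable combinations of software and hardware can be sketched in code. In this hypothetical example the browser names and usage shares are invented for illustration; the point is simply the narrowing of the full test matrix:

    # Illustrative sketch only: enumerate browser/platform combinations
    # and keep the most viable ones for pre-testing. The usage shares
    # below are invented for illustration.
    from itertools import product

    browsers = {"Internet Explorer 6": 0.55, "Netscape 4": 0.15,
                "Mozilla": 0.10, "Opera": 0.05}
    platforms = {"Windows": 0.80, "Mac OS": 0.15, "Linux": 0.05}

    # Estimated share of respondents per combination, assuming browser
    # and platform choices are independent (a simplification).
    matrix = {(b, p): b_share * p_share
              for (b, b_share), (p, p_share)
              in product(browsers.items(), platforms.items())}

    # Pre-test only combinations expected to cover at least 5% of
    # respondents, rather than the impracticable complete range.
    viable = [combo for combo, share in matrix.items() if share >= 0.05]
    for browser, platform in sorted(viable):
        print(f"pre-test on: {browser} / {platform}")
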
Conclusion

Online polling methods have not yet become a standard methodology in online education, and in many parts of the world their delivery is complicated by institutional security policies and network “firewall” technologies, which can interfere with both the transmission and collection of polling data. Close liaison between researchers and institutional network designers is needed to overcome these obstacles. Meanwhile, the standard textbook literature on the criteria for efficient polling design should be studied as background to the principles of online polling design covered in this report.

References

American Association for Public Opinion Research (2002). Best Practices for Survey and Public Opinion Research. Retrieved October 9, 2003 from: http://www.aapor.org/default.asp?page=survey_methods/standards_and_best_practices/best_practices_for_survey_and_public_opinion_research

Bowker, D., and Dillman, D. A. (2000). An Experimental Evaluation of Left and Right Oriented Screens for Web Questionnaires. Annual meeting of the American Association for Public Opinion Research, May 2000. Retrieved October 9, 2003 from: http://survey.sesrc.wsu.edu/dillman/papers/AAPORpaper00.pdf

Conn, C. (2002). Using the Internet for Surveying: techniques for designing, developing and delivering. Office of Academic Assessment, Northern Arizona University. Retrieved October 9, 2003 from: http://www4.nau.edu/assessment/main/research/responserates.htm

Conn, C. (2003). Message Design: four principles to consider. Office of Academic Assessment, Northern Arizona University. Retrieved October 9, 2003 from: http://www4.nau.edu/assessment/main/research/carp.htm

Dillman, D. A., and Christian, L. (2002). The Influences of Words, Symbols, Numbers and Graphics on Answers to Self-administered Questionnaires. Washington State University. Retrieved October 9, 2003 from: http://survey.sesrc.wsu.edu/dillman/papers/single_space_fig_table.pdf

Dillman, D. A., Tortora, R. D., and Bowker, D. (1998). Principles for Constructing Web Surveys. Washington State University. Retrieved October 9, 2003 from: http://survey.sesrc.wsu.edu/dillman/papers/websurveyppr.pdf

Dillon, L. (2001). Online surveys: lessons learned. Centres for IBM e-business Innovation. Retrieved October 9, 2003 from: http://www.the-cma.org/council/download-council/2001_ibm_lessons_learned.pdf

Frary, R. B. (2003). A Brief Guide to Questionnaire Development. Virginia Polytechnic Institute and State University. Retrieved October 9, 2003 from: http://www.ericae.net/ft/tamu/vpiques3.htm

Handwerk, P., Carson, C., and Blackwell, K. (2000). On-line vs. Paper-and-Pencil Surveying of Students: A case study. AIR 2000 Annual Forum paper. ERIC Document # RIEAAPR2001.

McNamara, C. (2003). Basics of Developing Questionnaires. Management Assistance Program for Nonprofits website. Retrieved October 9, 2003 from: http://www.mapnp.org/library/evaluatn/questnrs.htm

Moss, J., and Hendry, G. (2002). Use of electronic surveys in course evaluation. British Journal of Educational Technology, 33(5), 583 – 592. ERIC Document # CIJAAPR2003.

Pitkow, J. E., and Recker, M. M. (1995). Using the Web as a Survey Tool: Results from the Second WWW User Survey. Computer Networks and ISDN Systems, 27(6). Retrieved October 9, 2003 from: http://www.cc.gatech.edu/gvu/user_surveys/papers/survey_2_paper.html

Rose, L., and Gallup, A. (2002). Responsible Polling. Hoover Institution, Leland Stanford Junior University. Retrieved October 9, 2003 from: http://www.educationnext.org/20023/73.html

Rosenblatt, A. J. (1999). On-Line Polling: Methodological limitations and implications for electronic democracy. Harvard International Journal of Press/Politics, 4(2), 30 – 44.

Sax, L., Gilmartin, S., and Bryant, A. (2003). Assessing response rates and non-response bias in web and paper surveys. Research in Higher Education, 44(4), 409 – 432.

Solomon, D. J. (2001). Conducting Web-based Surveys. Office of Educational Research and Development, Washington, DC. ERIC Document # 458291.

Stinson, L. (1999). Designing a Questionnaire. American Statistical Association. Retrieved October 9, 2003 from: http://www.amstat.org/sections/srms/brochures/designquest.pdf

Witmer, D. F., Colman, R. W., and Katzman, S. L. (1999). From Paper-and-Pencil to Screen-and-Keyboard: Toward a methodology for survey research on the Internet. In S. Jones (Ed.), Doing Internet Research: Critical issues and methods for examining the Net. Thousand Oaks, CA: Sage.

Yun, G. W., and Trumbo, C. (2000). Comparative response to a survey executed by post, e-mail & web form. Journal of Computer-Mediated Communication, 6(1). Retrieved October 9, 2003 from: http://www.ascusc.org/jcmc/vol6/issue1/yun.html

The next report in the series discusses the installation of open source collaborative software.

N.B. Owing to the speed with which Web addresses are changed, the online references cited in this report may be outdated. They can be checked at the Athabasca University software evaluation site: cde.athabascau.ca/softeval/. Italicised product names in this report can be assumed to be registered trademarks.

JPB. Series Editor, Technical Notes

PID: http://hdl.handle.net/10515/sy56688x9

ISSN: 1492-3831