Volume 18, No. 4 - 1997


Public Health Agency of Canada (PHAC)

Development of an Instrument to Measure Cancer Screening Knowledge, Attitudes and Behaviours

Tricia Kindree, Fred D Ashbury, Vivek Goel, Isra Levy, Tammy Lipskie and Robin Futcher


Abstract

The development of a comprehensive survey instrument to measure the knowledge, attitudes and behaviours of the general public with regard to cancer screening was the goal of this project. A thorough review of the literature was undertaken, and existing survey instruments were identified and organized according to type of cancer screening behaviour being measured; question foci (predisposing, enabling and reinforcing factors); and survey implementation protocol. A comprehensive survey instrument was developed with the intention that, if feasible, the survey of cancer screening behaviours could be implemented nationally by telephone. Separate survey instruments were developed according to sex. Focus groups were held across Canada to determine the comprehensiveness of the survey items; ease of understanding and ability to respond; feasibility with respect to possible sensitivity of some of the question items; and general implementation issues (e.g. length, sex of interviewer). This paper reports on the qualitative portion of the project. Our study supports the use of qualitative methodology for instrument development and implementation.

Key words: Focus groups; neoplasms, epidemiology; neoplasms, prevention and control; qualitative research; screening; survey development


Introduction

Cancer control involves a range of activities: prevention, early detection and diagnosis, treatment, supportive care, palliative care, and research and evaluation. According to the Framework for Cancer Control of the National Cancer Institute of Canada (NCIC), the full range of these activities can be explained using five categories: fundamental research, intervention research, program delivery, surveillance and monitoring, all leading into knowledge synthesis and decision making.1 An ideal cancer control strategy will conduct surveillance of each of these activities. Some activities can be monitored through routinely available data sources. For example, cancer registries provide data on incidence and mortality, and hospitalization data provide information on type and patterns of care.

Data about knowledge, attitudes and behaviours (KAB) with respect to cancer screening are currently incomplete. Systematically collected information is important for the development, implementation and evaluation of cancer control initiatives. In Canada, a variety of surveys have assessed one or a few cancer screening behaviours. National omnibus surveys, such as the National Population Health Survey, the Health Promotion Survey or the Canada Health Survey, and some provincial surveys (e.g. the Ontario Health Survey) collect some information on screening behaviours. Such surveys provide a means to assess trends in screening test utilization and the social and demographic factors associated with this use. Additionally, special surveys have been used in Canada to evaluate cancer screening KAB specifically, but usually only for one cancer site. For example, studies have been conducted on mammography utilization (particularly in the context of breast screening programs),2,3 cervical screening4 and prostate screening.5

This study was part of a process to develop and evaluate the feasibility of a single survey instrument that could assess KAB on a range of cancer screening tests. Ideally, such an instrument would be administered in a national survey by telephone. The development of this cancer surveillance instrument was initiated by Health Canada in co-operation with the NCIC's Advisory Committee on Cancer Control. The project was undertaken because, although many surveys have been done at the national, provincial and community level, there is still no clear picture of what information is available, what information is missing and what information has been collected over time [see next article, "Workshop Report: Knowledge, Attitudes and Behaviours Concerning Cancer Screening in Canada"]. This paper describes the use of qualitative methods in survey design. Although not always standard practice in survey design, focus groups are increasingly used as an effective method in evaluating survey instruments. We will expand on this and also discuss this technique specifically with respect to developing surveys regarding sensitive health topics and surveillance of cancer screening KAB.

Methods

Instrument Development

The survey instrument was developed after a thorough review of the English-language literature. MEDLINE was searched using combinations of key words for neoplasms, screening, questionnaires and surveys. Reference lists of key studies were prepared and experts in the field were consulted for additional surveys. At the Laboratory Centre for Disease Control, an inventory of survey questions was created to map out the information that has been collected about cancer screening behaviours and the determinants of those behaviours. The inventory contains questions from national and provincial population surveys and commercial surveys maintained in a database of Canadian health surveys and from American surveys co-ordinated by the Centers for Disease Control and Prevention. Where possible, the original survey instruments were retrieved and incorporated into the discussion of which instruments or items would be considered for inclusion in the comprehensive survey instrument.

The survey items were organized using the PRECEDE/PROCEED framework for each cancer site6 as stipulated in the requirements for the search of relevant instruments directed by Health Canada. The rationale for using this model was that questions should be sought covering not only behaviours, but also the knowledge, attitudes and beliefs that predispose, enable or reinforce these behaviours. The framework emphasizes that health and health risks are determined by multiple factors and that, consequently, efforts to effect behavioural, environmental and social change must be multidimensional or multisectoral.

For behaviour change, the PRECEDE/PROCEED framework outlines three sets of determining factors: reinforcing factors, predisposing factors and enabling factors.6 Reinforcing factors are provided by the social context of family, society or health professionals and refer to rewards or feedback for the discontinuation or adoption of a behaviour. Predisposing factors comprise knowledge, attitudes, beliefs and values. Finally, enabling factors include skills (e.g. smoking cessation techniques) and resource needs and uses (e.g. office systems) that facilitate the adoption of new behaviours. The items included in the final survey instrument used in the focus groups were identified and organized according to these three factor sets.
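To make the organizing step concrete, the grouping of items by factor set can be sketched as a simple mapping. The item texts and their factor assignments below are hypothetical illustrations, not items from the actual instrument:

```python
# Hypothetical sketch: grouping survey items by PRECEDE/PROCEED factor set.
# Item texts and factor assignments are illustrative, not from the actual instrument.
from collections import defaultdict

items = [
    ("Do you know the recommended age to begin mammography?", "predisposing"),
    ("Does your workplace allow time off for screening appointments?", "enabling"),
    ("Has a family member encouraged you to be screened?", "reinforcing"),
    ("Do you believe screening can detect cancer early?", "predisposing"),
]

by_factor = defaultdict(list)
for text, factor in items:
    by_factor[factor].append(text)

for factor in ("predisposing", "enabling", "reinforcing"):
    print(f"{factor}: {len(by_factor[factor])} item(s)")
```

A mapping like this makes it easy to check that every factor set is covered for each cancer site before the item pool is finalized.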

A draft instrument was developed with the following components: demographic information, health status and health care utilization, cancer KAB and a series of site-specific inventories. For each cancer site, questions on the use of screening tests, reasons for having or not having the screening tests and the results of the most recent screening test were included. Cancer sites were selected based on burden of illness, availability of a screening test and expected level of screening test use in the community, based on data from other jurisdictions. Two slightly different sex-specific draft instruments were developed. The core sections of both surveys were identical. However, because some screening tests are sex-specific, a version for use among women included questions about screening for breast, cervical and ovarian cancer. The instrument to target men contained questions regarding testicular and prostate cancer screening. Additionally, both versions included questions about screening for colorectal, skin, lung and oral cancers.
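The assembly of the two drafts from a shared core plus site-specific modules can be sketched as follows. The section and module names are illustrative labels for the components described above, not the instrument's actual section titles:

```python
# Hypothetical sketch: assembling sex-specific instruments from a shared core
# plus site-specific modules, as described in the text. Names are illustrative.
CORE = ["demographics", "health status", "health care utilization", "cancer KAB"]
SHARED_SITES = ["colorectal", "skin", "lung", "oral"]

def build_instrument(sex: str) -> list[str]:
    """Return the ordered list of sections for one sex-specific draft."""
    if sex == "female":
        sites = ["breast", "cervical", "ovarian"] + SHARED_SITES
    elif sex == "male":
        sites = ["testicular", "prostate"] + SHARED_SITES
    else:
        raise ValueError("sex must be 'female' or 'male'")
    return CORE + sites

print(build_instrument("female"))
print(build_instrument("male"))
```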

Focus Group Design

Surveys are an essential tool of researchers interested in measuring knowledge, attitudes and behaviours in large populations and are widely used in many sociobehavioural research studies.7,8 There are several processes that can be used to develop survey instruments. Typically, investigators develop a draft questionnaire through a team brainstorming process and/or by compiling questions from other, previously developed survey instruments.8 This step is generally followed by a review from other experts to identify ambiguities with respect to wording, item selection and response options. Next, the questionnaire is revised based on the responses of a pre-test subsample of the intended survey population.9 Finally, the revised survey instrument is disseminated to the target audience.

It is now more common practice to conduct focus group interviews as well, either before a structured questionnaire is developed or prior to its implementation.9-13 Prior to pre-testing the survey instrument, focus groups can be used to structure and facilitate questionnaire design. This process can identify issues to be included in the questionnaire, formulate question categories or simply fine-tune the wording of particular questions.14-17 While a pre-test was part of the original design for the KAB cancer screening survey, it was agreed that a qualitative research step involving focus groups would be introduced to determine the feasibility of using the instrument and to confirm, elaborate upon or amend the item pool.

The research was administered in two phases: an exploratory phase and a consolidation phase. The first phase consisted of five exploratory focus group sessions designed to generate data on the understandability, comprehensiveness and feasibility of the survey instrument in addition to clarifying specific implementation issues. Three of these sessions were conducted in Ontario, one in British Columbia and one in Saskatchewan. During the second phase, two focus groups were assembled in Ontario in order to confirm and consolidate the findings generated during the first phase. These consolidation focus groups verified the results and interpretations generated from the first phase of focus groups.

Participants in all focus groups were selected from a convenience sample, rather than randomly, in order to reduce costs and save time. This convenience sample included some individuals known to the authors, but most were recruited through staff or volunteers associated with the NCIC or the moderators. Importantly, however, the groups consisted of adults over the age of 18 with a variety of income and education levels, occupations and ethnicities. Although qualitative research does not demand representative samples, it was recognized that the eventual survey would use a general population sample. Demographic characteristics of the focus group sample were therefore examined, and its wide range of income and education levels and ethnicities appeared to reflect the general Canadian population.

The number of participants in each group ranged from five to eight, and the interviews lasted approximately two hours. The group interviews were conducted in English because the survey instrument was not translated into French or other languages. The focus groups were sex-specific for several reasons: it simplified the focus group process since survey instruments were also sex-specific; the literature suggests that males and females may have sharp differences in opinion and behaviour associated with many health-related issues, including cancer screening;18,19 and the focus group research regarding sensitive issues suggests that involving both sexes in the same group may inhibit frank discussion.20 Table 1 outlines the number and sex of focus group participants by study phase.

Prior to the group discussion, each participant read and, if he/she agreed, signed a consent form explaining that all feedback would be strictly confidential. Participants were informed that, although the discussion would be tape-recorded (unless anyone objected), all responses would be analyzed as a group and combined with information provided by other participants. The consent form also explained that participation was voluntary and anyone could withdraw at any time during the session. Each focus group participant received a reimbursement of $15 for their involvement.

In keeping with focus group design,14,17 only a few structured questions were necessary as the purpose of the interview was to evaluate the feasibility of a pre-designed survey instrument. All participants were informed of the instrument's potential use in a cross-Canada telephone survey. They were also told the purpose of the study and that the two facilitators had not assisted in the development of the questionnaire.

Two facilitators were present during the focus group session: one had primary responsibility as moderator to lead the interview, stimulate discussion and respond to questions, while the other took notes and helped to moderate the discussion as required. The first half hour of the interview was allotted to written survey completion by the participants, including their noting any questions, concerns or comments on the questionnaire. We chose to introduce the survey instrument at the time of the focus group session to generate a "top-of-mind" response from participants, as would be expected if the survey was implemented as a telephone interview. The remaining 1 1/2 hours was devoted to assessing the survey instrument page by page to obtain reactions to the questions, including wording, content, interpretation and comfort levels. Additionally, we solicited suggestions to add or delete questions from the item pool. Particular attention was paid to those survey questions covering sensitive topic areas (e.g. colorectal cancer screening procedures). The session concluded with a brief discussion regarding participants' perceptions of the overall feasibility of implementing the instrument as a nation-wide telephone survey.

Participants returned their completed questionnaires to the moderator, and the research team reviewed any written comments on the survey. Responses were coded, entered and analyzed to determine quantitatively how the survey worked, and completed surveys were retained for later analysis to obtain profiles of the participants and participation rates. Demographic data from the first 12 questions of the survey were entered in a database that recorded marital status, country and province of birth, year of immigration (if applicable), ethnicity, language spoken, employment status, job title, income (in $10,000 intervals), sex and age.
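The coding of one demographic field, income in $10,000 intervals, can be sketched as below. The interval labels and the sample record are hypothetical illustrations of the database fields listed above, not the actual coding scheme:

```python
# Hypothetical sketch: coding a reported income into the $10,000 intervals
# used for the demographic database. Interval labels are illustrative.
def income_interval(income: float) -> str:
    """Map a dollar amount to a $10,000-wide interval label."""
    if income < 0:
        raise ValueError("income cannot be negative")
    lower = int(income // 10_000) * 10_000
    return f"${lower:,}-${lower + 9_999:,}"

# Illustrative record with a subset of the fields described in the text.
record = {
    "marital_status": "married",
    "province_of_birth": "Ontario",
    "income_interval": income_interval(34_500),  # "$30,000-$39,999"
    "sex": "F",
    "age": 42,
}
print(record["income_interval"])
```

Bucketing income at entry time, rather than storing exact amounts, is one simple way to keep a small demographic database less identifying.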

To ensure that the analysis of the qualitative data was systematic, the facilitators revisited their field notes from each focus group session in order to clarify and elaborate upon their findings.8 Two researchers independently analyzed the qualitative data and prepared independent interpretations. These were discussed by the two team members in order to confirm their interpretations and identify areas of disagreement.8,17 Because of the number of focus groups, it was possible to assess the reliability of the data by comparing statements within and across sessions.15 Additionally, the use of a consolidation phase of two focus groups confirmed the interpretations and preparation of results. Finally, the accuracy of the interpretive analysis was further enhanced as the researchers involved in the analysis were intimately involved with actual data collection, having served as the focus group facilitators.17
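The step of comparing two analysts' independent interpretations to find points of disagreement can be sketched as a simple cross-check. The statement identifiers and theme codes are hypothetical, invented only to illustrate the comparison:

```python
# Hypothetical sketch: flagging agreement and disagreement between two analysts'
# independent theme codings of the same statements. Codes are illustrative.
coder_a = {"s1": "wording", "s2": "comfort", "s3": "length", "s4": "comfort"}
coder_b = {"s1": "wording", "s2": "comfort", "s3": "comfort", "s4": "comfort"}

agreements = {s for s in coder_a if coder_a[s] == coder_b[s]}
disagreements = {s for s in coder_a if coder_a[s] != coder_b[s]}

print(f"agreement: {len(agreements)}/{len(coder_a)}")  # 3/4
print("to discuss:", sorted(disagreements))            # ['s3']
```

The flagged statements are then the agenda for the discussion between the two team members described above.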


TABLE 1

Number and sex of participants by study phase

             Phase I: Exploration   Phase II: Consolidation   Totals
             (5 focus groups)       (2 focus groups)          (7 focus groups)
  Females    n = 20                 n = 5                     n = 25
  Males      n = 14                 n = 6                     n = 20


Results

Participant Feedback

The focus group discussions and feedback from participants were very lively and extremely beneficial to the survey development process. Feedback was generated in three key areas.

  • Item pool: reactions to specific questions and response options in terms of wording, understandability, comprehensiveness
  • Feasibility and comfort levels: feasibility and comfort levels associated with responding to questions concerning particularly sensitive cancer screening methods
  • Survey implementation issues: identification of issues relating to implementation of the survey instrument over the telephone

Item pool

As stated earlier, question items were adopted or adapted slightly from existing survey instruments. From feedback on the understandability of question items, we discovered several wording problems that generated critical misunderstandings among the participants. When asked, "Describe your knowledge of the warning signs and symptoms of cancer," participants would state how much knowledge they had rather than listing the symptoms of which they were aware. For instance, several participants responded to this particular question by stating, "I really don't know much about these warning signs." Several other questions were also troublesome in terms of wording.

Additionally, the participants suggested modifications to the definitions of the cancer screening methods included in the questionnaire. For example, the survey described an ovarian ultrasound as follows: "Ultrasound uses sound waves to examine internal organs. Ultrasound to examine the ovaries can be done by examining the abdomen or with an internal (trans-vaginal) probe." We found that participants generally had great difficulty with these definitions and descriptions. Many suggested that this was because they lacked adequate familiarity with the different cancer screening techniques. Two specific comments from participants demonstrate the confusion this definition created.

Is that like the thing they use when you are pregnant, to view the fetus? Because I have had that type of ultrasound, but I really don't know if they looked at my ovaries or not.

What on earth is an "internal, trans-vaginal probe?" That means absolutely nothing to me. After reading this definition, I really couldn't tell you whether or not I have ever had an ovarian ultrasound.

The interviews revealed that, even with the increased publicity regarding particular methods of cancer screening and the potential health benefits derived from these tests, few persons had sufficient knowledge of the term "cancer screening." One question asked, "If recommendations about cancer screening were to be made by some official group or organization, which group would be most likely to influence your attitudes and choices?" This question and related questions were difficult for participants to answer because the majority did not understand generally what was meant by "cancer screening." Thus, the survey instrument should include a description or definition of cancer screening as a general term.

Feasibility and comfort levels

The focus group interviews also provided feedback regarding the feasibility of specific questions in addition to the survey instrument as a whole. For instance, the personal screening questions asked respondents to recall the number of specified screening tests they had experienced in their lifetime. For some tests, such as Papanicolaou (Pap) tests and clinical breast exams, female respondents felt challenged to provide a meaningful response. In fact, the majority of female participants were unable to complete these questions. One participant commented:

I have a big problem with this question PAP-4 that asks me how many Pap smears I've had in my lifetime. Are you kidding me? There is no way I can remember this. I suppose if I got a calculator out and added at least one for every year I've been having Paps, I might be able to figure it out—that is, if I can remember when I started having them! Anyway, this would take too much time—much more than I would be willing to spend on a survey.

Furthermore, many of these questions were perceived as too personal and invasive to be asked over the telephone by a stranger. For instance, the participants were asked to explain why they had experienced particular screening tests by stating the main reason as well as the specific medical condition or symptom(s) that led them to seek medical attention. The majority of participants felt uncomfortable with these questions and would be unwilling to disclose much of this personal information over the telephone, indicating that using the telephone to obtain this type of information was problematic. One woman explained it this way:

What you are asking is very personal. I really don't know who you are when you call me or where this information could end up. What if a health insurance company or my employer got this information? It might not matter, but if I had a health problem I might not want anyone to find out about it. I would definitely feel uncomfortable divulging this information to some stranger over the phone.

Participants suggested that they would prefer to complete this type of survey in written format. They all felt that this would ensure anonymity and confidentiality to a greater extent than a telephone survey would. One participant commented:

I feel very uncomfortable discussing this information over the telephone. You (the interviewer) will have access to my name and address. How do I know where the information will end up? I certainly don't feel confident that my answers will remain confidential. I would be more likely to complete this survey if it was given to me in written form—like in the mail or something. At least then I would feel that my identity would remain anonymous.

Survey implementation issues

Important information regarding survey implementation was also generated from the focus group interviews. The sex of the interviewer was an issue for female participants in particular, who would have preferred to be interviewed over the phone by a woman. The length of the questionnaire also produced much discussion. Every participant agreed that the survey was much too long (completion time ranged from 20 to 35 minutes). Participants indicated they would take part only in a phone interview lasting no more than 10 to 20 minutes. One participant commented:

My biggest concern with this survey is the length. There is absolutely no way I would spend any more than 10 minutes of my valuable time completing a telephone survey. I want to help the cancer cause and everything, but I am just too busy to be spending that amount of time on the phone to do a survey.

It is important to note, however, that completion time during the focus groups may have been longer than intended because the questionnaire was completed as a self-report rather than as a telephone interview. Although focus groups are not the appropriate tool for assessing length issues, participants in all groups consistently commented on the questionnaire's length and considered it to be an important issue for discussion.

The breadth of the survey instrument was also a problem. Participants did not feel that a cancer screening questionnaire to elicit KAB about a spectrum of cancer screening methods was feasible in a telephone survey format. One man expressed his opinion this way:

I'm having difficulty figuring out exactly what this survey is trying to get at. In the beginning, you are asking me for a ton of information about all sorts of things—it jumps all over the place. Next thing I know, you start asking me very personal questions about rectal exams and prostate tests. From my perspective, this is just too much. I would rather be asked a smaller number of questions about one or two specific issues.

Discussion

The focus group technique is extremely important for survey instrument development. In our study, we were able to generate useful feedback about item language/wording, unanticipated areas of concern and implementation issues. Furthermore, we found that focus groups can and should be used even in the early stages of questionnaire development, when the concepts and item pool are still to be identified. However, because the instrument was at a draft stage, it was very important to continually focus the group discussions.

While it is always important to direct focus groups to the purpose of the discussion, it was a particular challenge with this study. The moderator had to reiterate that the purpose was to examine the wording, comprehension and feasibility of the survey, and to prevent discussions from turning to issues of appearance. Clearly, participants were often distracted by how the draft was formatted and several people tended to move the discussion in this direction. Perhaps providing a copy of the questionnaire with a letter explaining the task prior to the focus group interview would have avoided some of the discussion of formatting and related details. Alternatively, a more fully developed questionnaire with completed formatting, introductions to sections, etc. might have reduced this minor problem.

Focus group interviews aid the questionnaire development process and also benefit the researcher personally.13 This qualitative technique enabled us to gather important information in a relatively short time span. Routine population-based surveys strictly about cancer screening KAB are relatively rare. We believe that a comprehensive cancer screening instrument, such as the one drafted and focus group-tested for this study, has not been attempted elsewhere. As discussed earlier, questions were drawn from a variety of other questionnaires and combined to form this omnibus screening survey. Thus, feasibility and implementation information specific to this type of comprehensive questionnaire was not available in the literature. Using focus groups provided a timely, inexpensive approach to obtaining this important information.

Although the questions had been used in past surveys, we found that there were wording and comprehension problems in the new, integrated version. Additionally, there was an overall lack of understanding about cancer screening in general. This information likely would not have been gleaned from a pre-test study alone. In a quantitative pre-test scenario, we might have simply found incomplete answers or non-responses. Instead we were able to gain insight into the misunderstandings and comprehension problems. The qualitative methodology gave us in-depth, contextual information that clarified the potential non-responses and allowed for wording adjustments to improve comprehension.

In addition to revealing the difficulties with specific questions, the focus groups informed us about the overall lack of understanding the general population has regarding specific cancer screening methods as well as cancer screening in general. This was a rather surprising and extremely important discovery that makes us question just what is being assessed by the items on cancer screening surveys such as the National Population Health Survey. Past KAB cancer screening surveys have focused on specific screening methods rather than a range of various methods; therefore, very little information is available regarding the feasibility of conducting a comprehensive survey.

The contextual information gained through the use of this qualitative study fleshed out the general lack of understanding about cancer screening and indicated that an omnibus survey might not be feasible. We found that asking many in-depth, probing questions about a wide range of cancer screening methods was too challenging for the focus group participants. The focus group participants were more comfortable with fewer questions about one or two specific screening tests for the same number of sites. This information was critical to decisions about implementing the survey and most likely would not have been generated through a pilot study. A pilot study would probably have indicated low response rates, but not the reasons for such low rates. Our qualitative study uncovered this important detail and redirected our efforts to a more feasible approach.

In addition, this qualitative technique was highly useful in determining how people respond to questions regarding sensitive health information. The group discussions provided us with people's initial reactions to highly personal questions. In this respect, the process would be very useful to any researcher working in the health field and interviewing the lay population regarding personal and sensitive health information. For instance, a large portion of this survey required respondents to divulge personal information regarding frequency and reasons for many screening tests (e.g. Pap tests, digital rectal exams). Quite often, those of us working in the health field become very comfortable discussing this information and almost desensitized to the very personal nature of such questions. It is easy to forget that many individuals do not feel as comfortable discussing personal health testing, particularly to a stranger and over the telephone. The detailed group discussions provided insight into these feelings in addition to generating a better understanding of how to approach sensitive topics with the general population.

Along the same lines, the group discussions provided insight into the language that the general population feels comfortable with when discussing personal health information. Certain phrases and wordings came up repeatedly among the participants, indicating the types of words and phrases that the general population is comfortable with. The depth of comfortable disclosure was also revealed by the focus groups. Most individuals were comfortable describing general reasons for having screening tests, but were reluctant to discuss specific symptoms or health problems. Once again, this type of in-depth detail would not be generated from a pilot study. Not only did we gain an idea of the levels of comfort the population has with screening questions, but we learned the reasons behind their discomfort, which will enable us to communicate better with the general public about this research topic.

It is important to note, however, that the use of focus groups in the process of questionnaire development should not be viewed as a substitute for the conventional pre-test. The pre-test is necessary to complement the focus groups because it provides a final check of the questionnaire in the actual interview setting.13 This is particularly useful for telephone-administered surveys, since having respondents complete a survey in written format does not provide the same experience as completing it over the telephone. The survey needs to be pre-tested over the telephone so that such things as normal phone line noise and the respondent's ability to complete questions while completely dependent on a verbal message are a part of the test situation.5

The limitations of focus group evaluations must be noted as well. Focus groups are not the only methodological alternative for questionnaire design. Compared with individual interviews, the focus group researcher has less control over the interview and the data generated.12,21 Thus, a great deal of the information may be unusable. Group influences must also be considered: one cannot be sure that the response a person gives in a group setting is the same as would be given in an individual interview.13 Individual interviews might also be preferred for complex topics because the interviewer can use probe and follow-up questions to explore issues that may not be brought up in a group setting.21 Additionally, because ethnographic interviews are less structured than focus groups, unanticipated issues are more likely to be discovered.14 Clearly, there are many qualitative techniques that are beneficial in the survey development process.

Conclusion

This study provides further evidence to support the use of focus group interviews as a valuable tool in the questionnaire development process. The technique is often used before the specific questions in a survey are constructed, but our study shows that it is also useful after questions have been generated. In fact, we gained valuable information about questions adopted or adapted from previous questionnaires. As a result, the research team was in a better position to make important decisions regarding the format, content and implementation of the KAB cancer screening survey. The groups also provided useful feedback on problems associated with language, wording and comfort levels, which improved the quality of the survey instrument. Moreover, the focus groups produced information about discussing highly sensitive and personal health topics that might otherwise have been overlooked by the researchers. We were able to learn the reasons behind non-responses and respondent discomfort, information critical to ensuring good communication about this research topic, and to do so within a short time span.

Furthermore, because comprehensive, population-based surveys measuring KAB in cancer screening are relatively rare, little is known about the feasibility of their implementation. Our study revealed valuable findings in this respect: an omnibus screening questionnaire would likely not be feasible. The qualitative methodology uncovered much in-depth, contextual information critical to understanding the views of the general population regarding cancer screening and questionnaires, something a pre-test may not have done.

Overall, our experience of using qualitative methodology in the form of focus group interviews to inform the survey development process was very positive. It helped us to conceptualize the important contribution that qualitative research can bring to quantitative methodologies. This technique can assist the quantitative investigator to ask useful questions in a useful way. Additionally, the technique could prove useful in generating introduction letters and informed consent information—both of which are integral parts of survey research. Thus, focus groups should not be left out of the instrument development process since both quantitative and qualitative methodologies can be used jointly to produce a highly effective research technique.

Acknowledgements

The authors wish to thank the Centre for Behavioural Research and Program Evaluation (CBRPE), National Cancer Institute of Canada, for the financial support to undertake this study, with funds provided by the Canadian Cancer Society. Both Ms Kindree and Dr Ashbury participated in this study while employed in the Centre as a Research Associate and Associate Director, respectively. Dr Goel is supported in part by a National Health Scholar Award from Health Canada. The survey was developed through a grant provided by the Laboratory Centre for Disease Control, Health Canada. Finally, the study team wishes to thank the women and men who participated in this study and offered their time to facilitate the development of the instrument.

References

    1. Advisory Committee on Cancer Control (National Cancer Institute of Canada). Bridging research to action: a framework and decision-making process for cancer control. Can Med Assoc J 1994;151(8):1141-6.

    2. De Grasse CE, O'Connor AM, Perrault DJ, Aitken SE, Joanisse S. Changes in women's breast cancer screening practices, knowledge and attitudes in Ottawa-Carleton since 1991. Can J Public Health 1996;87(5):333-8.

    3. Bryant H, Mah Z. Breast cancer screening attitudes and behaviours of rural and urban women. Prev Med 1992;21:405-18.

    4. Lightfoot N, Conlon M, White J, Holohon K, McChesney C, Beauvais J. Cervical cancer and cervical cancer screening: adolescents' knowledge, attitudes and awareness. Curr Oncol 1997;4(2):112-8.

    5. Mercer S, Goel V, Ashbury F, Iverson D, Levy I, Iscoe N. Canadian men's knowledge, attitudes and beliefs on prostate screening. Can J Public Health. In press 1997.

    6. Green LW, Kreuter MW. Health promotion planning. An educational and environmental approach. 2nd ed. Mountain View (CA): Mayfield Publishing Company, 1991.

    7. Babbie E. The practice of social research. Belmont (CA): Wadsworth, 1989.

    8. Ashbury FD, Gospodarowicz M, Kaegi E, O'Sullivan B. Focus group methodology in the development of a survey to measure physician use of cancer staging systems. Can J Oncol 1995;5(2):361-8.

    9. Dillman DA. Mail and telephone surveys. The Total Design Method. USA: John Wiley and Sons, 1978.

    10. De Vries H, Weijts W, Dijkstra M, Kok G. The utilization of qualitative and quantitative data for health education program planning, implementation and evaluation: a spiral approach. Health Educ Q 1992;19(1):101-5.

    11. Steckler A, McLeroy KR, Goodman RM, Bird ST, McCormick L. Toward integrating qualitative and quantitative methods: an introduction. Health Educ Q 1992;19(1):1-8.

    12. Morgan DL. Future directions for focus groups. In: Morgan DL, editor. Successful focus groups. Advancing the state of the art. Newbury Park (CA): Sage, 1993.

    13. Desvousges WH, Frey JH. Integrating focus groups and surveys: examples from environmental risk studies. J Official Statistics 1989;5(4):349-63.

    14. Bauman LJ, Greenberg Adair E. The use of ethnographic interviewing to inform questionnaire construction. Health Educ Q 1992;19(1):9-23.

    15. Wolff B, Knodel J, Sittitrai W. Focus groups and surveys as complementary research methods. A case example. In: Morgan DL, editor. Successful focus groups. Advancing the state of the art. Newbury Park (CA): Sage, 1993.

    16. Basch CE. Focus group interview: an underutilized research technique for improving theory and practice in health education. Health Educ Q 1992;19(1):411-48.

    17. Knodel J. The design and analysis of focus group studies. In: Morgan DL, editor. Successful focus groups. Advancing the state of the art. Newbury Park (CA): Sage, 1993.

    18. Shumaker SA, Hill DR. Gender differences in social support and physical health. Health Psychol 1991;10:102-11.

    19. Myers RE, Ross EA, Wolf TA, Balshem A, Jepson C, Millner L. Behavioral interventions to increase adherence in colorectal cancer screening. Med Care 1991;29:1039-50.

    20. Morgan DL. Focus groups as qualitative research. Newbury Park (CA): Sage, 1988.

    21. O'Brien K. Improving survey questionnaires through focus groups. In: Morgan DL, editor. Successful focus groups. Advancing the state of the art. Newbury Park (CA): Sage, 1993.  



Author References

Tricia Kindree, Health Management Co-ordinator, Magna International Inc. Health Centre, 455 Magna Drive, Aurora, Ontario  L4G 7A9, Tel: (905) 713-9950, Fax: (905) 713-9948, E-mail: tricia_kindree@magna.on.ca
Fred D Ashbury, Assistant Professor, Department of Public Health Sciences, University of Toronto; and Affiliate, Centre for Health Studies, York University; and Principal, CAPPE Consultants, Toronto (Ontario)
Vivek Goel, Institute for Clinical Evaluative Sciences; and Associate Professor, Department of Public Health Sciences, University of Toronto, Toronto, Ontario
Isra Levy, Adjunct Professor, University of Ottawa, Ottawa, Ontario
Tammy Lipskie, Cancer Bureau, Laboratory Centre for Disease Control, Health Canada, Ottawa, Ontario
Robin Futcher, Centre for Behavioural Research and Program Evaluation, National Cancer Institute of Canada, Toronto, Ontario

 
