Public Health Agency of Canada

Hepatitis C Prevention, Support and Research Program
Health Canada

Get the Facts: Mid-term evaluation report

III. The Program Evaluation

Purpose

The Hepatitis C Prevention, Support and Research Program is based on a number of key principles, including the use of evidence to inform programming investment decisions and the need to maintain accountability to the public. The evaluation of the Program conducted in 2002 was initiated as part of the commitment to these principles and of the requirement to report to the Treasury Board Secretariat on implementation, delivery issues and concerns, and progress toward desired outcomes.

The evaluation process was intended to meet three objectives:

  • Contribute to better decision-making on how to deliver the Program most effectively, and to provide strategies for continuous improvement;

  • Provide an assessment of the progress under the Program toward achievement of planned outcomes (in particular, the goal of supporting the primary clients); and

  • Provide objective information to assist with decisions on Program priorities and future activities.

Timing of Data Collection

Planning for the mid-term evaluation began in June 2001, when terms of reference for the study were developed in consultation with the regions and the Evaluation Advisory Committee. The Program contracted with Barrington Research Group to assist with the evaluation. The work was carried out between January and December 2002. This report is based on data from that work.

Methodology

A summary of the evaluation methodology is presented in the Data Collection Matrix (Appendix 1). Both qualitative and quantitative data were collected from a wide variety of sources (Appendix 2).

The design of the evaluation process was based on a logic model, which was developed as part of the Program evaluation framework. The framework was a collective effort involving both national and regional staff, members of the Program Advisory Group and other key stakeholders. For the purpose of the mid-term evaluation the logic model was simplified and reduced from five separate logic models to one integrated model. The Program Evaluation Logic Model (Figure 1) represents the means by which the Program is expected to achieve outcomes. The model identifies the main components of the Program and depicts links between the main activities, outputs, and immediate, intermediate and long-term outcomes.

To assess Program performance, the evaluation focused on four key areas: Scope of the Problem, Program Implementation, Achievement of Program Outcomes, and Program Lessons Learned.

Scope of the Problem

The intent in this key area was to review the magnitude of the hepatitis C problem in Canada. An attempt was made to compare the scope of hepatitis C with that of hepatitis B (HBV) and the human immunodeficiency virus/acquired immunodeficiency syndrome (HIV/AIDS), but many obstacles were encountered. Significant differences in how data are collected for the three diseases made comparison difficult. Expert advice from epidemiologists indicated that, in light of this situation, comparisons could produce misleading information or interpretations. Any attempt to draw similarities and differences between infectious diseases should be discussed in a multidisciplinary group setting and grounded in scientific evidence.

Interviews with the primary clients of the Program provided a human perspective on the disease, as part of providing a comprehensive and balanced picture of hepatitis C in Canada.

Program Implementation

The intent in this area was to focus expressly on program delivery and, in particular, to compare what was planned with what was actually achieved. Information was gathered from diverse sources: a review of Program documents; interviews with Program staff and other stakeholders; surveys on implementation from the community-based support projects and care and treatment support organizations; progress reports from community-based support projects; and case studies. This information was compared with goals identified in the activity and output components of the Evaluation Logic Model.

Achievement of Program Outcomes

The intent of work in this area was to determine whether the Program has made progress toward the achievement of outcomes identified in the Logic Model.

Program Lessons Learned

Work in this area was intended to identify issues that could be used by Program stakeholders and by Health Canada to improve Program delivery, reach and impact. This information was collected informally through use of all other methods and instruments developed for the evaluation.

Surveys and interviews were designed in consultation with Health Canada staff, health experts and researchers. Participants were identified by Health Canada staff and the evaluators, and in the case of the Health Expert Survey and the Researcher Implementation and Outcome Achievement Survey, peer reviewers were consulted in the process of participant selection.

Figure 1. Hepatitis C Prevention, Support and Research Program Evaluation Logic Model

*Support refers to hepatitis C care and treatment support and community-based support.

The interviews and surveys completed are listed below. (For more detail, please see Data Collection Matrix, Appendix 1.)

  • Community Case Study - Primary Client Interview
  • Community Case Study - Secondary Client Interview
  • Community Case Study - Board and Advisory Group Interview
  • Community Case Study - Staff/Volunteer Interview
  • Other Stakeholder Interviews
  • Researcher Implementation and Outcome Achievement Survey
  • Community-based Support Program Implementation and Outcome Achievement Survey
  • National Staff and Regional Staff and Advisory Group/Committee Interview
  • Health Expert Survey

As well, a review of pertinent documents was conducted, and a scan of published literature originating in Canada was completed (see List of Technical Reports, Appendix 9).

Case studies were developed, based on the methodology in the work of Robert K. Yin (Yin 1989) and Eleanor Chelimsky (United States General Accounting Office, 1990). Interviews for this segment of the evaluation were conducted with community projects across the country: YouthCo AIDS Society in Vancouver, BC; Lethbridge HIV Connection in Lethbridge, AB; Kamamakus Theatre Troupe in Prince Albert, SK; Winnipeg Hepatitis C Resource Centre in Winnipeg, MB; Kingston Street Health Centre in Kingston, ON; Hepatitis C Foundation of Quebec in Montreal, QC; and Hepatitis C Moncton Inc. in Moncton, NB.

Analysis and Presentation of Findings

Data from the various sources used were analyzed as follows.

Quantitative Data

Prior to analysis, quantitative data were entered into SPSS version 10.0 using a double-entry validation technique.
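The general idea of double-entry validation is that the same records are keyed in twice by independent operators and the two passes are compared field by field, with any mismatch flagged for correction against the source forms. The sketch below illustrates that comparison step in Python; the record IDs, field names, and values are hypothetical, and this is not the Program's actual SPSS procedure.

```python
# Illustrative sketch of double-entry validation: two independently keyed
# datasets are compared field by field and mismatches are flagged.
# Record IDs, field names, and values are hypothetical.

def find_discrepancies(entry_a, entry_b):
    """Compare two independently entered datasets (dicts keyed by record ID)
    and return a list of (record_id, field, value_a, value_b) mismatches."""
    discrepancies = []
    for record_id in sorted(set(entry_a) | set(entry_b)):
        rec_a = entry_a.get(record_id, {})
        rec_b = entry_b.get(record_id, {})
        for field in sorted(set(rec_a) | set(rec_b)):
            if rec_a.get(field) != rec_b.get(field):
                discrepancies.append(
                    (record_id, field, rec_a.get(field), rec_b.get(field))
                )
    return discrepancies

# First and second data-entry passes for two survey responses.
first_pass  = {"R001": {"q1": 4, "q2": "yes"}, "R002": {"q1": 2, "q2": "no"}}
second_pass = {"R001": {"q1": 4, "q2": "yes"}, "R002": {"q1": 3, "q2": "no"}}

for record_id, field, a, b in find_discrepancies(first_pass, second_pass):
    print(f"{record_id}/{field}: first entry={a!r}, second entry={b!r}")
```

Each flagged record would then be checked against the original paper instrument before the dataset is accepted for analysis.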

Qualitative Data from Case Studies

All qualitative data were entered into NVivo for analysis; initial codes were determined in consultation with the Data Collection Matrix. Coding the data against this shared conceptualization strengthens the validity of the findings, as the coded data can be compared directly with data obtained from other sources and through other methods. In addition, case study reports were reviewed by the research team for accuracy; this collaborative approach provided opportunities to discuss interpretation of the findings, reinforcing their validity. Case study reports were also sent to Health Canada regional program staff for review and to confirm the reliability and validity of the findings.
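Coding against a predetermined frame means each interview excerpt is tagged with one or more agreed themes, so counts of a given code can be compared across sources and methods. The sketch below illustrates that tallying step with a crude keyword matcher; in practice the coding in NVivo is done by human analysts, and the code frame and excerpts here are hypothetical.

```python
# Illustrative sketch of tallying qualitative excerpts against an initial
# code frame. Keyword matching stands in for human coding; the codes,
# keywords, and excerpts are hypothetical.
from collections import Counter

code_frame = {
    "access_to_care": ["clinic", "treatment", "waiting"],
    "peer_support":   ["support group", "volunteer", "peer"],
}

excerpts = [
    "The support group helped me cope after diagnosis.",
    "Waiting lists at the clinic made treatment hard to get.",
    "A volunteer walked me through the paperwork.",
]

# Count how many excerpts touch each code (each excerpt counts a code once).
tally = Counter()
for text in excerpts:
    lowered = text.lower()
    for code, keywords in code_frame.items():
        if any(kw in lowered for kw in keywords):
            tally[code] += 1

for code, count in sorted(tally.items()):
    print(f"{code}: {count} excerpt(s)")
```

Because every data source is coded against the same frame, the counts for a code from the case studies can be set directly beside survey or interview results on the same theme, which is the triangulation the paragraph above describes.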

The consultant provided findings in a two-part report. The first part documented findings related to Program implementation and progress toward achievement of Program outcomes, followed by comment and conclusions. The second part consisted of seven case study reports, providing detailed information of activities in seven locales.

Strengths and Limitations of the Findings

Strengths

The fact that multiple sources of data were used reduces the potential for bias and provides greater opportunity for a balanced picture of the Program. Evaluators further took steps to reduce bias by balancing stakeholder information with Program documentation and input from primary clients wherever possible. This strategy has been documented as a means to enhance validity (Silverman, Ricci and Gunter, 1990).

Another strength is the high response rates achieved in the surveys and the high degree of participation in interviews with Program staff and stakeholders.

Limitations

Across all methods of data collection, a purposeful sampling strategy was employed (Patton, 1990). The complex nature of the Program, timelines and available resources presented many challenges for the application of quasi-experimental techniques. As well, because the population is unknown in most cases (the total number of health experts, primary clients, etc.), a random probability sampling strategy was not feasible. The intent was to collect data from the sources most likely to be informative.

Another limitation of the evaluation is that, at this point, the best that can be accomplished in a single examination is a general understanding of Program outcomes and outcome achievement. It is not appropriate to expect a definitive statement about the success of the Program or about what work remains to be done; it is a given that challenges lie ahead for the Program.

It is acknowledged that a complete program impact analysis has yet to take place and will be important to future evaluations of the Program.

Key Points

  • The evaluation process is linked to the Logic Model.

  • The evaluation is a one-time look at the Program, which is still in development; a complete impact analysis is part of the overall strategy of the Hepatitis C Program.

  • The evaluation focused on four areas: scope of the problem, program implementation, achievement of outcomes, and lessons learned.

  • The methodology incorporated data collection from a variety of sources.
