
January – 2011

Special Issue: Prior, Experiential and Informal Learning in the Age of Information and Communication Technologies

Evaluating Prior Learning Assessment Programs: A Suggested Framework

Nan L. Travers and Marnie T. Evans
SUNY/Empire State College, USA

Abstract

Over the past two decades, American institutions have been expected to include systematic program reviews to meet accrediting standards, whether those of independent or governmental review agencies. Program evaluation is critical for several reasons: it provides systematic ways to assess what needs improvement or changing, and it provides ways to validate practices for internal or external audiences (Mishra, 2007). Most program evaluation models focus on academic programs and do not fit the unique characteristics of prior learning assessment programs. This paper proposes an evaluative framework for prior learning assessment programs that takes into account the type of work done within such programs and uses program portfolios, similar to the way students are asked to document their work.

Introduction

Quality assurance in higher education (Lenn, 1992) remains a top priority as resources continue to diminish and demands for excellence increase. Over the past two decades, American institutions have been expected to include systematic program reviews to meet accrediting standards, whether set by independent agencies (e.g., the Middle States Association of Colleges and Schools, the Western Association of Schools and Colleges) or governmental review agencies (e.g., the Council for Higher Education in Israel, the Education Ministry of Russia). Program evaluation is critical for several reasons: it provides systematic ways to assess what needs improvement or changing, and it provides ways to validate practices, whether to internal or external audiences (Mishra, 2007).

Much has been written on quality assurance (e.g., Mishra, 2007), but little has focused on appropriate frameworks for evaluating prior learning assessment (PLA) programs (Van Kleef et al., 2007; Van Kleef, forthcoming 2011). The Council for Adult and Experiential Learning (CAEL) has published ten standards for prior learning assessment programs (Fiddler, Marienau, & Whitaker, 2006; Whitaker, 1989), which address established practices in prior learning assessment programs (see Table 1). In addition, over the last decade, CAEL (e.g., Flint & Associates, 1999; Glancey, 2007; Hart & Hickerson, 2008; Klein-Collins, 2006, 2010) has studied institutional practices, validating the ten standards. These standards articulate principles that guide program practices but do not address ways in which programs can evaluate their own effectiveness. Freed (2006) analyzed prior learning assessment processes in Texas public universities using the ten standards as a framework and concluded that, although there were similarities across the programs studied, many of the institutions could benefit from an evaluation of their current policies, procedures, and practices in terms of overall quality.

Hoffman, Travers, Evans, and Treadwell (2009) studied 34 prior learning assessment programs across higher education institutions in the United States and Canada and identified five critical factors that impact program structures. These critical factors are 1) institutional philosophy statements and policies supporting prior learning assessment practices; 2) institutional support, including financial and administrative support and faculty buy-in; 3) prior learning assessment program parameters that set the structures for how credit is assessed and applied; 4) faculty evaluator and content expert professional development; and 5) program feedback and evaluation processes. The results of this research indicated strong correlations between the factors and PLA program practices (ranging from r = .42, p < .05, to r = .84, p < .01), implying that best practices tend to be more prevalent when these five factors are in place. However, only 23% of the institutions in the study had any formal process for evaluating their program.

Programs in this study that did report more formal evaluative processes tended to rely on input from those involved in the PLA process (e.g., students, faculty). In addition, some reported that they collected student outcome results (e.g., completion and persistence rates). None of the programs indicated that they had a systematic framework through which they evaluated their program.

Accreditation processes (e.g., those of the Middle States Association of Colleges and Schools) provide institutions with an overarching framework and critical questions from which the institution can conduct a self-study to assess itself and its programs. For example, the New England Association of Schools and Colleges (2005) states that each of its standards examines a dimension of institutional quality and that by examining the ways in which an institution meets these standards “the Commission assesses and makes a determination about the effectiveness of the institution as a whole” (p. 1). Through specific guiding statements within each standard, the institution documents the ways in which it “has clearly defined purposes appropriate to an institution of higher learning; has assembled and organized those resources necessary to achieve its purposes; is achieving its purposes; and has the ability to continue to achieve its purposes” (p. 1).

The self-study process allows broad-based participation in examining current practice and identifying areas for improvement. The areas identified provide the backbone around which a program can concentrate its plans for next steps. This type of process can be an effective way to explore different aspects of a single program, not just the institution as a whole. To conduct a self-study of a specific program, however, an appropriate framework with standards that match the nuances of that program type needs to be developed.

Prior learning assessment programs have unique qualities compared to other academic programs. For example, program review practices, such as curriculum committees, are typically not in place, and institutions often do not require the periodic program review that would be expected of an academic department. Because prior learning assessment programs lack some of the common practices of academic program development, they require their own set of standards and protocols for program evaluation. The history of prior learning assessment is filled with examples of programs trying to prove their worth and effectiveness under the scrutiny of more traditionally bound critiques. Prior learning assessment programs need ways to demonstrate that they are effective and academically rigorous. By using evaluative structures similar to those of other programs, prior learning assessment programs can demonstrate their effectiveness through processes comparable to other program evaluations.

The Ten-by-Five Framework

The ten-by-five framework is a matrix designed to provide a systematic exploration of the ways in which a prior learning assessment program is successful and to identify ways in which the program might need improvement. Based on the CAEL ten standards and the Hoffman, Travers, Evans, and Treadwell (2009) five critical factors, the ten-by-five matrix provides a structure from which a program can conduct a self-study. The CAEL ten standards are used in this framework because most institutions build their prior learning assessment programs on them and because they are widely accepted by accreditation agencies. The five critical factors (Hoffman, Travers, Evans, & Treadwell, 2009) provide specific areas of focus for each standard within the self-study.

As a qualitative evaluative tool, the matrix is designed to provide a comprehensive framework from which to explore an institution’s policies and practices in detail and to determine areas for improvement. The matrix is structured with the CAEL ten standards on the vertical axis and the five critical factors across the horizontal axis, giving 50 areas to address in the study. Each cell is divided to provide the opportunity to report current practice, assess the outcome of that practice, and make suggestions for improvement (see Table 1).
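To make the structure of the matrix concrete, the sketch below represents it as a simple data structure: fifty cells indexed by standard and factor, each holding a description of current practice, an assessment of that practice, and suggested improvements. The sketch is illustrative Python only; the names Cell, matrix, and incomplete_cells, along with the shortened standard labels, are our own shorthand and are not part of the CAEL standards or the framework itself.

from dataclasses import dataclass

# The ten CAEL standards and the five critical factors label the two axes
# of the matrix; short placeholder labels stand in for the full standards.
STANDARDS = [f"Standard {i}" for i in range(1, 11)]
FACTORS = [
    "Institutional mission and commitment",
    "Institutional support (financial, administrative, and faculty)",
    "PLA program parameters",
    "PLA evaluator development",
    "Program feedback and evaluation",
]

@dataclass
class Cell:
    """One of the 50 cells: one standard examined through one critical factor."""
    current_practice: str = ""        # what the program currently does
    assessment: str = ""              # how well that practice meets the standard
    suggested_improvements: str = ""  # next steps identified by the self-study

# An empty ten-by-five matrix, keyed by (standard, factor).
matrix = {(standard, factor): Cell() for standard in STANDARDS for factor in FACTORS}

def incomplete_cells(m):
    """Return the cells that still lack a description of current practice."""
    return [key for key, cell in m.items() if not cell.current_practice]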

For example, CAEL standard one reads, “Credit or its equivalent should be awarded only for learning, and not for experience.” For each standard, the self-study explores how the standard is implemented through the five critical factors: 1) institutional mission and commitment; 2) institutional support (financial, administrative, and faculty); 3) PLA program parameters; 4) PLA evaluator development; and 5) program feedback and evaluation. To assess the first standard, an institution would look at its mission, philosophy, and policies. Do these institutional tenets support awarding credit for learning and not just for experience? How is this standard supported through financial structures, the administrative mindset, and faculty buy-in? What types of structures are in place within the PLA program to support the standard? How are evaluators trained to distinguish between experience and learning? In what ways are the outcomes being assessed?
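Continuing the illustrative sketch above, the guiding questions for standard one can be paired with the critical factor whose cell they would inform. The pairing follows the paragraph above and is our own illustration, not a prescribed part of the framework.

# Guiding questions for CAEL standard one, keyed by the critical factor
# whose cell in the matrix they would inform (illustrative pairing only).
STANDARD_ONE_QUESTIONS = {
    "Institutional mission and commitment":
        "Do the mission, philosophy, and policies support awarding credit for learning, not just experience?",
    "Institutional support (financial, administrative, and faculty)":
        "How is the standard supported through financial structures, the administrative mindset, and faculty buy-in?",
    "PLA program parameters":
        "What structures within the PLA program support the standard?",
    "PLA evaluator development":
        "How are evaluators trained to distinguish between experience and learning?",
    "Program feedback and evaluation":
        "In what ways are the outcomes being assessed?",
}

# Each question maps onto one column of the matrix defined earlier.
assert set(STANDARD_ONE_QUESTIONS) == set(FACTORS)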

Although completing all ten standards against the five critical factors is a daunting task, the analysis can reveal important areas on which a program could focus future planning. For example, at SUNY/Empire State College, the evaluator training does address the differences between experience and learning. On a deeper look, however, the materials used in this training could provide even more support for evaluators to understand effective ways to assess learning and to distinguish learning from experience. To ensure that credit is awarded for learning and not for experience per se, the college uses a faculty committee to review each evaluator’s assessment of the student’s learning and credit recommendation and to make the final credit award decision. In addition, administrative staff members review the final credit awards and evaluator recommendations. This tiered review system provides checks and balances for the decisions.

The matrix prompts for a description of practice, an assessment of that practice, and suggested next steps. For the example above, the institution could begin to explore ways to define and assess the effectiveness of these practices. Through the self-study process, these deeper explorations highlight possible areas for improvement and inform program planning.
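Continuing the sketch above, the SUNY/Empire State College example could be recorded in the cell for standard one and evaluator development as follows. The wording of the entries is hypothetical and is meant only to show how a completed cell might read.

# A hypothetical completed cell for standard one examined through evaluator
# development, paraphrasing the SUNY/Empire State College example above.
matrix[("Standard 1", "PLA evaluator development")] = Cell(
    current_practice=("Evaluator training addresses the difference between "
                      "experience and learning; a faculty committee and "
                      "administrative staff review credit recommendations "
                      "in a tiered process."),
    assessment=("Training covers the distinction, but its materials could give "
                "evaluators more support in assessing learning rather than "
                "experience."),
    suggested_improvements=("Strengthen training materials and define measures "
                            "of the effectiveness of the tiered review."),
)

print(len(incomplete_cells(matrix)))  # 49 cells remain to be completed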

Conclusion

Quality assurance is critical for any type of program to understand how it succeeds and how it must improve. The assessment process must be comprehensive enough to withstand questions, critiques, and scrutiny. Prior learning assessment programs need methods to assess their policies, practices, and outcomes in ways that align with other academic programming and institutional processes. The more others within an institution understand the program and its integrity, the more widely the program will be accepted and accessed across the institution.

The ten-by-five matrix self-study framework is proposed as an approach that an institution can use to conduct a comprehensive exploration of its policies, practices, and outcomes. The self-study nature of the framework provides a qualitative approach structured around the well-established CAEL principles and the recent Hoffman, Travers, Evans, and Treadwell (2009) study of five critical factors for PLA programs. These ten standards and five critical factors provide lenses through which an institution can ask tough questions about its program and determine ways to improve and plan for future directions.

Although the ten-by-five matrix is designed as a comprehensive framework for prior learning assessment programs, the framework itself has not yet been assessed. To date, it has received only initial, anecdotal application (e.g., at SUNY/Empire State College); it needs to be tested more systematically in institutional settings to determine 1) whether the ten-by-five matrix self-study framework is effective for evaluating a prior learning assessment program and 2) what an institution might learn about its program from using such a self-study framework. Future research is planned to study the effectiveness of the framework in assessing programs and improving practices, and institutions are currently being sought to participate in this study.

Note: The authors will be conducting a study on the ten-by-five matrix self-study framework. For more information on how your institution can participate in this study, please contact Nan Travers (nan.travers@esc.edu) or Marnie Evans (marnie.evans@esc.edu) at SUNY/Empire State College.

Table 1

References

Fiddler, M., Marienau, C., & Whitaker, U. (2006). Assessing learning: Standards, principles & procedures. Chicago: Council for Adult and Experiential Learning.

Freed, R. (2006). An investigation of prior learning assessment processes in Texas public universities offering nontraditional baccalaureate degrees. Denton, TX: UNT Digital Library. http://digital.library.unt.edu/ark:/67531/metadc5279/

Glancey, K. (2007). Statewide PLA policy. Chicago: Council for Adult and Experiential Learning.

Hart, D., & Hickerson, J. (2008). Prior learning portfolios: A representative collection. Chicago: Council for Adult and Experiential Learning.

Hoffman, T., Travers, N. L., Evans, M., & Treadwell, A. (2009, September). Researching critical factors impacting PLA programs: A multi-institutional study on best practices. CAEL Forum and News.

Klein-Collins, R. (2006). Prior learning assessment: Current policy and practice in the U.S. Chicago: Council for Adult and Experiential Learning.

Klein-Collins, R. (2010). Fueling the race to postsecondary success: A 48-institution study of prior learning assessment and adult student outcomes. Chicago: Council for Adult and Experiential Learning.

Lenn, M. P. (1992). Global trends in quality assurance in higher education. World Education News and Reviews, 5(2).

Mishra, S. (2007). Quality assurance in higher education: An introduction. Karnataka, India: National Assessment and Accreditation Council (NAAC) and Commonwealth of Learning (COL).

New England Association of Schools and Colleges (2005). Standards for accreditation. http://cihe.neasc.org/standards_policies/standards/standards_html_version/

Van Kleef, J. (forthcoming 2011). Quality in prior learning assessment and recognition: A background paper. Recognition of prior learning: An anthology. Viby, Denmark: National Knowledge Centre for Validation of Prior Learning.

Van Kleef, J., Amichand, S., Carkner, M., Ireland, M., Orynik, K., & Potter, J. (2007). Quality assurance in PLAR: Issues and strategies for postsecondary institutions. Ottawa: Canadian Council on Learning.

Whitaker, U. (1989). Assessing learning: Standards, principles and procedures. Chicago: Council for Adult and Experiential Learning.





