Language of Evaluation: How PLA Evaluators Write about Student Learning


January – 2011

Special Issue: Prior, Experiential and Informal Learning in the Age of Information and Communication Technologies


Nan L. Travers, Bernard Smith, Leslie Ellis, Tom Brady, Liza Feldman, Kameyla Hakim, Bhuwan Onta, Maria Panayotou, Laurie Seamans, and Amanda Treadwell
SUNY/Empire State College, USA

Abstract

Very few studies (e.g., Arnold, 1998; Joosten-ten Brinke et al., 2009) have examined the ways in which evaluators assess students’ prior learning. This investigation explored the ways that evaluators described students’ prior learning in final assessment reports at a single, multiple-location institution. The analysis identified four themes: audience, voice, presentation of the learning, and evaluation language. Within each theme, further sub-themes are defined. These results are significant for training evaluators in how to discuss student learning and for institutions to consider in relation to the purpose behind the evaluations. Further research and implications are discussed.

Keywords: Prior learning assessment; assessing learning

Introduction

Procedures for assessing prior learning vary across institutions, from reviews of the portfolio alone to interviews with the student or a combination of both, conducted by a single evaluator, a pair of evaluators, or a team (Hoffman, Travers, Evans, & Treadwell, 2009). Research (e.g., Klein-Collins, 2006, 2010; Travers, in press) has examined the ways programs are organized and the institutional and student outcomes of program participation. Very few studies (e.g., Arnold, 1998; Joosten-ten Brinke, Sluijsmans, & Jochems, 2009) have examined evaluator practices.

In a recent study, Hoffman et al. (2009) examined prior learning assessment (PLA) program practices across 32 American and Canadian institutions. Among other results, the research found that few programs had formal evaluator training, similar to other research findings (e.g., Lee-Story, 2001). Of the institutions in the Hoffman et al. study that reported having training programs, very few indicated that the training addressed ways in which to write about or report on students’ learning. Joosten-ten Brinke et al. (2009) examined the perceptions of students, tutors (faculty), and assessors on PLA practices and had the assessors rate their skills for performing assessments. This study concluded that “there should be more training for tutors and assessors in the required knowledge and skills for assessment of prior learning (APL), such as supporting portfolio development, giving follow-up advice, writing motivational reports, and generally understanding the whole APL procedure” (p. 72). In addition, Joosten-ten Brinke et al. found that both tutors and assessors rated their report-writing skills extremely low.

A review of the literature (Travers, in press) found no studies that examined the ways in which evaluators write their assessment reports. Although not all institutions require a written report (Hoffman et al., 2009), understanding how practiced evaluators describe student learning through written reports would give insight into how to help faculty and evaluators better understand the assessment process.

Background

This study took place at the State University of New York (SUNY) Empire State College. PLA is available to all students pursuing an associate or bachelor’s degree. The college has seven regional locations within New York and a Center for Distance Learning, which includes International Programs, serving about 20,000 students nationally and internationally. Each undergraduate student engages in an individualized degree-planning process, which includes the ability to request advanced-standing credit through prior learning assessment. At any given time, over 2,000 students are engaged in the prior learning assessment process, with over 1,000 active evaluators available to assess students’ learning.

The evaluation process includes the evaluator reviewing a learning portfolio (which includes a learning description and supporting materials) and conducting an interview with the student. At SUNY Empire State College, the interview is considered an integral part of the assessment process because it allows the evaluator to gain a better understanding of the depth and breadth of the student’s learning, which cannot be acquired through assessing the portfolio alone. The interview provides an opportunity to engage the student in a dialogue about his or her learning and to probe for information that cannot be gleaned from a written text. Over the 40 years of using portfolios and interviews, SUNY Empire State College has found that the interview significantly augments the evaluator’s assessment of a given student’s learning.

The evaluator is required to write an assessment report that includes a description of the student’s learning and justification for the recommended credit amounts, level, and specific designations (e.g., general education, liberal arts, science). Guidelines based on policy are provided to evaluators to structure their written evaluations. This report becomes part of the student’s degree-plan portfolio, which is reviewed and approved in a two-step process: first by a faculty committee (on behalf of the Center the student attends) and then by a centralized office (on behalf of the College). At this point, the credit is awarded to the student and is part of his or her official transcript.

The Study

Methodology

Seventy evaluator reports were collected across the College through a stratified random selection process conducted blind to the research team. The reports were gathered from officially approved degree programs, which meant that each had undergone the approval process. All identifiable information was removed from each report. The 10-member research team underwent a norming process prior to reviewing and coding the reports.

Initially, a rubric was developed based on the college’s evaluator report policy to use as a framework to review the reports. However, after the first reading of the reports, the team found the rubric inadequate for capturing what was being said in the reports. The team devised a textual analysis approach to determine what the evaluators were saying about student learning. This investigation was not interested in judging the quality of the evaluator reports but focused on what could be learned about the language of evaluation.

Missing from the analysis were any in-depth interviews with or observations of evaluators to better understand their thinking processes as they evaluated the students’ learning. This analysis stayed focused on what could be learned about the different approaches, language constructions, and voices that evaluators chose in order to present their judgments of student learning.

Results

Based on the team’s textual analysis, four overarching themes were identified to classify how evaluators approached describing student learning: audience, voice, presentation of the learning, and evaluation language. The following sections elaborate on each of these themes.

Audience

Audience is not a point usually discussed in writing about the ways in which to evaluate learning; however, the evaluators appeared to write their reports for different audiences, and the perceived audience seemed to shape their writing styles. Three different types of audiences were identified in the reports: students, peer reviewers (academic and professional), and administrators.

Students.

Some evaluators seemed to direct their responses to the student in that they not only described the student’s learning but also identified additional learning that could take place or gave a quick, pithy “lesson.” Often, for this type of audience, the evaluator seemed to have something important to say about the topic overall and made some type of statement to sum up the critical elements of learning in this topic. For example, one evaluator states, “Data modeling is not just representing data in tables; it is the abstraction to data structures that promote data use in an application.” Another evaluator writes, “He successfully demonstrated advanced-level knowledge of learning components that can be translated into the following motivational concepts, principles, and theories: motivators, extrinsic and intrinsic awards, ‘demotivation’ factors, two-factor theory, self-determination theory (specifically, competence feedback), and Hawthorne effect.” Both of these examples “teach” the reader something about the topic, above and beyond describing the student’s learning.

Some comments seemed to be written to help the student affirm his or her learning or to indicate where this learning could go next. For example, one evaluator states, “His explanation of how to handle these issues promptly, objectively, and professionally is important in avoiding harassment and discrimination litigation.” This type of comment appears to have a lesson for the student built into the learning description; the evaluator, in the process of assessing prior learning, is nevertheless teaching the student something about the student’s actions and the implications of his or her knowledge. Although not all reports in this category may have been directed specifically to the student being evaluated, the evaluator presented a lesson to an audience as though that audience needed to know more about the topic.

Peer reviewers.

Some of the reports seemed to address peer reviewers, either those in the profession or other faculty. To understand some of the terminology or expressions used by the evaluator, the reader would need a level of sophistication and familiarity with the terminology or topic. Even if the reader did not know the topic, the style of these reports was such that the reader knew the evaluator was an expert in the field and could speak to the topic with familiarity. In many ways, this gave the reader confidence that the writer was a content expert and also understood what learning had taken place.

The style of writing, therefore, took on a voice of authority around both the topic and the learning that had taken place. For example, one evaluator states, “We discussed factors, such as order cost and lead time,” and another evaluator writes, “She also developed an awareness of some different types of poetry—that of self-expression, cathartic expression…” In these cases, topic-specific vocabulary anchored the report in the culture of the field. In other cases, the ways in which the evaluator wrote embedded the learning within the culture of the field. For example, a statement such as, “She independently employed her knowledge in color and design…and the results spoke eloquently to the viewer,” would appear to have meaning within the art field but might not have the same meaning in another field of study.

Many evaluators listed the student’s knowledge in terms of course objectives. Although this aspect of the report varied in format (e.g., list, behavioral objectives), it provided the reader with an inventory that could be used to identify what was and was not present in the student’s learning. These approaches seemed to equate learning with a pre-defined set of knowledge, skills, and competencies (a convergent approach to knowledge), and the report justified that the student’s learning matched this set.

Administrators.

Other reports seemed to address administrators. These reports documented the learning as if to satisfy policy and provide evidence. The writing style tended to be more formal and direct to the requirements. For example, statements such as, “He has presented certificates from these seminars, prepared an intelligent, accurate, well-written, seven-page paper…” and “I reviewed [his] essay in detail and then conducted an in-depth phone interview with him to evaluate his level of learning. Then, I compared this learning to college-level courses and learning in Total Quality Management (TQM) and related topics,” satisfy the reader that a systematic process took place. Other statements, such as “The learning is advanced and liberal as [she] was able to articulate often sophisticated and complex theoretical concepts” (we are presenting the evaluators’ accounts as presented to the institution without making claims about their veracity), provided the reader with justifications for the credit recommendations using educational constructs.

Voice

The style of voice with which the evaluators wrote can be divided into five subthemes: professional authority, evaluator as observer/reporter, evaluator as editor, student voice, and outside authority.

Professional authority.

The professional-authority voice often simply stated that the learning had taken place, and by virtue of the experience or expertise of the evaluator, the learning was confirmed. Reports with this voice tended to lack any description of the student’s experiences and the learning that occurred. When the evaluator used his or her authority to justify the learning, the reader never learned much about the student’s knowledge.

For example, comments such as, “It is my recommendation that [he] be granted four advanced-level credits—liberal arts—for ‘Gangs in Society.’ I base this recommendation on my previous experience and study”; “From my experience and expertise in the Spanish language and in foreign-language teaching, I have determined that [he] has demonstrated competency in the Spanish language”; and “I have come to realize that even with a PhD and years of experience [sic], the family court system is incredibly difficult to negotiate. Therefore, I believe [she] deserves to gain credit for her advocacy of this system” use the voice of authority to declare the learning.

All evaluators must use their professional authority to make judgments and recommendations for credit. Some evaluators, however, depended on an authoritative voice as justification for stating that the learning occurred. In subtle ways, most evaluators made statements that were based on their personal judgments. For example, comments such as, “it is my belief that she already possesses an expert knowledge” and “I believe [she] deserves to gain credit for her…learning experience” use an authoritative voice, but usually these evaluation reports included further explanation as to how the judgment was reached.

Evaluator as observer/reporter.

Many reports were written from an observer’s or reporter’s perspective. This style of writing provided the sense that if the reader were present during the evaluation process, he or she would also observe the same phenomena and would reach the same conclusions. Often evaluators began with statements such as, “He knows how to use layers to organize his drawings, and knows how to combine objects into blocks”; “[She] gave examples of …”; “[He] demonstrated an excellent knowledge of foundational concepts such as…”; and “[She] also pointed to law enforcement approaches.”

Although some of these reports simply listed the student’s knowledge, many described the learning in more detail. For example, one evaluator writes,

From her internship to assisting directors…, she learned the process of script analysis, developing a costume and plot, research, and provisions of wardrobe within a set budget. We discussed her sources for research, for which she developed a bibliography for me of numerous magazines, books, websites, and picture collections. We also discussed her viewing of existing films for ideas of a specific period, especially within the last 30 years of film. She demonstrated a firm grasp of available sources, and has learned how to use them in her work as an assistant and as a designer…

The evaluator continues in this account to describe more of the interview and the documentation that the student provided. The detailing gives the reader a sense of what the student provided, of the interview that took place, and of the justification as to why the student has the knowledge.

Evaluator as editor.

Many evaluators wrote what was observed and then added their own interpretation of the learning or the topic. These reports went beyond the actual student learning and either provided insight into the student’s abilities or furthered the reader’s understanding of the evaluator’s knowledge. For example, statements such as, “He ultimately knows better how to talk to the officials without disrespect, anger, and in a non-threatening manner that only one sensitive to officiating would understand”; “The student displayed wonderful listening and analytical reasoning skills, which would serve her well in any undergraduate or graduate college-level program”; “He is articulate and is able to convey his knowledge into action: hence the sign of a good teacher”; and “[She] is dedicated and devoted to continued learning and training” give the reader editorial comments on the student’s learning beyond what the evaluator observed. In other words, the evaluator offers the reader his or her understanding of the meaning or significance of the student’s learning and not simply an account of what the student apparently knows.

Some of the editorial comments addressed overall knowledge of the topic. For example, comments such as, “I would expect a person familiar with inventory concepts to have some familiarity with newer approaches, such as…” and “Typically, college instruction in this subject area infers [sic] that human service workers must be able to deliver basic service activities,” provide a context of the field through the evaluator’s perspective. In these cases, the context provided a comparison against which the student’s learning was evaluated. In other cases, statements such as, “These concerns implicate religious and secular perspectives, as well as ethical and moral considerations about the power to put someone to death” were more editorial than contextual and did not directly address the student’s learning.

Student voice.

Some evaluators used the student’s voice as a way to document learning. For example, statements such as, “[She] spoke of this as being an important part of what she learned and something she continued to use long after the workshop was over” and “[She] explains that transition services are paramount in human services delivery to ensure the long-term success…” put the student in the center of the statements. In each example, the student is the protagonist and is attributed with direct action (e.g., spoke, mentions, demonstrated). Through these accounts, the reader is offered a version of unmediated access to what the student knows.

In many ways, this voicing is similar to that of the evaluator as reporter: the reader has a sense that if she or he were present, the same things would be observed. The difference between these two themes, however, is that in the former, the evaluator uses his or her own observations as the subject; in the student voice evaluations, the student’s own words are offered to the reader.

Outside authority.

In some cases, evaluators used an outside authority on which to base their recommendations. Often, these evaluations would use established course curricula or outcomes. For example, statements such as, “I also found [his] mastery of benefits administration very comparable to upper-level credits given at Cornell University’s Industrial Labor Relations School certification program for HR Benefits Administrators”; “Typically, college instruction in this subject area…”; and “It is typically expected that upon completion of coursework in ‘Human Services Delivery Systems’ the student should be able to…” move the justification to an outside authority. The evaluators used what has already been established in academe as the voice of authority and, in many ways, relinquished their own authority for that of others.

Presentation of the Learning

The learning itself seemed to influence the style in which evaluators described the learning. Three main themes emerged from examining the ways evaluators wrote about the learning: learning distinguished from experience, learning within different contexts, and learning within different fields.

Learning distinguished from experience.

A major premise of assessing prior learning is that credit is given for learning, not for experience per se (e.g., Fiddler, Marienau, & Whitaker, 2006). Experience in itself does not give rise to learning; it is the ways in which one reflects upon the results of experience and applies these insights that provide the foundation for learning (Keeton, Sheckley, & Griggs, 2002). Examination of the ways evaluators presented the student’s learning as being distinctly separate from the student’s experiences revealed that this concept is much more complex than simply addressing the learning without the experience.

In some cases, the learning was not well defined as being separate from the student’s experience. For example, statements such as, “His vast managerial, teaching, and hands-on experiences make him an expert in the area of Management Information Systems and Project Management backed up by PMP Certification” rely on the experiences as a way to account for the learning. These types of evaluations do not make a clear distinction between the experience and the knowledge gained, and the learning is implied within the experiences.

In other cases, however, the learning was fused within the experience, and it seemed as though the learning could not be described without the experience. Often, in these cases, the learning was more procedural in nature. For example, statements such as, “[She] demonstrated an excellent knowledge of foundational concepts such as value, composition, form, and line study” and “[She] has experimented with pacing and line length, varying this from poem to poem, in an attempt to strengthen the existing images and their impact on the reader” describe an experiential process that is integral to the learning. In other words, the learning and experience evolve together; one cannot be described without the other.

This raises the possibility that some types of learning are so interrelated with the experience that the learning cannot be described without also describing the experience. For example, in performing and studio arts areas, an experiential portfolio would be expected to demonstrate the learning. In language acquisition, the ability to demonstrate the use of the language would be expected as part of the evaluation process. In each of these examples, the language of evaluation uses experiential terminology (e.g., demonstrated, showed) that captures the relationship of the learning within the context of the experience.

Different knowledge domains (Keeton, Sheckley, & Griggs, 2002; Travers, in press) may provide different ways to describe the learning. For example, declarative knowledge would require an assessment of the vocabulary, theories, and principles of a topic (e.g., “she was able to identify and discuss the service activity”), while procedural knowledge would require a different type of assessment (e.g., “[she showed her] process of designing a technical manual”).

Learning within different contexts.

The description of the learning was observed to have different styles based on the context of where or how the learning was acquired. Descriptions differed when learning was acquired through performance, on-the-job experiences, or personal experiences.

Performance-based learning descriptions used processes or methods to describe the learning. For example, statements such as, “[He] describes a process of learning the movements, themselves, individually and in increasingly complex combinations with others” and “His teaching style is modeled on… a thorough demonstration of the basic movements, which the students then show” describe the learning in terms used in performance. In the arts, phrases such as, “good-eye for layout”; “solid approach to writing across large audiences”; “her awareness of rhythm and pacing and how these are important to develop an ear for and use in a way that enhances the imagery and words of the poem”; and “[her] color choices spoke eloquently” are examples of how the writing style seemed specific to the performance-based field. In other words, the language and culture of the performance-based field shaped the ways in which the evaluators wrote about the students’ learning.

Written evaluations about the students’ learning gained from on-the-job experiences tended to focus on skill development. Often, evaluators would list the tasks that a student had done on the job and the skills that he or she had acquired. One example of this type of listing is as follows: “[She] understands common processes followed in project planning, which includes the need for staff training, the need for providing support, planning schedules, assessing staff needs, preparing budgets, marketing and promotion, and facilities planning.” Others used the work environment to couch the learning; for example, “[He] accurately describes the importance of ensuring his company’s human assets are in the correct ‘fit’ (job) to maximize efficiency and profitability”; “Many of her experiences with [her company] dictated the process for evaluation”; and “The student’s job assignments provide opportunities for selecting and using instructional techniques.”

The third type of contextual style was related to when a student acquired the learning from personal experiences. In these evaluations, the human element came through in the evaluator’s account. For example, statements such as, “[She] helps women find meaning in their trauma and how it impacts their lives as a whole”; “[She] explains that transition services are paramount in human service delivery to ensure the long-term success for the student”; “He ultimately knows better how to talk to the officials without disrespect, anger, and in a non-threatening manner that only one sensitive to officiating would understand”; and “The student displayed wonderful listening and analytical reasoning skills… She possesses an expert knowledge of the sociological and historical backgrounds of traditional Italian family life and the role that the Italian family still plays in modern society” are again contextual to the type of learning acquired by the student.

The accounts appeared to be different based on the context within which the student acquired the learning. Learning based on performance, on-the-job experiences, and personal experiences all seemed to have their own narrative style based on the ways that the evaluators described how the student acquired the learning. Qualities such as vocabulary choice, types of detail, imagery, and other aspects of tone and texture shifted in the ways that the evaluators wrote, based on the context of the students’ learning. The reader of these reports could understand by the writing style the context of where and how the students’ learning was acquired.

Learning within different fields.

In a similar manner, the evaluators’ writing styles changed based on the field to which the students’ learning belonged. Word choices were specific to the field, and the content of the learning and certain presentation styles signaled that the learning belonged to the field. In other words, the style in which the evaluators wrote their recommendations matched the field in which the learning belonged.

For example, in the field of poetry, one would expect ideas such as, “her awareness of rhythm and pacing… enhances the imagery and words of the poem.” In the legal field, one evaluator states, “[She] demonstrated an advanced and keen understanding of the tensions underlying many legal debates today. These tensions include…” This example continues by describing many different tensions that the student understood from varying perspectives and how “morality and ethics interrelate with legal perspective on various social issues.” Again, word choices and phrases are very much part of the field. These stylistic choices situate the learning within a particular community.

Evaluation Language

The actual vocabulary and phraseology chosen by evaluators also varied in style, particularly in the ways that evaluators used glossing terminology. Many evaluators used terminology that carries a greater meaning for a specific group but does not convey that meaning to a more general audience; these often were trade-specific words or generalizations understood within a limited audience. Naturally, in writing about learning, there are terminologies that work like code words or clichés or that provide shorthand for what is being described. Some of the language used by evaluators was found to gloss the student learning to some degree. These glosses fell into four general areas: cultural, field-specific, educational, and institutional.

Cultural glossing.

Some of the evaluators used words that emanated from their own culture or the culture of their field. These words have more meaning within the specific culture than in describing the student’s unique learning. For example, statements such as, “[She] has learned how to make her poems more clear and precise” and “[She] is able to discuss the concept of intentionality and appropriately analyze a case situation in relation to the concept of intentionality” would seem to have meaning within a particular community of learners but may not have meaning across all communities. Glossing in these cases helps to provide context and culture to the learning without having to go into lengthy explanations.

If one knows poetry, not only would one understand what it means to make a poem “clear and precise,” one would also know the struggle and the practice that it takes. Illustrating the culture of poetry writing further, this evaluator also states, “In looking at her poems, I can see where she has applied some restraint in her use of metaphor… I can also see where she has experimented with pacing and line length.” The evaluator’s use of certain terminology allows the reader to understand much more of what has gone on for the student, giving insight into the culture of poetry writing.

Field-specific glossing.

Some evaluators used words that clearly have meaning specific to the field but that may not be widely understood or even known by anyone outside the field. For another reader from the field, these words would provide an indication of learning that is present without the need for much explanation. For example, statements and phrases such as, “[He] has acquired significant learning in the area of Systems Analysis”; “[his] mastery of the regulatory requirements…”; “[his] depth of learning in management concepts…”; “[she] is knowledgeable about the cycles of sexual assault”; and “[he] is very proficient in computer programs such as Photoshop, Illustrator, and Quark” are all dependent on field-specific words to carry the meaning behind what has been learned.

Educational glossing.

Many evaluators used educational terms that set the evaluation in an educational culture. These words are common to the educational environment and are generally accepted as having meaning but are bound by the educational culture. For example, words such as “understands” and “adequately demonstrated knowledge” were frequently used to describe a student’s learning. These words gloss over what the student knows; what does “understand” mean? However, these words bring comfort to an educational audience and are generally accepted as a way to describe learning.

Glossing often occurred when evaluators summarized the learning. Examples such as, “[he] prepared an intelligent, accurate, well-written seven-page paper” and “his grasp of ethics as related to training is satisfactory” provide a quick overall assessment but do not provide much information regarding the underlying learning. Within the culture of education, words such as “intelligent” or “satisfactory” are part of the vocabulary, so the gloss succeeds by situating the description in terms comfortable to the community.

Institutional glossing.

Some evaluators used glossing terms that were specific to the institution. As within any institution, particular language is used that would not make sense outside of that environment. For example, when an evaluator states that he “reviewed the student’s degree plan and found no redundancy,” the concepts are specific to SUNY Empire State College and have little or no meaning outside of the college.

Discussion

The results of this study have several significant applications. Whether or not the institutional practice is to have an evaluator write a narrative evaluation, there are evaluator positions and styles that impact the way in which the evaluation is conducted. Even in a system where no written report is required, there is a private narration by the evaluator as he or she tries to determine the learning that a student has acquired. In addition, there are frequently discussions between the staff responsible for the assessment of prior learning and the evaluators regarding the student’s learning that is being assessed. The evaluator will adopt positions and styles to make sense of what has been observed and to draw conclusions regarding the student’s learning. Being clear about these positions can make a difference in the outcome of the assessment.

The audience is important. Is the evaluator directing the assessment to the student, colleagues, or administration? In this study, evaluator writing styles were often directed to a single audience, but at times the audience was unclear or the evaluator mixed the audiences and shifted how particular parts of the report were written. In these cases, the evaluator seemed sometimes to be writing to the student, sometimes to colleagues, and sometimes to the administration. For example, when the outcome of the evaluation was directed toward the student as a reader, there tended to be a developmental quality to the recommendation, teaching the reader about the topic. When it was directed to colleagues (within the field or academe), the evaluator provided justification appropriate to peers. Often, the justification was augmented with “evidence” of the learning. When the report was directed to an administrative audience, policy often structured the outcome. For example, policy requires that an interview takes place, so these reports indicated that the interview did take place. When evaluators wrote to more than one audience, there was a mix of styles in the report.

Ultimately, the purpose of the report, rather than the evaluator, should define the audience. If the report’s purpose is to be part of the student’s learning process, then the student is the audience, and the report would be expected to read differently than if its purpose were to justify the learning to colleagues or to satisfy policy. An institution needs to be clear on the purpose of the evaluation, how the evaluation is to be used, and, thus, who the audience is. This type of clarity would help evaluators in the way they approach evaluating the learning.

The approach that evaluators used to voice recommendations gave insight into the evaluators’ viewpoints and influenced the reader’s understanding of the student’s learning. Voicing was more than a stylistic form of writing—it provided evaluators with a way to pose viewpoints and voice judgments. These positions ranged from an inner authority to an outer authority. The inner authority came in three forms: professional authority, evaluator as reporter, and evaluator as editor. The outer authority took the form of the voice of the student or the voice of academe. In some cases, evaluators blended these different forms, which made it difficult to differentiate among the student’s learning, the evaluator’s knowledge, and what the field would consider acceptable learning.

One significant difference in the way evaluators voiced the students’ learning was in the use of observational versus editorial voicing styles. In some reports, when the evaluator gave pure observations or used the student’s voice to justify the learning, overall statements about the learning were lacking. The observations or student statements were used as evidence of the learning and left to stand on their own, and the reader did not really gain an understanding of the learning per se. On the other hand, some of these reports were extremely effective because the reader had a sense of the learning that took place, as though the reader were a firsthand witness alongside the evaluator.

In contrast, some evaluators had a tendency to comment on the learning, adding specific viewpoints or biases into the report. Some made comments about the student as a learner, while others provided contextual or content information to augment the learning being described. Sometimes the distinction between what was being said about the student’s learning and what the evaluator was providing about his or her own knowledge was hard to determine. The blending of what the student knew, what the evaluator knew, and/or what was expected in the field provided the reader with the least clarity about what the student knew. In some cases, however, the editorializing provided insight into the point of view from which the evaluator was basing the evaluation, and, therefore, the reader understood how the evaluator had come to his or her decisions regarding the learning. The most effective reports seemed to occur when the evaluators combined the observational and editorial styles, using observations and then making concluding comments about those observations and the student’s learning, as determined by the observations.

When evaluators voiced their professional authority as justification of the students’ learning (e.g., “from my years of experience, I determine…”), there was often no justification beyond the evaluator’s own professional credentials and experiences. Some reports of this kind were nonetheless effective because the reader knew that the evaluator had credentials and was a content expert; the reports that were effective usually included additional information about the student’s learning to support the professional-authority justification.

Using the voice of an outside authority (e.g., justifications from college courses) as the criterion by which to judge students’ learning provided another type of professional authority. In these cases, the evaluator used “evidence” from the community of other faculty or field experts to add weight to the judgments being made. This was effective in cases where the learning was being equated to well-known or established criteria. It was limiting, however, for learning that did not fit neatly into a pre-defined scope, such as a course analogue.

The purpose of the evaluation, again, seems integral to the evaluators’ styles of voicing. For example, if an institution’s purpose behind the evaluation is only to have evaluators give credit recommendations, and if the audience is administrative, then justification from the professional-authority voice or the academic voice might be the most appropriate as there would be no need to go into any depth around the student’s learning. However, if the purpose of the report is to describe and document the learning, then the ways in which an evaluator documents the learning and voices the justifications can play a critical role in conveying to the reader the kinds of learning and knowledge the student has acquired. In these cases, documenting the evaluator’s professional authority may only be beneficial in assuring that a content expert conducted the evaluation. In other words, the institution’s purpose behind the evaluation can shape the ways in which evaluators document and describe students’ learning.

In addition, there may be different types of learning that require different approaches to describe them. For example, knowledge that has developed declarative structures may be very different from knowledge that has developed procedural structures. If an evaluator expects to hear facts, theories, and other declarative outcomes, and the student describes procedures, the response could be interpreted either as a lack of knowledge or as learning that is more experiential in nature. Understanding how to describe learning in its different forms and structures is critical to determining what the student has learned. The field or the culture within which the learning is embedded also gives structure to the learning in ways that may not normally be captured by standard assessment perspectives.

Conclusion

The language styles used by evaluators in the reports documenting students’ learning played a critical role in translating student learning into institutional expectations. The style of writing (i.e., audience, voicing, and terminology glossing) that an evaluator used to document and describe students’ learning encapsulated the learning within established disciplinary and institutional mores and cultures. The styles in which the evaluators wrote the evaluation reports made a difference in how the reader interpreted the students’ learning and its equivalency to college-level learning.

Dynamics of culture permeated all the themes. The culture of the field, the culture of the institution and academia, and the culture of the environment within which the student acquired the learning seemed to impact the styles of writing. Word choices for presenting the learning were often culturally based, and clear differences in word choices were observed across different topics or learning domains (e.g., strategic, procedural).

The act of discussing learning is difficult. In higher education, a tacit belief tends to prevail that faculty know how to describe knowledge. The act of prior learning assessment is to make the learning within one individual explicit to another via an evaluator. This mediating, translating function is a unique role in that the evaluator has to assess learning that took place outside of the classroom and equate it to learning that could have taken place inside the institution. The evaluator’s role is unlike that of a classroom-based faculty member who conducts assessment, in that the classroom-based faculty member is able to witness students’ learning within a structured context designed by the same individual assessing the learning. Special attention needs to be paid to assisting prior learning assessment evaluators with recognizing students’ learning and with translating that learning into structures that a third person can understand.

This research study has raised more questions than answers. In what ways can a mediating agent explain learning embodied within one individual to another without severely imposing bias, self, and cultural beliefs? What are best practices for voicing observations and describing the learning? How can language best be used to describe and document the learning in order to translate it into educational currency (e.g., credits)? What is actually meant by the college-level learning against which students’ learning is being assessed, and how does that translate to what an evaluator can observe through student portfolios and/or interviews?

Further research needs to be conducted to gain a better understanding of the issues raised by this exploration. Two major directions planned for this work in the future are as follows: 1) to look at how, within SUNY Empire State College, we can use the results of this study to better guide evaluators in writing recommendations; and 2) to explore these themes further and validate them by examining additional reports and interviewing evaluators. In addition, further research needs to be done to gain a better understanding of the processes in which evaluators engage while evaluating students’ prior learning.

Acknowledgements

The authors wish to extend a warm appreciation to Nan Travers, Bernard Smith, and Leslie Ellis for compiling this paper and a special thanks to Dareth McKenna for her administrative assistance with the project. In addition, the authors wish to recognize the administration at SUNY Empire State College for supporting this research effort.

References

Arnold, T. M. (1998). Portfolio-based prior learning assessment: An exploration of how faculty evaluate learning (Doctoral dissertation). The American University, District of Columbia. Dissertations & Theses: The Humanities and Social Sciences Collection (Publication No. AAT 9917494).

Fiddler, M., Marienau, C., & Whitaker, U. (2006). Assessing learning: Standards, principles, and procedures. Chicago: Council for Adult and Experiential Learning.

Hoffmann, T., Travers, N. L., Evans, M., & Treadwell, A. (2009). Researching critical factors impacting PLA programs: A multi-institutional study on best practices. CAEL Forum and News, September 2009.

Joosten-ten Brinke, D., Sluijsmans, D. M. A., & Jochems, W. M. G. (2009, March). Quality of assessment of prior learning (APL) in university programmes: Perceptions of candidates, tutors, and assessors. Studies in Continuing Education, 31(1), 61–76.

Keeton, M. T., Sheckley, B. G., & Griggs, J. K. (2002). Effectiveness and efficiencies in higher education for adults: A guide for fostering learning. Dubuque, IA: Kendall/Hunt.

Klein-Collins, R. (2006). Prior learning assessment: Current policy and practice in the U.S. Chicago: Council for Adult and Experiential Learning.

Klein-Collins, R. (2010). Fueling the race to postsecondary success: A 48-institution study of prior learning assessment and adult student outcomes. Chicago: Council for Adult and Experiential Learning.

Lee-Story, J. H. (2001). Crediting experiential learning: An examination of perceptions and practices in postsecondary hospitality management and general management programs (Doctoral dissertation). Florida Atlantic University, Florida. Dissertations & Theses: The Humanities and Social Sciences Collection (Publication No. AAT3013060).

Travers, N. L. (in press). United States of America: PLA research in colleges and universities. In J. Harris, C. Wihak, & M. Breier (Eds.), Researching prior learning. Leicester, United Kingdom: National Institute for Adult Continuing Education (NIACE).



PID: http://hdl.handle.net/10515/sy5513v85



ISSN: 1492-3831