
Chapter 13

Supporting Asynchronous Discussions Among Online Learners

Joram Ngwenya, David Annand, & Eric Wang
Athabasca University

Introduction

Many universities now offer Internet-based education. Most research studies have determined that the Web is an effective teaching medium, with student learning outcomes at least equivalent to those of classroom-based students (see, for example, Gerhing, 1994; Golberg, 1997; McCollum, 1997).

Web-based courses generally reflect many features of the traditional academy: they have specified start and end dates and limited entry points, and they consist of cohorts of students who proceed through each course at about the same pace. This cohort model lends itself to a group-based, online learning experience. Commercial online learning management systems (LMSs) usually assume an underlying cohort-based learning model and attempt to replicate many desirable features and activities derived from classroom-based learning contexts. This strategy, in turn, enables increased interaction and knowledge construction among learners. Not surprisingly, most research about online education is informed by these cohort-based learning experiences (see, for example, Arbaugh, 2001; Burke, 2001; McEwen, 2001; Rourke & Anderson, 2002).

However, there is also a long tradition of open education that addresses the needs of learners who for one reason or another do not fit the classic mould of higher education. In large open and distance education institutions, such as many of the “mega universities” described by Daniel (1997), or in smaller variants, like Athabasca University in Canada, the primary objective of the learning model is to provide a greater degree of flexibility for students. In the more flexible of these institutions, learners may enrol in courses throughout the year (continuous enrolment) and proceed through these courses at their own pace. Assignments and examinations can often be completed at any time and in any order. The relatively unpaced nature of this “individualized” model appeals to learners who have significant other responsibilities, such as full-time jobs and families, or who, for some reason, require flexible alternatives to acquire course credits to transfer into other external programs.

These two, somewhat divergent views of higher education appear to have resulted in differing conceptions of the relative importance of mediated, two-way communication in the distance education process, as discussed in the following section.

The Interaction Debate


Holmberg (1983) conceptualized distance learning as essentially an individual act of internalization. Thus, he saw instructional design that supported learner autonomy and independence as important for learners at a distance. He asserted that distance education institutions needed to provide open access and unpaced courses, and should not require group learning activities (pp. 64-65).

Keegan (1990) characterized effective distance education processes as “reintegrating” the teaching and learning acts; that is, replicating as many of the attributes of face-to-face communication as possible, yet maintaining learner autonomy. Interpersonal communication at a distance did not need to be limited to more direct forms of instructor-student interaction, such as telephone conversations or teleconferencing, but could also be recreated through appropriate design and use of printed instructional materials. In this instance, reintegration occurred when printed learning materials were easily understood, anticipated potential learner problems, provided carefully constructed course objectives and content, and contained ample practice questions and related feedback. Like Holmberg (1983), Keegan considered the more important characteristics of adult distance education to be learner independence and personal responsibility for educational outcomes and processes.

However, not all writers agreed that learner autonomy and independence remained the chief hallmarks of adult learning after the advent of various forms of online communication. Garrison (1988) expressed the need for a balance between the teacher-centered relationships found in face-to-face education (and, to a lesser extent, in traditional distance education) and the tendency to stress learner-centered relationships in the emerging electronic learning environment. The ability of instructors and learners to communicate openly and collaboratively, and to negotiate the appropriate, delicate balance between the needs, values, and perspectives of both parties, was a particularly strong and promising feature of interactive electronic communication technologies (pp. 125-126).

Garrison (1989) argued that dialogue and debate were essential for learning, because these forms of two-way communication allowed learners to negotiate and structure personally meaningful knowledge. Teaching necessarily transmitted societal knowledge, but a rounded learning experience needed to foster critical analysis processes in order to bring personal perspectives to bear and create new understanding for both the teacher and student (pp. 7, 19).

Holmberg (1990) took exception to these assertions. He argued that the vast majority of distance education continued to be based on a correspondence model, characterized by student independence, separation in space and time, and the use of printed material as the primary means of instruction. This model could be supported with various means of two-way communication, depending in part on financial considerations, and in part on instructor and student preferences. Mediated communication had always been a primary characteristic of distance education, he maintained, but merely supplemented the traditional correspondence-based model of distance education. As a result, the nature of distance education may have evolved, but it had not been revolutionized with the introduction of online communication technologies.

Garrison and Shale (1990) responded that Holmberg's conception of distance education was deficient, because it relied on enabling technologies to define the phenomenon. Correspondence study, they argued, had arisen as a result of technological innovations—the mail and telephone systems. These systems were being replaced by newer, more effective, mediated two-way electronic communication systems. A more integrative, technologically independent view of distance education, one that focused on the essential educational feature of learning, was needed. Garrison and Shale defined this feature to be sustained, two-way communication between instructor and learner.

Various writers, including Jonassen, Davidson, Collins, Campbell, and Bannan-Haag (1995), developed this conception of online learning even further. To them, sustained two-way asynchronous communication not only enables greater instructor-learner communication but, most importantly, enables the social construction of knowledge among learners at a distance. This constructive effect occurs when online learning environments require, among other things, “negotiation of meaning and reflection on what has been learned” (p. 21).

This relatively distinct divide between theorists appears to be essentially unresolved at present. One view (represented by both Holmberg and Keegan) conceptualizes the process of distance education as involving primarily flexible, unpaced learning that facilitates learner independence and autonomy. Others (such as Garrison) conceive the distance education process as now being transformed into one of sustained two-way communication, where significant, frequent interaction between instructor and learner and among learners is the essential, enabling learning feature. It is noteworthy that, in practice, this dichotomy appears to manifest itself in the degree of pacing incorporated into course and program structures. This factor is discussed further in the next section.

Technology and Types of Interactions
in Online Learning Environments


The means of interaction among two or more people depends on their relative locations in time and space, as illustrated in Table 13-1.

Table 13-1. Types of interactions in learning environments.

Using this schema, and by definition, distance learning can only take place in quadrants 2 and 4. It is in these areas that teaching and learning activities occur in different places, requiring some form of technological mediation. Technologies that facilitate synchronous online learning (e.g., desktop video conferencing, chat, and audioconferencing) fall into quadrant 2 (different place, same time). Asynchronous technologies (e.g., computer conferencing, e-mail) fall into quadrant 4 (different place, different time).

However, this representation does not take into account the relatively paced or unpaced nature of online courses. “Place” is extraneous to the analysis if only forms of communication that must be used among physically dispersed individuals are considered. As a result, this variable can be replaced with “Pace,” to give us the more descriptive schema of online learning shown in Table 13-2.

Table 13-2. Types of interactions in online learning environments.

For instance, synchronous forms of technology-mediated communication, such as desktop video conferencing, generally occur in quadrant 1 (same pace, same time). Asynchronous forms of communication, such as computer conferencing, occur in quadrant 3 (same pace, different time).

In both paced and unpaced online learning environments, various types of interpersonal, mediated communications are possible: student to student, student to class, instructor to class, and student to instructor. However, Table 13-3 illustrates that, in practice, there are relatively few forms of electronic technology that are both supportable by the learning institution and suitable for the unpaced online learning environment.

Table 13-3. Technologies that facilitate interactions in online learning environments.

Tables 13-2 and 13-3 illustrate that technologies exist to facilitate all forms of synchronous and asynchronous interaction in paced, online learning environments—the type of interaction envisioned by Garrison (1989), Garrison and Shale (1990), and Jonassen et al. (1995). However, facilitating interaction among learners in an unpaced online setting is still problematic, despite rapid advances in technology and online learning management systems, because most online learning systems have evolved from classroom-based educational models and group-based support systems. Although online technologies can be adapted to facilitate some forms of interaction—for instance, e-mail to allow learner-learner communication—organizational and systems problems engendered by the rolling nature of student registrations may make these practices difficult to implement.

Presumably, other means, such as the use of carefully structured instructional material (whether online or printed) must be used at present to provide meaningful unpaced learning experiences to students at a distance. These strategies are very similar to those promoted by Holmberg (1983, 1990) and Keegan (1990). The failure to distinguish among relative degrees of pacing in distance education courses or programs, and the organizational and learning system differences that result, may account for varying conceptualizations of the appropriate means to achieve “interaction” in the distance education literature.

As a result of this analysis, it also seems clear that unpaced online learning must address some important practical challenges. The balance of this chapter describes the development of an online learning system prototype designed to facilitate learner-instructor interaction, and a limited form of learner-learner interaction, in an unpaced online environment. The system appears to provide learners with maximum flexibility while addressing an important practical gap in unpaced online learning: the means to communicate effectively with peers and instructors, and thereby to facilitate group-based learning. However, many of the features of this system can also be applied to paced online learning environments, thereby addressing some needs of learners and instructors that are common across all online learning models.

The ASKS System


Collaboration among students in an unpaced online learning environment is difficult because, by definition, they do not belong to a cohort, and their courses are designed to be self-paced. As a result, even two students who begin a course on the same day can quickly be at different points within it. Interactions among learners cannot be easily facilitated, monitored, or evaluated. Furthermore, increased interaction in unpaced online environments can significantly increase costs to the institution (Annand, 1999).

The ASKS (asynchronous knowledge sharing) prototype is designed to overcome these difficulties. It uses discussion boards with capabilities characteristic of most group decision support systems (Nunamaker, Dennis, Valacich, Vogel, & George, 1991). Learners and instructors access the system directly via the Web. The main student screen is divided into three areas, as shown in Figure 13-1: knowledge sharing topics in the left-hand pane, the main menu in the top part of the right-hand pane, and the topic headings just below the main menu.

Figure 13-1. Student main screen.

Each knowledge sharing topic has four parts: a closed or open file folder icon just to the left of the topic, the topic itself, the number of entries created by a student for the related topic (shown in parentheses), and a trash can icon showing the number of entries that have been deleted. Each knowledge sharing topic is described briefly, in a phrase similar to the subject line in an e-mail message.

When the file folder icon for an applicable topic is opened, the individual student's entries related to the topic are displayed in the right-hand pane. In this case, six entries have been made by the student related to the topic, “System Advantages.” Each response to the knowledge sharing topic is accompanied by the date an entry was entered or last modified, the size (in kilobytes) of the response, a short description of the entry, and a link to a more detailed explanation.
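To make the structure of the student workspace concrete, the following minimal sketch models a knowledge sharing topic and its entries in Python. The class and field names are hypothetical; the chapter does not describe the underlying ASKS schema.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class Entry:
    """One student response to a knowledge sharing topic."""
    subject: str                      # short description, like an e-mail subject line
    explanation: str = "No explanation, point self-explanatory."
    last_modified: date = field(default_factory=date.today)
    size_kb: float = 0.0              # size of the response in kilobytes
    deleted: bool = False             # deleted entries are counted beside the trash can icon

@dataclass
class KnowledgeSharingTopic:
    """A discussion topic shown in the left-hand pane of the student screen."""
    title: str
    entries: List[Entry] = field(default_factory=list)

    def active_count(self) -> int:
        return sum(not e.deleted for e in self.entries)

    def deleted_count(self) -> int:
        return sum(e.deleted for e in self.entries)
```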

Topic submissions can be created by clicking the “Compose” button. This action brings up the editing screen shown in Figure 13-2.

Figure 13-2. Topic editing screen.

This screen has the look and feel of most e-mail systems. A subject line provides a brief description of the response. The “Explanation” area is similar to the main body of an e-mail. Students may compose their detailed responses to the given topic here, if they wish. If no explanation is entered, the system default reports “No explanation, point self-explanatory.”

Students cannot view other students' responses to a knowledge topic until they have made and submitted their own. Once entries are submitted, they become accessible to the instructor for review and unavailable to the originating student for further editing. Other students cannot view these submissions until the instructor has reviewed them.

The last item in the right-hand pane of the student main screen (Figure 13-1) is the “Instructor's Comments” section. If the instructor has evaluated an entry, a “new mail” icon and the date of the evaluation appear in this section of the originating student's screen. Entries that have been rejected by the instructor appear with a red “X” icon. Other possible instructor comments are “Not sent to instructor yet,” for entries that have not yet been submitted for evaluation, and “Awaiting evaluation,” for entries that have been submitted but not reviewed by the instructor.
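The entry states described above amount to a small review workflow. A sketch of that workflow is shown below; the state names are hypothetical, and only the behaviour described in the chapter is assumed.

```python
from enum import Enum, auto

class EntryStatus(Enum):
    """Hypothetical workflow states for a student entry, following the chapter's description."""
    DRAFT = auto()                 # "Not sent to instructor yet"
    AWAITING_EVALUATION = auto()   # submitted; locked for further editing by the student
    ACCEPTED = auto()              # evaluated; date shown with a "new mail" icon
    REJECTED = auto()              # shown with a red "X" icon

def submit(status: EntryStatus) -> EntryStatus:
    """Submitting a draft locks it and queues it for instructor review."""
    if status is not EntryStatus.DRAFT:
        raise ValueError("Only draft entries can be submitted.")
    return EntryStatus.AWAITING_EVALUATION
```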

Clicking the date in the “Instructor's Comments” column opens the screen shown in Figure 13-3.

Figure 13-3. Instructor's comments on an individual entry.

This screen provides each student with the instructor's feedback on their submissions in a private workspace. If the instructor is not satisfied with the overall quality of submissions from a particular student, they can provide hints to the student. The “Hints” button is hidden until the instructor has commented on all entries made by the student. Clicking on this button brings up an instructor feedback screen like that shown in Figure 13-4.

Figure 13-4. Instructor hints.

The instructor's overall comments are shown in red. The summary of student Mary Swift's responses is shown in the left-hand column. In addition, a list of points not mentioned by the student, but submitted by others in the virtual cohort, is shown on the right-hand side of the screen. The instructor can choose the amount of other students' contributions disclosed to a participant. The student then submits additional responses until the instructor is satisfied. At this point, some or all of the student's responses can be viewed by others in the virtual cohort, and commented upon by peers if desired or required by the instructor. Viewing may be restricted by the instructor to new points not yet raised by the other students, to provide a more succinct knowledge base. As well, the cohort size can be restricted by submission date; for example, all contributions made in January in one course. This strategy creates online cohorts that are not based on a rigid schedule of submission deadlines, as in a paced environment, but rather are based on students' similar place in a course within a particular period of time. As a result, cohorts can be formed spontaneously and without instructor mediation.
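One way to picture this spontaneous cohort formation is to group students by the calendar window in which they submit, for example by month. The sketch below is illustrative only; the chapter does not specify how ASKS implements the grouping, and the function name is invented.

```python
from collections import defaultdict
from datetime import date
from typing import Dict, List, Tuple

def form_virtual_cohorts(submissions: List[Tuple[str, date]]) -> Dict[str, List[str]]:
    """Group students into cohorts by the month in which they submitted (hypothetical rule)."""
    cohorts: Dict[str, List[str]] = defaultdict(list)
    for student, submitted_on in submissions:
        window = submitted_on.strftime("%Y-%m")   # e.g., all January submissions in one course
        if student not in cohorts[window]:
            cohorts[window].append(student)
    return dict(cohorts)

# Example: two students whose submissions happen to fall in the same month form a cohort
print(form_virtual_cohorts([
    ("Mary Swift", date(2004, 1, 12)),
    ("John Doe", date(2004, 1, 27)),
    ("Jane Roe", date(2004, 2, 3)),
]))
```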

The ASKS Instructor Environment

The main screen for instructors, Figure 13-5, shows the student submissions awaiting evaluation.

Figure 13-5. Instructor main screen.

In this case, there are three submissions related to the knowledge sharing topic, “System Advantages”: one from Mary Swift and two from John Doe. Clicking Mary Swift's name opens the evaluation screen shown in Figure 13-6.

Figure 13-6. Submission evaluation screen.

The ASKS system streamlines the instructor evaluation process through several means. The upper left-hand part of the screen shows the student's submission to be evaluated. The upper right-hand part shows a summary of points already contributed by the cohort, as selected by the instructor in previous evaluations. The bottom left-hand part of the screen (“Evaluation”) enables the instructor to judge a particular response in terms of those of other cohort members (“Class Matching”), clarity of presentation (“Articulation”), and the importance of the point to the knowledge sharing topic (“Relevance”).

With respect to Class Matching, one of three possible evaluations is selected. The entry may be judged to be similar to a current class entry, to be a new entry for the cohort, or to be unacceptable in its current form. Selecting any one of the three options fills the feedback box in the bottom right-hand part of the screen with a randomly selected preset comment, suitable to the evaluation type selected. As a result, instructors do not have to type in comments for every entry they evaluate. However, the comments can be easily modified if the instructor feels that more descriptive feedback is needed.
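The preset-comment mechanism can be sketched as a lookup keyed on the Class Matching outcome, with one comment drawn at random for the instructor to edit. The comment text and category names below are invented for illustration.

```python
import random

# Hypothetical preset comments, one pool per Class Matching outcome
PRESET_COMMENTS = {
    "similar": ["This point has already been raised by the class; consider extending it."],
    "new": ["Good point; this adds something new to the class discussion."],
    "unacceptable": ["This entry is unclear in its current form; please revise and resubmit."],
}

def default_feedback(class_matching: str) -> str:
    """Return a randomly selected preset comment, which the instructor may then edit."""
    return random.choice(PRESET_COMMENTS[class_matching])

print(default_feedback("new"))
```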

After all the entries on a knowledge sharing topic are evaluated for a particular student, another comment screen automatically appears. This screen enables the instructor to enter an overall assessment of the student's entries, and also gives the student permission to view other students' contributions. The default setting enables access to all the entries. The instructor can choose to keep some entries hidden, as an encouragement to the student to come up with the missing points. Comments to the student can also be modified to assist this process. These comments are then posted, and become available to the student for viewing either in the “Instructor's Comments” section of the student screen, if the student's overall contribution is satisfactory (see Figure 13-1), or as “Hints” if it is not (see Figure 13-4).

A student's overall class participation mark for a given knowledge sharing topic is automatically calculated by ASKS, and is based on four criteria: attendance, participation, articulation, and relevance. Relative weights are pre-assigned to each of these categories by the instructor. As an example, let us assume that the weights assigned by the instructor to the four grading criteria are as shown in Table 13-4.

Table 13-4. Criteria weighting.

The computation of individual student grades for a hypothetical class is illustrated below. The example assumes a class of three students and 10 critical thinking topics, with the class generating five unique responses for each topic. In reality, the class size, number of topics, and unique responses generated for each topic will vary. The assumed number of responses raised by each student for each topic are shown in Table 13-5. A black box indicates that a student did not contribute to a particular topic.

Table 13-5. Assumed student contributions to class responses.

An attendance mark is awarded for each topic that a student addresses. In this example, Student 1 received 100% (10/10) for attendance because all topics were addressed. Students 2 and 3 received 90% (9/10) and 80% (8/10) attendance scores, respectively. These scores are then weighted according to the attendance factor assigned in Table 13-4 to form part of the student's overall mark. The formula is given below.

$$\text{Attendance mark} = \frac{\text{number of topics addressed by the student}}{\text{total number of topics}} \times \text{attendance weight}$$

Individual participation marks are awarded based on the number of responses raised by each student compared to those raised by the whole class. In the example above, Students 1, 2, and 3 raised 36, 40, and 40 responses, respectively. The class as a whole raised 50 unique responses. As a result, Student 1 received 36/50 or 72% for participation. Students 2 and 3 each received 80% (40/50). Each of these marks is then weighted according to the participation factor assigned in Table 13-4. The formula is shown below.

$$\text{Participation mark} = \frac{\text{number of responses raised by the student}}{\text{number of unique responses raised by the class}} \times \text{participation weight}$$
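Using the worked example (Student 1 addressed all 10 topics and raised 36 of the 50 unique class responses), the attendance and participation components can be computed as in the sketch below. The weights are placeholders, since the actual values in Table 13-4 are not reproduced here.

```python
def attendance_mark(topics_addressed: int, total_topics: int, weight: float) -> float:
    """Weighted attendance component of the participation grade."""
    return topics_addressed / total_topics * weight

def participation_mark(responses_raised: int, class_responses: int, weight: float) -> float:
    """Weighted participation component of the participation grade."""
    return responses_raised / class_responses * weight

# Student 1 from Tables 13-4 and 13-5 (weights are assumed for illustration)
W_ATTENDANCE, W_PARTICIPATION = 0.20, 0.30
print(attendance_mark(10, 10, W_ATTENDANCE))        # 0.20, i.e., 100% of the attendance weight
print(participation_mark(36, 50, W_PARTICIPATION))  # 0.216, i.e., 72% of the participation weight
```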

Articulation is a criterion for evaluating how well a student response has been written. Articulation marks for each student response submitted are awarded on a scale of 1 to 5 by the instructor at the time of submission. The articulation scores for the three example students for each of the ten topics are shown in Table 13-6. Black boxes indicate responses that a particular student did not raise.

Table 13-6. Individual students' articulation scores (Students 1, 2, and 3).

To obtain the denominator used to calculate the articulation score for an individual student, the system multiplies the number of responses raised by the student by the highest possible score on the articulation scale. In this example, the highest possible score is 5. Therefore, the best articulation score for the 36 responses raised by Student 1 (see Table 13-5) would be 180 (36 × 5). To obtain the numerator, the articulation values assigned by the instructor to each of the student's responses are summed. The final articulation mark is the numerator divided by the denominator, expressed as a percentage. The articulation score for Student 1 would be 140/180 or 78%. Similarly, the best articulation score for the 40 responses that Student 2 raised would be 200 (40 × 5). The articulation score for Student 2 would be 144/200 or 72%. For Student 3, the calculation would be 110/200 = 55%. Each of these marks is then weighted according to the articulation factor assigned in Table 13-4. The formula is given below.

$$\text{Articulation mark} = \frac{\sum \text{articulation scores awarded to the student's responses}}{\text{number of responses raised by the student} \times \text{maximum articulation score}} \times \text{articulation weight}$$
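A corresponding sketch for the articulation component, again with an assumed weight:

```python
def articulation_mark(scores: list, max_score: int, weight: float) -> float:
    """Weighted articulation mark: sum of awarded scores over the best possible total."""
    best_possible = len(scores) * max_score
    return sum(scores) / best_possible * weight

# Student 1: 36 responses whose articulation scores sum to 140 (Table 13-6), scale of 1 to 5
example_scores = [4] * 32 + [3] * 4      # any set of 36 scores summing to 140
W_ARTICULATION = 0.25                    # assumed weight
print(articulation_mark(example_scores, 5, W_ARTICULATION))  # 140/180 * 0.25 ≈ 0.194 (78%)
```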

Relevance, or perceived substance of each submission from a particular student, is determined by the instructor on a scale of 1 to 7 at the time the response is reviewed (see Figure 13-6). At that time, it becomes a new class response. All other students who subsequently mention this response are assigned the same relevance score. Table 13-7 shows the assumed relevance scores for the 50 class responses that the three students raised.

Table 13-7. Relevance scores assigned to each class response.

For each class response that a student mentions, the relevance score is tabulated and compared to the class total for that topic. The overall relevance score for a student is the sum of these scores across all topics attempted, expressed as a proportion of the class totals for those topics. For example, assume that the relevance scores shown in Table 13-8 were assigned for each student in the class.

Table 13-8. Individual students' relevance scores (Students 1, 2, and 3).

Student 1 would get an overall relevance score of 223/310 or 72%. Student 2 would receive a score of 89% (249/280). Student 3 would receive a score of 100%. (Note that this student mentioned all the class responses in the topics attempted and was awarded the maximum mark for relevance, even though not all topics were addressed.) This mark is then weighted according to the relevance factor assigned in Table 13-4. Mathematically, this value is expressed as

$$\text{Relevance mark} = \frac{\sum_{\text{topics attempted}} \text{relevance scores of class responses mentioned by the student}}{\sum_{\text{topics attempted}} \text{total relevance scores of class responses}} \times \text{relevance weight}$$
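Finally, the relevance component and the overall weighted participation mark can be combined as in the sketch below. The figures follow the worked example for Student 1, and the weights remain assumptions.

```python
def relevance_mark(student_total: float, class_total: float, weight: float) -> float:
    """Weighted relevance mark over the topics the student attempted."""
    return student_total / class_total * weight

def overall_mark(attendance: float, participation: float,
                 articulation: float, relevance: float) -> float:
    """Overall class participation mark: the sum of the four weighted components."""
    return attendance + participation + articulation + relevance

# Student 1, using the component values computed above (assumed weights 0.20/0.30/0.25/0.25)
relevance = relevance_mark(223, 310, 0.25)          # 223/310 * 0.25 ≈ 0.180 (72%)
print(overall_mark(0.20, 0.216, 0.194, relevance))  # ≈ 0.79 with these assumed weights
```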

A summary of the class participation marks for all three students is shown in Table 13-9.

Table 13-9. Summary of individual students' marks.

This information is automatically prepared in report form for each student. Each report also contains an automatically composed summary of individual student performance. This summary is tailored according to where a student is located on two 2 × 2 matrices. The instructor can set the parameters of this summary to dichotomize student performance as acceptable or unacceptable. ASKS then generates student-specific comments based on the student's location within these matrices.

The first, or efficiency, matrix locates a student in one of four quadrants according to attendance and participation marks, as shown in Table 13-10.

Table 13-10. Efficiency matrix.

The second, or effectiveness, matrix locates a student in one of four quadrants according to articulation and relevance marks, as shown in Table 13-11.

Table 13-11. Effectiveness matrix.

Feedback is generated for each student in the form of a five-paragraph summary report. The first paragraph provides an overall comment on the student's contributions. The second paragraph provides a summary comment related to efficiency, and the third paragraph provides detailed suggestions or encouragement to improve attendance and participation. The fourth paragraph summarizes student effectiveness, and the fifth paragraph provides detailed suggestions for improvement in the areas of articulation and relevance. A copy of this feedback is also forwarded to the instructor.

For example, recall the marks for Student 1 (Table 13-9). Assume the instructor programs ASKS to deem marks above 75% in a given category to be acceptable, and those at 75% or lower to be unacceptable. Based on this cutoff, Student 1 would fall into Quadrant 1 in the efficiency matrix (attendance = 100% = acceptable; participation = 72% = unacceptable), and would also be categorized in Quadrant 1 in the effectiveness matrix (articulation = 78% = acceptable; relevance = 72% = unacceptable). Detailed feedback would be provided as shown in Appendix 13A.
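The threshold test behind the two matrices can be sketched as follows. The quadrant labels other than Quadrant 1 are assumptions, since Tables 13-10 and 13-11 define the actual numbering; the 75% cutoff follows the example above, and the function names are hypothetical.

```python
def classify(first_mark: float, second_mark: float, cutoff: float = 75.0) -> tuple:
    """Dichotomize two marks (percentages) against the instructor-set cutoff."""
    return (first_mark > cutoff, second_mark > cutoff)

def efficiency_quadrant(attendance: float, participation: float, cutoff: float = 75.0) -> str:
    """Place a student in the efficiency matrix. Only the (acceptable, unacceptable)
    case is taken from the worked example; the other labels are assumed."""
    quadrants = {
        (True, False): "Quadrant 1",
        (True, True): "Quadrant 2",    # assumed label
        (False, True): "Quadrant 3",   # assumed label
        (False, False): "Quadrant 4",  # assumed label
    }
    return quadrants[classify(attendance, participation, cutoff)]

# Student 1: attendance 100%, participation 72%, cutoff 75%
print(efficiency_quadrant(100, 72))   # Quadrant 1, as in the worked example
```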

As currently implemented, the ASKS system is something of a hybrid between traditional group decision support systems and an automated system of “adaptive guidance” proposed by Bell and Kozlowski (2002). They proposed this technique as a means of enhancing learners' self-regulation processes and to improve the efficiency of the learning process. Among other features, intelligent agents were proposed to monitor and assess learner progress, and provide tailored feedback. ASKS uses instructors as intelligent agents, but allows them to provide this adaptive guidance more efficiently. It provides automatic instructor access to prior group knowledge, streamlines an instructor's ability to assess student contributions, and provides tailored, automated responses to students as a result of this assessment process. In the near to medium term, this strategy may suffice to create a greater sense of instructor immediacy in the learning process, a factor found to increase student satisfaction in online courses (Arbaugh, 2001).

ASKS also provides a permanent and growing course knowledge base for students to access. Figure 13-7 illustrates such a knowledge base.

Figure 13-7. ASKS course knowledge database structure.

In the student evaluation example above, three students participated in one online class. Obviously, the number of students in each class can be expanded. However, the ASKS system also allows the group knowledge accumulated in a number of classes to be easily assembled into one large course knowledge database, made accessible to students as deemed appropriate by the instructor. In this way, an expanding and instructor-vetted database is made available to inform the learning processes of future students.
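A minimal sketch of this aggregation step, assuming each class contributes a set of instructor-vetted responses per topic (all names hypothetical):

```python
from typing import Dict, Set

def merge_class_knowledge(course_db: Dict[str, Set[str]],
                          class_db: Dict[str, Set[str]]) -> Dict[str, Set[str]]:
    """Fold one class's vetted responses into the growing course knowledge database."""
    for topic, responses in class_db.items():
        course_db.setdefault(topic, set()).update(responses)
    return course_db

course: Dict[str, Set[str]] = {}
merge_class_knowledge(course, {"System Advantages": {"Reduces instructor workload"}})
merge_class_knowledge(course, {"System Advantages": {"Creates a permanent knowledge base"}})
print(course)
```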

Future Plans


The ASKS system is currently being evolved within the School of Business at Athabasca University from a prototype developed in Microsoft Access and Cold Fusion to a production system delivered via Lotus Notes and Domino. More groupware features are planned. At present, students are not able to communicate easily with each other without instructor intermediation. Planned enhancements include the ability to assign unpaced students arbitrarily to groups with other students at a similar point in a course. Students could then communicate within these groups before submitting group-based assignments. As currently enabled in ASKS, these group contributions could then be evaluated by the instructor and posted for other groups to review and critique.

The system needs to be evaluated to determine, for instance, to what degree it facilitates student-to-student, student-to-instructor, student-to-class, and instructor-to-class interactions; whether students and instructors consider it easy to use; whether it is perceived as fair by students in terms of evaluating individual contributions to online discussion groups; and whether it is cost effective. Davis (1989) showed that many of these factors are major determinants of the acceptance of new technology, and proposed an evaluation model. This model will likely be used as the basis for follow-up research with both instructors and students.

Conclusion


The ASKS system allows students in both paced and unpaced online learning environments to participate in grouped assessment activities. It also permits instructors to assess individual contributions quickly, and provides tailored, automated feedback to students, thereby increasing the immediacy of feedback and reducing instructor workload.

The ASKS system was initially designed as a means for students in unpaced online learning environments to participate in group discussion and knowledge-building exercises by creating online virtual cohorts. Although an unpaced online learning environment provides an important degree of flexibility for students, very few existing technologies are suitable for promoting interactions among learners in this model. By incorporating features such as adaptive guidance, instructor immediacy, and collaborative learning into both paced (cohort-based) and unpaced (individualized) online learning environments, ASKS may signal the establishment of online technologies that will reconcile differing perceptions about the role of interaction evident in the distance learning literature to date.

ASKS addresses some of the problems associated with group participation in any online environment. First, the system enables the instructor to build a repository of model responses that can easily be incorporated into tailored feedback for students. Second, the system allows the instructor to evaluate each contribution efficiently. Meaningful feedback can be constructed for each student from an existing database. Individual student contributions can be evaluated quickly, and the instructor does not need to recall either the frequency or quality of prior contributions from a particular student. This factor reduces the subjective element common to the evaluation of online discussions.

From the student's point of view, private workspaces allow individual students to create a permanent record of their ideas on a topic. The ASKS system also removes the advantage enjoyed by students who make early submissions to online discussions, because each student's submissions are evaluated in a private workspace.

However, group knowledge building is facilitated when students are then given access to other cohort members' submissions. Students can view the cohort's common pool of submissions, build on this knowledge to create new ideas, and submit these for evaluation and further knowledge sharing. ASKS can also expand on this concept by allowing student access to course knowledge databases that can be vetted by the instructor, and created and expanded easily. Overall, the system promises to increase the amount and quality of interaction in both paced and unpaced online learning environments, and probably in a more cost-effective manner.

References


Annand, D. (1999). The problem of computer conferencing for distance-based universities. Open Learning, 14(3), 47-52.

Arbaugh, J. B. (2001). How instructor immediacy behaviors affect student satisfaction and learning in Web-based courses. Business Communication Quarterly, 64(4), 42-54.

Bell, B., & Kozlowski, S. (2002). Adaptive guidance: Enhancing self-regulation, knowledge, and performance in technology-based training. Personnel Psychology, 55, 267-306.

Burke, J. A. (2001). Collaborative accounting problem solving via group support systems in a face-to-face versus distance learning environment. Information Technology, Learning, and Performance Journal, 19(2), 1-19.

Daniel, J. (1997). The mega-university: The academy for the new millennium. In The new learning environment—a global perspective. Proceedings of the 18th ICDE World Conference (p. 22). University Park, PA: International Council for Distance Education.

Davis, F. (1989). Perceived usefulness, perceived ease-of-use, and user acceptance of information technology. MIS Quarterly, 13(3), 319-338.

Garrison, D. R. (1988). Andragogy, learner-centredness and the educational transaction at a distance. Journal of Distance Education, 3(2), 123-127.

Garrison, D. R. (1989). Understanding distance education: A framework for the future. New York: Routledge.

Gerhing, G. (1994). A degree program offered entirely online: Does it work? In D. Foster & D. Jolly (Eds.), Proceedings of the Third International Symposium on Telecommunication in Education (pp. 104-106). November 10-13, 1994, Albuquerque, New Mexico.

Golberg, M. (1997). CALOS: First results from an experiment in computer-aided learning for operating systems. Proceedings of the ACM's 28th SIGCSE Technical Symposium on Computer Science Education (pp. 48-52), San Jose, California.

Holmberg, B. (1983). Guided didactic conversation in distance education. In D. Sewart, D. Keegan, & B. Holmberg (Eds.), Distance education: International perspectives. London: Croom Helm.

Holmberg, B. (1990). A paradigm shift in distance education? Mythology in the making. International Council for Distance Education Bulletin, 22, 51-55.

Jonassen, D., Davidson, M., Collins, M., Campbell, J., & Bannan-Haag, B. (1995). Constructivism and computer-mediated communication in distance education. American Journal of Distance Education, 9(2), 7-26.

Keegan, D. (1990). The foundations of distance education (2nd ed.). London: Routledge.

McCollum, K. (1997). A professor divides his class in two to test value of online instruction. Chronicle of Higher Education, 43(24), A23.

McEwen, B. (2001). Web-based and online learning. Business Communication Quarterly, 64(2), 98-103.

Nunamaker, J., Jr., Dennis, A., Valacich, J., Vogel, D., & George, J. (1991). Electronic meeting systems to support group work. Communications of the ACM, 34, 40-61.

Rourke, L., & Anderson, T. (2002). Using peer teams to lead online discussions. Journal of Interactive Media in Education, 1. Retrieved May 5, 2004, from http://www-jime.open.ac.uk/2002/1/rourke-anderson-02-1-t.html

Appendix 13A: Model Student Feedback


Dear (Name),

Your final mark for the discussion part of Course XYZ is 76%, calculated as follows:

Appendix 13A. A table showing the calculation of the final mark for the discussion part of Course XYZ.

I hope that your learning experience has been an enjoyable one. Overall, you have addressed all the topics in the course and have presented your thoughts well.

However, though you touched on all the required topics, the overall number of your contributions was fairly limited. This adversely affected your grade.

In the future, you should consider addressing other aspects of each topic. For instance, one strategy would be to argue one particular point of view for a given topic, then counterbalance this with a somewhat opposing viewpoint.

Also, although you presented your points well, many of the themes of your responses were not directly relevant to the topic, or did not sufficiently identify some key concepts.

In the future, you should more carefully consider the given topic before responding. As well, you might spend more time reviewing pertinent information in the course material beforehand.

Regards,
Instructor X
