

October – 2010

Online Instructional Effort Measured through the Lens of Teaching Presence in the Community of Inquiry Framework: A Re-Examination of Measures and Approach

Peter Shea and Jason Vickers
University at Albany, State University of New York, USA

Suzanne Hayes
University at Albany and Empire State College, State University of New York, USA

Abstract

With more than 4 million students enrolled in online courses in the US alone (Allen & Seaman, 2010), it is now time to inquire into the nature of instructional effort in online environments. Reflecting the community of inquiry (CoI) framework (Garrison, Anderson, & Archer, 2000), this paper addresses the following questions: How has instructor teaching presence (Anderson, Rourke, Garrison, & Archer, 2001) traditionally been viewed by researchers? What does productive instructor effort look like in an entire course, not just the main threaded discussion? Results suggest that conventional research approaches, based on quantitative content analysis, fail to account for the majority of teaching presence behaviors and thus may significantly underrepresent productive online instructional effort.

Keywords: Teaching presence; community of inquiry; higher education; content analysis

Purpose

Online learning in higher education continues to grow at a rapid rate. The Department of Education reports that online students generated more than 12 million course enrollments in 2006-2007 (Parsad & Lewis, 2008), with more than one in four of all college students enrolled in at least one online course (Allen & Seaman, 2010). It is clear that adequate preparation of instructors who venture into this new mode of teaching and learning is vital to its successful implementation. Given that today’s growth in distance higher education continues to be driven largely by developments in asynchronous online learning (Allen & Seaman, 2008; Parsad & Lewis, 2008; U.S. Department of Education, National Center for Education Statistics, 2008), it is necessary that we focus our attention on models that represent the full range of instructional design, pedagogical, and managerial roles, i.e., the activities that encompass the work of the online instructor in predominantly asynchronous environments.

Recent meta-analytic and traditional reviews of research indicate that the learning outcomes for online students are at least equivalent (Bernard et al., 2004; Allen, Bourhis, Burrell, & Mabry, 2002; Tallent-Runnels et al., 2006; Zhao, Lei, Yan, Lai, & Tan, 2005) and may be superior to (Means, Toyama, Murphy, Bakia, & Jones, 2009) those of classroom students. Means et al. (2009) concluded that the superior performance of online students may be a function of time on task (p. 51). It is clear that the transformation of classroom instruction to online instruction is a time-intensive process for faculty, with frequent reports that online teaching requires more time than comparable classroom instruction (Dahl, 2003; Dziuban, Shea, & Arbaugh, 2005; Hislop, 2001; Tallent-Runnels et al., 2006). One goal of this paper is to understand the nature of this instructional effort as evidenced in full online courses through the conceptual lens of teaching presence (Anderson et al., 2001).

This paper attempts to address the following overarching questions:  How has instructor teaching presence traditionally been viewed by researchers? What does productive instructor effort look like in an entire course (not just the main threaded discussion)? How does evaluating instructor teaching presence at a course-level change the way this construct has been traditionally described?  What additional behaviors do instructors exhibit that have not been captured by the existing model of teaching presence? Toward this end, we re-examine the widely referenced community of inquiry model (Garrison et al., 2000) with the purpose of enhancing the conceptual representativeness of the teaching presence construct.  We set out to achieve this through an analysis of teaching presence behaviors occurring both within and outside the main threaded discussion area of online courses. 

Theoretical Framework / Perspective

The CoI framework developed by Garrison et al. (2000) is based on a model of critical thinking and practical inquiry. The authors posit that learning occurs through the interaction of students and their instructor and is manifest as three highly integrated elements that contribute to a successful online learning community: social presence (SP), teaching presence (TP), and cognitive presence (CP).

The focus of this paper is teaching presence, which has been defined as “the design, facilitation and direction of cognitive and social processes for the purpose of realizing personally meaningful and educationally worthwhile outcomes” (Anderson et al., 2001). Others have described it as the “binding element in creating a community of inquiry” (Garrison et al., 2000, p. 96) and as the source of “online instructional orchestration” (Shea et al., 2010, p. 17). Using quantitative content analysis of postings in asynchronous discussion forums, Anderson, Rourke, Garrison, and Archer (2001) identified three categories and related indicators: instructional design and organization (DE), the facilitation of productive discourse (FD), and direct instruction (DI). It is through the use of these indicators that researchers attempt to measure “how present the instructor is in the virtual classroom” (Benbunan-Fich, Hiltz, & Harasim, 2005, p. 27).

Instructor teaching presence is hypothesized to be an indicator of online instructional quality.  Empirical research has supported this view with evidence indicating strong correlations between the quality of teaching presence and student satisfaction and learning (Bangert, 2008; Picciano, 2002; Shea, Pickett, & Pelz, 2003).  We suggest that using teaching presence to measure instructional effort therefore has the advantage of measuring conceptually productive instructional activity rather than atheoretical indicators, such as overall numbers of posts (e.g., Davidson-Shivers, 2009) or hours spent online (e.g., Lazarus, 2003).

We argue that research on the teaching presence construct has been constrained by the following four limitations. First, there is a need to revisit two of the original three teaching presence elements. Although teaching presence as first delineated by Anderson et al. (2001) encompassed the three dimensions described above (DE, FD, and DI), factor analysis by Shea, Li, and Pickett (2006) found that these failed to cohere as three separate constructs. Instead, only two factors were identified: instructional design and organization and directed facilitation, the latter a revised category incorporating elements of both FD and DI. In that study, analysis of survey responses from several thousand students suggested that students could not distinguish direct instruction, as defined in the CoI framework, as a construct distinct from facilitation of discourse.

The second limitation relates to design and organization (DE). This indicator was originally described as encompassing course structure, group and individual activities, timelines, and expectations (Anderson et al., 2001). Although the authors indicated that a majority of design takes place prior to the beginning of the course, we posit that the second component, “organization,” refers to an insufficiently documented but robust category of instructor tasks centered on “organizational, procedural, administrative tasks” and “procedural and decision-making norms” (Berge, 1995). Comparable instructor responsibilities have been identified by Coppola, Hiltz, and Rotter (2002), Blignaut and Trollip (2003), and Morris, Xu, and Finnegan (2005). We further suggest that effective “organization” has implications for a more articulated conception of productive online instructional effort.

The third limitation relates to the locus of research investigating teaching presence, which has been limited largely to threaded discussions. We were unable to identify studies that examined instructor teaching presence outside of online discussion or announcement areas (see Table 1). In order to fully understand the online instructional role, we suggest that there is a need to document all observable instances of the three CoI presences. We intend to begin to close this gap by analyzing instructor interaction with students wherever important communicative processes take place: the main threaded discussion area, course e-mail, private folders, instructor announcements, and areas where students pose general course-related questions. The need for examining entire courses has been discussed in previous research (Anderson et al., 2001; Archer, 2010; Shea et al., 2010; Shea et al., 2009; Shea, Vickers, et al., 2009).

Lastly, a careful review of the original teaching presence indicators developed by Anderson et al. (2001) reveals that they are largely reliant upon the threaded discussion activities of the instructor and thus fall short in identifying and articulating the full range of online collaborative tasks and effort demonstrated by both instructors and students.

Table 1

Methods and Data Sources

Quantitative Content Analysis

We used quantitative content analysis based on CoI measures of teaching presence to compare a purposive sample of two identical sections of a fully online course taught by instructors who appeared to have very different ways of engaging with their students. The data for this research include all of the content from two fully online upper-level course sections in business management offered during the fall 2007 term by a state college in the Northeast United States that specializes in distance and adult education for non-traditional learners. Each section was identical, designed by content experts and instructional designers, and was taught by an experienced instructor who was not the course designer.

The course had five modules of instruction and contained a variety of learning activities, including discussions, individual case studies, research papers, and group assignments. The following data sources were used for this study: five two-week-long discussion forums, four small-group student discussion areas used to prepare a position paper, one full-group discussion where students presented their position papers as the basis for a class debate, course announcements, private folders for one-to-one student/instructor communication, a public ask-a-question area, and instructor e-mail, syllabus, and orientation materials, as well as module mini-lectures, assignments, and instructions.

Sample and Coding

The sample for this study consists of the individual messages and documents coded in the two courses. The coders analyzed a total of ten whole-class discussions and three small-group discussions across all five modules in each course. They examined 672 discussion posts in course A and 601 discussion posts in course B. Coders read each sentence but applied codes at the level of the message, which served as the unit of analysis. In addition, the coders analyzed all course announcements, e-mails, individual private folders, and question-and-answer areas, again using the message as the unit of analysis, for a total of 438 additional messages. Lastly, the coders applied the teaching presence measures to all course documents, which included all syllabus and orientation materials as well as module mini-lectures, assignments, and instructions. In all, 41 course documents were coded, with the paragraph as the unit of analysis. The coders also examined 102 student course artifacts, such as case studies and research papers. In total, 1,711 messages and 143 documents were reviewed by each coder, yielding 3,422 individual analyses of the 1,711 messages across the two coders.
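To make the tallies above concrete, the following minimal bookkeeping sketch (in Python, and not the authors’ actual tooling) reproduces the reported totals; all counts are taken directly from the text.

```python
# Illustrative tallies; counts come from the paragraph above.
messages = {
    "discussion posts, course A": 672,                    # unit of analysis: message
    "discussion posts, course B": 601,                    # unit of analysis: message
    "announcements, e-mail, private folders, Q&A": 438,   # unit of analysis: message
}
documents = {
    "course documents": 41,     # unit of analysis: paragraph
    "student artifacts": 102,   # case studies, research papers
}
coders = 2

total_messages = sum(messages.values())
total_documents = sum(documents.values())
print(total_messages)           # 1711 messages
print(total_documents)          # 143 documents
print(total_messages * coders)  # 3422 individual analyses of the messages
```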

Inter-rater reliability was computed using Cohen’s kappa and Holsti’s coefficient of reliability. Previous research suggests that symmetrical imbalances in the marginal distributions of the coding table are problematic and can lead to low kappa despite high levels of observed agreement (Feinstein & Cicchetti, 1990). Because of this, Holsti’s coefficient of reliability, which measures percent agreement, was also used to calculate inter-rater reliability. Our choice to utilize two measures of reliability follows Garrison et al.’s (2000) original research as well as Rourke, Anderson, Garrison, and Archer’s (2001) coding suggestions. After calculating initial inter-rater reliability, the coders met to negotiate disagreements. This procedure of initial and negotiated coding also follows the protocols of Garrison and others in this line of research. It allows researchers to uncover errors in coding and to understand meaningful versus non-meaningful disagreements. Where disagreement remains after negotiation, authentic distinctions between coders exist. Inter-rater reliability metrics for this analysis are included in Appendix A, Table 1.
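For readers unfamiliar with these measures, the following sketch (not the authors’ actual procedure) shows how both statistics are computed from two coders’ labels over the same units; the toy labels are invented to illustrate Feinstein and Cicchetti’s (1990) paradox of high agreement with a much lower kappa.

```python
from collections import Counter

def holsti(coder_a, coder_b):
    """Holsti's coefficient: percent agreement, 2M / (N1 + N2)."""
    agreements = sum(a == b for a, b in zip(coder_a, coder_b))
    return 2 * agreements / (len(coder_a) + len(coder_b))

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: observed agreement corrected for chance agreement
    estimated from each coder's marginal distribution of codes."""
    n = len(coder_a)
    p_observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    marg_a, marg_b = Counter(coder_a), Counter(coder_b)
    p_expected = sum(marg_a[c] * marg_b[c] for c in set(marg_a) | set(marg_b)) / (n * n)
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical codes for 20 messages, skewed toward one category (FD).
a = ["FD"] * 18 + ["DI", "DI"]
b = ["FD"] * 17 + ["DI", "FD", "DI"]
print(holsti(a, b))        # 0.90 -- high observed agreement
print(cohens_kappa(a, b))  # ~0.44 -- much lower once chance is removed
```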

Coding Scheme Revisions

As briefly described in a related study (Shea et al., 2010), our concern that the original community of inquiry indicators were constrained by their focus on threaded discussions led us to re-examine Anderson et al.’s (2001) original teaching presence coding scheme. Given that Anderson et al. were working nearly a decade ago, it is not surprising that the emphasis of their work was on computer conferencing and the interaction that distinguished this form of distance learning from previous forms. It is our contention, however, that more recent models of online learning reflect significantly greater productive instructor work than is found in threaded discussions alone. We used several approaches to revisit the original categories, including examining other theoretical frameworks. We also made revisions in response to omissions and conflicts identified during the coding process and assessed the impact of all changes on the overall coherence of the coding scheme.

Design and Organization (DE)

Based on revisions published by Akyol (2009), a new indicator was added: making macro-level comments about course content. No changes were made to the remaining original indicators: designing methods, establishing time parameters, utilizing the medium effectively, and establishing netiquette. The original indicator setting curriculum was expanded to include assessment, a change also made by Akyol (2009); this addition was confirmed after examining the course syllabus, orientation, and other documents and is in line with other research on effective methods for online course design (e.g., Palloff & Pratt, 2007; Simonson et al., 2009).

Responding to technical concerns was relocated from direct instruction (DI) and added to utilizing the medium effectively, as many well-designed online courses include extensive instructions and other technical information to help students optimize their use of the online learning environment and to anticipate and prevent avoidable technical problems. It might also be noted that responding to technical issues is not a conventional component of direct instruction and may be more appropriately handled by a professional help desk, in light of frequent reports that online instruction is more time-intensive than traditional instruction.

Facilitating Discourse (FD)

We retained five of Anderson et al.’s six original indicators for the FD category: identifying areas of agreement/disagreement; seeking to reach consensus/understanding; encouraging, acknowledging, or reinforcing student contributions; setting climate for learning; and drawing in participants/prompting discussion.

Three of Anderson et al.’s original DI indicators were moved to the FD category because they were more closely aligned with this process. The first, presenting content/questions, was renamed presenting follow-up topics for discussion in an attempt to identify ad hoc situations where the instructor or students presented content or questions to enhance learning. Focusing discussion on specific issues was amended to refocusing to better address instances where the instructor intervened to help participants focus on relevant issues and stay on topic. Lastly, summarizing discussion was reassigned here because the purpose of this task is not only to review discussion contributions but also to highlight key concepts and relationships in order to further facilitate and sustain discourse.

Direct Instruction (DI)

Once this category was restructured to reassign the indicators most closely tied to discourse to FD, it became necessary to further identify and describe other dimensions of the instructor’s role in effectively presenting content in the online learning environment. We turned to Shulman’s (1986) conceptualization of direct instruction as effective uses of “analogies, illustrations, examples, explanations and demonstrations” (p. 1022). As a result, separate indicators were established: providing valuable analogies, offering useful illustrations, conducting supportive demonstrations, and supplying clarifying information. We retained one of Anderson et al.’s original seven indicators: injecting knowledge from diverse sources, e.g., textbook, articles, internet, personal experiences. The remaining original DI indicator, diagnosing misperceptions, served as the starting point for establishing a fifth category of indicators to address the assessment of learning activities within and beyond threaded discussion. It is clear that providing assessment is a central role of instructors, both online and in the classroom, but one that seems underrepresented in the CoI framework (see also Akyol, 2009).

Assessment (AS)

We identified a potential fourth dimension of teaching presence: assessment. New indicators for assessment were derived by examining the entire content of both courses for patterns of assessment. They include both formative and summative assessment across a broad range of instructor and student activities that occur within an online course. Two areas were closely identified with individual student assessment, namely participation in discussions and the completion of individual assignments. It was in these two new indicators that we incorporated Anderson et al.’s original DI indicator diagnosing misperceptions. We also introduced a third form of assessment based on the role of the instructor in evaluating course design and the effectiveness of learning activities. The new indicators were as follows: giving formative feedback for discussions, providing formative feedback for other assignments, soliciting formative assessment on course design and learning activities from students and other participants, delivering summative feedback for discussions, supplying summative feedback for other assignments, and soliciting summative assessment on course design and learning activities from other participants. See Appendix B for the full revised teaching presence coding scheme.

Research Questions

This paper represents work in an ongoing project to examine online learning through the community of inquiry framework with a goal of enhancing and further articulating the model. To accomplish this we both revised categories within the framework and undertook extensive analysis of online courses using quantitative content analysis. To extend previous work we utilized the revised teaching presence indicators to examine components of courses not typically included in previous analyses to address the following research questions.

1. Where does teaching presence occur in online courses?
2. How do instructors employ communicative functionality within the course to demonstrate teaching presence?
3. In what ways do students demonstrate teaching presence?
4. Does teaching presence shift over time?
5. Does teaching presence correlate with learning outcomes reflected in instructor-assigned grades?

Results

1. Where does teaching presence occur in online courses? 

Initial examination of course discussions indicated that the two instructors exhibited very different patterns of teaching presence. Both instructors appeared engaged with their students in the first module, as indicated in Table 2. However, instructor B appears to have been far less involved in subsequent modules.

Table 2

Instructor A continued to demonstrate teaching presence in all discussions, whereas instructor B reduced his participation significantly and then ceased to post to the main discussion area. A conventional analysis focused on discussion transcripts might view this as an example of abandonment on the part of the instructor. Table 3 indicates, however, that overall levels of teaching presence activity outside the discussions were comparable between the two instructors: instructor A had a total of 153 teaching presence indicators and instructor B had 167, tallied through joint coding of e-mails, private folders, bulletin board/announcements, and question areas. These tallies reflect significant instructional effort.
 
Instructor teaching presence activity in areas external to the main discussion accounted for an unexpected proportion of total instructor activity. For example, instructor A’s non-discussion activity accounted for 84% of his overall teaching presence measures, while his discussion forums contributed just 16%. For instructor B, who took an active role in only the first discussion and faded from view during the remaining four, discussion activity accounted for only 10% of his total teaching presence. Yet this same instructor compensated for his absence with non-discussion activities, which contributed the remaining 90% of his teaching presence measures (see Table 3).

Table 3
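The percentages above follow from simple proportions over the indicator counts. A minimal sketch, assuming illustrative in-discussion counts (only the non-discussion totals of 153 and 167 are reported in the text):

```python
# Share calculation behind Table 3. The non-discussion totals (153, 167)
# come from the text; the in-discussion counts are hypothetical values
# chosen to be consistent with the reported percentages.
def tp_shares(in_discussion, outside):
    total = in_discussion + outside
    return in_discussion / total, outside / total

for name, inside, outside in [("Instructor A", 29, 153), ("Instructor B", 19, 167)]:
    d_share, n_share = tp_shares(inside, outside)
    print(f"{name}: {d_share:.0%} in discussions, {n_share:.0%} outside")
# Instructor A: 16% in discussions, 84% outside
# Instructor B: 10% in discussions, 90% outside
```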

A different view comparing instructional effort by both instructors in and out of threaded discussions is reflected in Figure 1. As can be seen, instructor teaching presence occurred with much greater frequency outside of threaded discussions.

Figure 1

2.  How do instructors employ communicative functionality within the course to demonstrate teaching presence?

Another perspective on the expression of teaching presence can be seen in the various ways in which different instructors utilize course functions to interact with students.  Table 4 indicates that while instructor A communicated primarily through the private folder function, instructor B interacted predominantly through course e-mail.

Table 4

3.  In what ways do students demonstrate teaching presence?

As reported previously (Shea et al., 2010) we found that overall teaching presence varied widely both within and between the courses for both the instructor and the students. In threaded discussions, both instructors began the courses with similar levels of involvement in terms of teaching presence and then reduced their presence substantially as can be seen in Figures 2 and 3 below.

Figure 2

Figure 3

These results suggest that students’ teaching presence may have a “floor” threshold level: when the instructor’s participation within the threaded discussion drops to zero, students attempt to recreate “instructional equilibrium.” Figure 3 documents slightly higher levels of teaching presence on the part of the students in course B despite the lack of instructor teaching presence after the second module.

In addition to the regular discussion in module 2, students were instructed to participate in a “debate” on outsourcing and were assigned to argue either for or against the practice. Students were divided into four groups (Pro 1, Pro 2, Con 1, and Con 2) and required to collaboratively author a position paper. This resulted in four tangible products: position papers, either for or against the practice of outsourcing, which were to be used as the starting point for each group’s participation in the fifth class discussion, the debate. When examining the class debate activities in module 2, we identified very different patterns of activity.

Although five discussions in total (three preparatory sections and two whole-group discussions) were coded in connection with these learning activities, it is important to note that the tasks and outcomes of these discussion areas were very different from those in the rest of the course. We found that TP codes were not reliable when used to code discussion areas that were not based on whole-class threaded discussion, e.g., discussion areas where students were focused on the collaborative development of a product. Table 5 reflects inter-rater reliability for these coding attempts. As a result, our team decided to discontinue coding the debate sections and chose instead to focus on the issues that may have caused the recurrent disagreements.

Table 5

Because these four preparatory discussions in module 2 were not strictly focused on intellectual exchange but had a more concrete and practical purpose, namely authoring each group’s position paper, the researchers questioned the relevance of the teaching presence codes after attempting to code and negotiate two of these preparatory discussions. Although some of the teaching presence codes appeared to be applicable (setting time parameters, drawing in participants, presenting follow-up topics for discussion), the discourse was less reflective of content-based knowledge construction and more focused on the process of effective collaboration to produce a group product.

We gained insight into our lack of agreement in coding the debates by examining Curtis and Lawson’s (2001) coding scheme for online collaboration, which is based on Johnson and Johnson’s (1996) major behaviors in collaborative learning situations (p. 26). Curtis and Lawson examined student-to-student interactions in e-mail messages and postings to group discussion boards to identify the following behavior categories: planning, contributing, seeking input, reflection/monitoring, and social interaction. When we compared the revised teaching presence indicators with this coding scheme, we found that the first three of Curtis and Lawson’s categories better represented student-to-student collaborative actions and tasks that were focused on product-based outcomes, such as group-authored written work. That such student-to-student collaborative interaction could not be reliably coded using the teaching presence construct led us to question whether more attention needs to be focused on the distinct roles of learners in online education, separate from the role of the instructor.

4. Does teaching presence shift over time?

When accounting for instructor teaching presence in all areas of a course, we see that there is a certain ebb and flow to teaching presence. Figure 4 illustrates how both instructors exhibited similar levels of teaching presence in modules 1, 2, and 4. Instructor A’s teaching presence increased greatly in module 3, and instructor B’s teaching presence saw a dramatic increase in module 5.

Figure 4

A closer examination of itemized teaching presence behaviors reveals increases in assessment within the two modules in question (see Table 6) and an increase in design and organization for instructor A.

Table 6

5. Does teaching presence correlate with learning outcomes reflected in instructor-assigned grades?

Finally, we sought to understand whether and to what degree teaching presence correlates with learning outcomes reflected in instructor assessments of student learning. To accomplish this we compared the teaching presence evidenced within module 3 in course B with the grades given on the case study assignment directly related to the online discussion for that module. The research team selected this module because there was a close correspondence between the topic of discussion and the nature of the assignment. The correlation between the expression of teaching presence and the assignment grades of the students (n = 17) was statistically significant, r = .55, p < .05.
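A minimal sketch of this check follows; the per-student values are hypothetical, since only the summary statistics (n = 17, r = .55, p < .05) are reported in the paper.

```python
# Hypothetical data: teaching presence behaviors per student and the
# corresponding case study grades. Only n, r, and p appear in the text.
from scipy.stats import pearsonr

tp_counts = [0, 1, 1, 2, 2, 3, 3, 3, 4, 4, 5, 5, 6, 6, 7, 8, 9]
grades = [72, 70, 81, 74, 85, 78, 90, 76, 83, 92, 79, 88, 84, 77, 95, 86, 91]

r, p = pearsonr(tp_counts, grades)
print(f"r = {r:.2f}, p = {p:.3f}")
# With n = 17, |r| must exceed roughly .48 for significance at p < .05 (two-tailed).
```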

Discussion

Scholarly Significance

These results have a number of implications for research and practice in the rapidly developing arena of online teaching and learning. While other research has investigated instructor interaction throughout an entire course (e.g., Davidson-Shivers, 2009), this project is one of the first studies to comprehensively document productive instructional effort utilizing a theoretical framework developed for online learning. Through meticulous coding of thousands of online instructional activities, our investigation revealed that the work of the online instructor may be significantly underrepresented by conventional analyses originating in research on computer conferencing. We suggest that the bulk of online instructional effort occurs outside such fora and that, to gain additional insight into the nature of online instruction, it is necessary to examine work occurring throughout the entire course.

Our research also revealed that restricting analysis of teaching presence to discussion areas may present too narrow a view of an individual instructor’s effort. Some instructors may take a strategic approach, participating in early discussions to model how to formulate probing questions and providing direct feedback, with the goal of withdrawing once this scaffolding is complete. As a result, we suggest that the traditional research approach can overlook important aspects of the expression of teaching presence.
 
We further suggest that gaining insight into online teaching requires a conceptual framing. The analysis conducted here not only documents instances of effort, such as frequencies of teacher posting, but also confirms the accepted categories of pedagogical work: instructional design, facilitation of productive discourse, and direct instruction. At the same time, this study confirms a fourth TP dimension, assessment. When considered together, these constructs represent initial steps towards a more encompassing explanatory model of the effort involved in teaching and learning in online environments.

Research is beginning to recognize the importance of feedback in a community of inquiry (e.g., Kupczynski, Ice, Wiesenmayer, & McCluskey, 2010). When analyzing only threaded discussions, the opportunity to see the significant effort associated with assessment is greatly reduced. As Table 3 illustrates, instructors A and B provided the majority of assessment outside of threaded discussions (93% and 97%, respectively). Our results show that a majority of instructor B’s teaching presence (64%) was assessment of some form, and almost all of that was provided outside threaded discourse. In order to fully understand and represent teaching presence in an online course, research should recognize the importance of understanding and measuring assessment and of looking for it in the areas where it is most likely to occur (i.e., outside threaded discourse).

These results also document a significant correlation between instructional effort, reflected in the frequency of teaching presence behaviors, and learning outcomes, evidenced through instructor-assigned grades on closely related assignments. This result is significant in light of past critiques of the CoI framework (e.g., Rourke & Kanuka, 2009), which complained of a gap between the conceptual model and evidence from “objective” measures of learning in online courses. We suggest that these results represent a tentative step towards closing that gap.

Our analysis of the discourse of students engaged in the logistics of group projects (e.g., collaboration around preparing for debates) indicates that it does not conform to the patterns of teaching presence identified in other kinds of student interaction, such as whole-class discussion. These anomalies suggest that students engage in forms of interaction, in the service of accomplishing learning goals, that are unaccounted for in the community of inquiry framework as it currently exists. We believe that these exceptions represent fertile ground for extending the framework. Students communicating around group learning tasks exhibit forms of learner self- and co-regulation (e.g., Zimmerman & Schunk, 2001), highlighting the role of effective learners as distinct from effective teachers. In activities typical of collaborative educational models, learners need to engage in the forms of planning, monitoring, and strategy use characteristic of learners qua learners in order to be successful. These behaviors are distinct from those taken on by instructors. We conclude that further articulating the kinds of self- and co-regulation that are appropriate to the online environment should be a goal of future research (see, e.g., Shea & Bidjerano, 2010).

Practitioner Significance

These results also have implications for practice as they relate to instructor behavior and the instructional design of online courses. Given that students’ perceptions indicate that they place a premium on instructor interaction (Anderson, 2003; Shea et al., 2006), instructors must actively manage students’ expectations about the nature of online learning and the role of the instructor in this process. Online instructors can accomplish this by taking the time to communicate that online courses are not teacher-centered models of learning and by explaining the rationale behind student-to-student interaction in negotiating shared meaning through discourse. We also recommend that instructors make clear to their students to what extent and in what capacity they will participate in course discussions.

Once the course is underway, instructors who choose not to participate actively in discussions should continue to make visible their direct involvement in the course.  This can be accomplished by using the announcement feature to comment on discussion group progress, by posting class reminders, and by communicating privately with students who are ineffective in their postings or who fail to participate.  Instructors can also create opportunities for students to develop their own forms of teaching presence by taking an active role in the initial discussion, modeling how to ask questions that probe and add depth.  Later on in the course, instructors can assign roles to students where they can moderate, summarize, and integrate multiple viewpoints.

In terms of instructional design, our findings related to the strong correlation between student grades for case studies and the frequency of student teaching presence behaviors in instructor B’s course suggest a positive relationship between learning outcomes and online instructional effort as described by the teaching presence construct. Although prior research suggests that higher levels of cognitive presence (the integration and resolution phases) are unlikely to occur in online discussions (Garrison et al., 2000; Schrire, 2006; Kanuka, Rourke, & Laflamme, 2007; Vaughan & Garrison, 2006; Stein et al., 2007), we believe that there is value in pursuing integrative design for cognitive presence. One promising approach is to relate discussion content to other learning activities as a way to create opportunities for students to probe deeply and to draw meaningful connections between concepts and topics addressed in public discourse and in their own private cognition as they work on individual written assignments. When follow-up assignments are tied to the public discourse that is facilitated through teaching presence, our results show a strong correlation between objective measures of learning (grades) and this element of the community of inquiry framework. Instructors and instructional designers should make efforts to tie discussions and follow-up learning activities together to gain this benefit.

These results have implications for other practitioners involved in the online education enterprise, including administrators. When considering the increasingly common practice of monitoring online instructors in some institutional settings (e.g., Epstein, 2010), it is important to realize that instructors can establish their presence in varied and subtle ways.  In this study we found that the effectiveness of the instructor did not depend on participation within the threaded discussion per se, but that responsiveness and effective interaction with students was carried out through a variety of forums, including the ask-a-question area, email, and other modes of communication.  We suggest that benchmarks for effective interaction be communicated to instructors and that institutions provide training and support for online faculty around teaching presence.  We also encourage institutions that practice monitoring of faculty to communicate policies about such monitoring and to consider its likely impact on organizational trust (e.g., Knox, 2010).  At a minimum, such policies should consider the whole course and the instructional effort and forms of teaching presence reflected outside the narrow band of activity occurring solely in online discussions.

Study Limitations and Future Research

Content analysis is a time- and labor-intensive process. This study was based on the careful review of thousands of individual messages by multiple coders. There are, however, a number of limitations. Because this study used a purposive sample of two archived course sections, and analysis did not begin until approximately eight months after each course ended, it was not feasible to ask the instructors or students through interviews or surveys to reconstruct their participation. In the future these findings might be expanded by examining a broader mix of courses and instructional styles and by conducting interviews to learn more about the intentional and unintentional efforts that online instructors make in manifesting their teaching presence, including when and where they concentrate their instructional effort. Finally, surveys of student attitudes might reveal their perceptions of the effectiveness of these varied approaches.

Conclusions

The current research is among the first to examine an entire course using the CoI framework as an investigative tool. While the theoretical constructs hold, questions are raised about the reliable application of its categories and indicators as a coding tool across an entire course. When the nature of the communicative event moves from threaded discussion to collaborative groups of a different nature (e.g., jigsaw-type activities), the current teaching presence coding scheme may not apply. We suggest that the role of online students may require further articulation and that the theoretical and empirical literature on self-regulated learning may be particularly relevant to the demands of the online environment (Shea & Bidjerano, 2010).

In order to fully represent a community of inquiry in online environments, we concur with previous researchers (e.g., Anderson et al., 2001; Archer, 2010) that researchers need to begin looking at entire courses and not just at threaded discussions or survey data. Because the categories and indicators currently employed in CoI research were primarily conceived through analysis of threaded discussions, future research needs to critically examine their applicability at a course-wide level and to make appropriate changes in order to effectively and reliably measure all three forms of presence within the community of inquiry framework.

References

Akyol, Z. (2009). Examining teaching presence, social presence, cognitive presence, satisfaction and learning in online and blended course contexts (Unpublished doctoral dissertation). Middle East Technical University.

Akyol, Z., & Garrison, D. R. (2008). The development of a community of inquiry over time in an online course: Understanding the progression and integration of social, cognitive and teaching presence. Journal of Asynchronous Learning Networks, 12(3), 3–22.

Allen, E., & Seaman, J. (2008). Staying the course: Online education in the United States, 2008. Needham, MA: Sloan Consortium.

Allen, E., & Seaman, J. (2010). Learning on demand: Online education in the United States, 2009. Needham, MA: Sloan Consortium.

Allen, M., Bourhis, J., Burrell, N., & Mabry, E. (2002). Comparing student satisfaction with distance education to traditional classrooms in higher education: A meta-analysis. American Journal of Distance Education, 16(2), 83–97.

Anderson, T., Rourke, L., Garrison, R. D., & Archer, W. (2001). Assessing teaching presence in a computer conferencing context. Journal of Asynchronous Learning Networks, 5(2), 1–17.

Archer, W. (2010). Beyond online discussions: Extending the community of inquiry framework to entire courses. The Internet and Higher Education, 13(1-2), 69-69.

Bangert, A. (2008).  The influence of social presence and teaching presence on the quality of online critical inquiry. Journal of Computing in Higher Education, 20(1), 34–61.

Benbunan-Fich, R., Hiltz, S. R., & Harasim, L. (2005). The online interaction learning model: An integrated theoretical framework for learning networks. In S. R. Hiltz & R. Goldman (Eds.), Learning together online: Research on asynchronous learning networks (pp. 19–37). Mahwah, NJ: Lawrence Erlbaum.

Berge, Z. L. (1995). Facilitating computer conferencing: Recommendations from the field.  Educational Technology, 35(1), 22–30.

Bernard, R. M., Abrami, P. C., Lou, Y., Borokhovski, E., Wade, A., Wozney, L., Wallet, P. A., Fiset, M., & Huang, B. (2004). How does distance education compare to classroom instruction? A meta-analysis of the empirical literature. Review of Educational Research, 74(3), 379–439.

Blignaut, S., & Trollip, S. R. (2003). Developing a taxonomy of faculty participation in asynchronous learning environments: An exploratory investigation. Computers & Education, 41(2), 149–172.

Braun, N. M. (2008). The effect of student interaction on group performance in asynchronous learning environments (Unpublished doctoral dissertation). University of Minnesota.

Coll, C., Engel, A., & Bustos, A. (2009). Distributed teaching presence and participants' activity profiles: A theoretical approach to the structural analysis of asynchronous learning networks. European Journal of Education, 44(4), 521–538.

Coppola, N. W., Hiltz, S. R., & Rotter, N. G. (2002). Becoming a virtual professor: Pedagogical roles and asynchronous learning networks. Journal of Management Information Systems, 18(4), 169–189.

Curtis, D. D., & Lawson, M. J. (2001). Exploring collaborative online learning. Journal of Asynchronous Learning Networks, 5(1), 21–34.

Dahl, J. (2003). How much are distance education faculty worth? Distance Education Report, 7(14), 5–7.

Davidson-Shivers, G. (2009). Frequency and types of instructor interactions in online instruction. Journal of Interactive Online Learning, 8(1), 23–40.

Dziuban, C., Shea, P., & Arbaugh, B. (2005). Faculty roles and satisfaction in asynchronous learning networks. In S. Hiltz & R. Goldman (Eds.), Learning together online: Research on asynchronous learning networks (pp. 169–190). Mahwah, NJ: Lawrence Erlbaum.

Epstein, J. (2010, June 11). Another kind of academic career path. Inside Higher Ed. Retrieved from http://www.insidehighered.com/news/2010/06/11/aaup

Feinstein, A. R., & Cicchetti, D. V. (1990). High agreement but low kappa: I. The problems of two paradoxes. Journal of Clinical Epidemiology, 43(6), 534–549.

Garrison, D. R., Anderson, T., & Archer, W. (2000). Critical inquiry in a text-based environment: Computer conferencing in higher education. The Internet and Higher Education, 2(2-3), 87–105.

Garrison, D. R., Anderson, T., & Archer, W. (2001). Critical thinking and computer conferencing: A model and tool to assess cognitive presence. American Journal of Distance Education, 15(1), 7–23.

Garrison, D. R., & Archer, W. (2000). A transactional perspective on teaching and learning: A framework for adult higher education. New York, NY: Elsevier Science.

Gorsky, P., & Blau, I. (2009). Online teaching effectiveness: A tale of two instructors. International Review of Research in Open and Distance Learning, 10(3), 1–27.

Hislop, G. W. (2001, October 10–13). Does teaching online take more time? Paper presented at the 31st ASEE/IEEE Frontiers in Education Conference, Reno, NV, Session T1F.

Ice, P., Curtis, R., Phillips, P., & Wells, J. (2007). Using asynchronous audio feedback to enhance teaching presence and students’ sense of community. Journal of Asynchronous Learning Networks, 11(2), 3–25.

Johnson, D. W., & Johnson, R. T. (1996). Cooperation and the use of technology. In D. H. Jonassen (Ed.), Handbook of research for educational communications and technology (pp. 1017–1044). New York: Simon and Schuster Macmillan.

Kamin, C. S., O'Sullivan, P., Deterding, R. R., Younger, M., & Wade, T. (2006). A case study of teaching presence in virtual problem-based learning groups. Medical Teacher, 28(5), 425–428.

Kanuka, H., Rourke, L., & Laflamme, E. (2007). The influence of instructional methods on the quality of online discussion. British Journal of Educational Technology, 38(2), 260–271.

Knox, D. (2010). A good horse runs at the shadow of the whip: Surveillance and organizational trust in online learning environments. The Canadian Journal of Media Studies, 7 (Special Congress Issue). Retrieved from http://cjms.fims.uwo.ca/issues/07-01/dKnoxAGoodHorseFinal.pdf

Kupczynski, L., Ice, P., Wiesenmayer, R., & McCluskey, F. (2010). Student perceptions of the relationship between indicators of teaching presence and success in online courses. Journal of Interactive Online Learning, 9(1), 23–43.

Lazarus, B. D. (2003). Teaching courses online: How much time does it take? Journal of Asynchronous Learning Networks, 7(3), 47–54.

Ling, L. H. (2007). Community of inquiry in an online undergraduate information technology course. Journal of Information Technology Education, 6, 153–168.

Means, B., Toyama, Y., Murphy, R., Bakia, M., & Jones, K. (2009). Evaluation of evidence-based practices in online learning: A meta-analysis and review of online learning studies. Washington, D.C.: U.S. Department of Education, Office of Planning, Evaluation, and Policy Development.

Morris, L. V., Xu, H., & Finnegan, C. L. (2005). Roles of faculty in teaching asynchronous undergraduate courses. Journal of Asynchronous Learning Networks, 9(1), 65–82.

Omale, N., Hung, W. C., Luetkehans, L., & Cooke-Plagwitz, J. (2009). Learning in 3-D multiuser virtual environments: Exploring the use of unique 3-D attributes for online problem-based learning. British Journal of Educational Technology, 40(3), 480–495.

Palloff, R., & Pratt, K. (2007). Building online learning communities: Effective strategies for the virtual classroom. San Francisco: John Wiley & Sons.

Parsad, B., & Lewis, L. (2008). Distance education at degree-granting postsecondary institutions: 2006–07 (NCES 2009–044). Washington, DC: National Center for Education Statistics, Institute of Education Sciences, U.S. Department of Education.

Picciano, A. (2002). Beyond student perceptions: Issues of interaction, presence, and performance in an online course. Journal of Asynchronous Learning Networks, 6(1), 21–40.

Rourke, L., Anderson, T., Garrison, D. R., & Archer, W. (1999). Assessing social presence in asynchronous text-based computer conferencing. Journal of Distance Education, 14(2), 50–71.

Rourke, L., Anderson, T., Garrison, D. R., & Archer, W. (2001). Methodological issues in the content analysis of computer conference transcripts. International Journal of Artificial Intelligence in Education, 12(1), 8–22.

Rourke, L., & Kanuka, H. (2009). Learning in communities of inquiry: A review of the literature. Journal of Distance Education, 23(1), 19–48.

Schrire, S. (2006). Knowledge building in asynchronous discussion groups: Going beyond quantitative analysis. Computers & Education, 46(1), 49–70.

Shea, P., & Bidjerano, T. (2010). Learning presence: Towards a theory of self-efficacy, self-regulation, and the development of communities of inquiry in online and blended learning environments. Computers & Education, 55(1), 1721–1731.

Shea, P., Hayes, S., & Vickers, J. (2009). A re-examination of the community of inquiry framework: Social and content analysis. Paper presented at the American Educational Research Association Annual Meeting, San Diego, CA.

Shea, P., Hayes, S., Vickers, J., Gozza-Cohen, M., Uzuner, S., Mehta, R., et al. (2010). A re-examination of the community of inquiry framework: Social network and content analysis. Internet and Higher Education, 13(1-2), 10–21.

Shea, P., Pickett, A. M., & Pelz, W. E. (2003). A follow-up investigation of teaching presence in the SUNY Learning Network. Journal of Asynchronous Learning Networks, 7(2), 68–80.

Shea, P., Sau Li, C., & Pickett, A. (2006). A study of teaching presence and student sense of learning community in fully online and web-enhanced college courses. The Internet and Higher Education, 9(3), 175–190.

Shea, P., Vickers, J., Hayes, S., Gozza-Cohen, M., Uzuner, S., Mehta, R., et al. (2009). Understanding online learning: Cognitive presence and the SOLO taxonomy. Paper presented at the 15th Annual Sloan-C International Conference on Online Learning.

Shin, J. K. (2008). Building an effective international community of inquiry for EFL professionals in an asynchronous online discussion board (Unpublished doctoral dissertation). University of Maryland, Baltimore County.

Shulman, L. S. (1986).  Those who understand: Knowledge growth in teaching.  Educational Researcher, 15(2), 4–14.

Simonson, M., Smaldino, S., Albright, M., & Zvacek, S. (2009). Teaching and learning at a distance: Foundations of distance education. New York, NY: Pearson.

Stein, D., & Wanstreet, C. (2003). Role of social presence, choice of online or face-to-face group format, and satisfaction with perceived knowledge gained in a distance learning environment. Paper presented at the Midwest Research to Practice Conference in Adult, Continuing, and Distance Education.

Stein, D. S., Wanstreet, C. E., Glazer, H. R., Engle, C. L., Harris, R. A., Johnston, S. M., et al. (2007). Creating shared understanding through chats in a community of inquiry. Internet and Higher Education, 10(2), 103–115.

Stodel, E., Thompson, T., & MacDonald, C. (2006). Learners’ perspectives on what is missing from online learning: Interpretations through the community of inquiry framework. International Review of Research in Open and Distance Learning, 7(3), 1–22.

Tallent-Runnels, M. K., Thomas, J. A., Lan, W. Y., Cooper, S., Ahern, T. C., Shaw, S. M., et al. (2006). Teaching courses online: A review of the research. Review of Educational Research, 76(1), 93–135.

U.S. Department of Education, National Center for Education Statistics. (2008). Distance education at degree-granting postsecondary institutions: 2006–07 (NCES 2009–044). Washington, DC: Institute of Education Sciences. Retrieved from http://nces.ed.gov/pubs2009/2009044.pdf

Vaughan, N., & Garrison, D. R. (2005). Creating cognitive presence in a blended faculty development community. The Internet and Higher Education, 8(1), 1–12.

Vaughan, N., & Garrison, D. R. (2006). How blended learning can support a faculty development community of inquiry. Journal of Asynchronous Learning Networks, 10(4), 139-152.

Whipp, J., & Lorentz, R. A. (2009). Cognitive and social help giving in online teaching: An exploratory study. Educational Technology Research and Development, 57, 169–192.

Zhao, Y., Lei, J., Yan, B., Lai, C., & Tan, H. S. (2005). What makes the difference? A practical analysis of research on the effectiveness of distance education. Teachers College Record, 107(8), 1836–1884.

Zimmerman, B., & Schunk, D. (2001). Self-regulated learning and academic achievement: Theoretical perspectives. Mahwah, NJ: Lawrence Erlbaum Associates.

Appendix A

Appendix B

Coding Scheme for Teaching Presence Showing Revisions

Please refer to the PDF version of this article or to the supplementary files for the full coding scheme.






ISSN: 1492-3831