October – 2010

A Review of Trends in Distance Education Scholarship at Research Universities in North America, 1998-2007

Randall S. Davies, Scott L. Howell, and Jo Ann Petrie
Brigham Young University, USA

Abstract

This article explores and summarizes trends in research and scholarship over the last decade (i.e., 1998-2007) for students completing dissertations and theses in the area of distance education.  The topics addressed, research designs utilized, and data collection and analysis methods used were compiled and analyzed.  Results from this study indicate that most of the distance education research conducted by graduate students in this period was descriptive, often addressing the perceptions, concerns, and satisfaction levels of various stakeholders with a particular distance education experience.  Studies of this type typically used self-report surveys and analyzed the data using descriptive statistics.  Consistent with the concerns of many distance education scholars, there was a lack of graduate student research aimed at developing a theory base in distance education.  On a positive note, projects directly comparing distance education with traditional face-to-face classrooms to determine the merit of specific programs declined considerably from 1998 to 2007.  This result might indicate that distance learning is becoming accepted as a viable and important educational experience in its own right.  Another encouraging finding was the decreased emphasis on studies focused on technology issues, such as those analyzing the quality of distance education technology and questioning educators’ ability to provide an acceptable technology-enabled distance learning experience.

Keywords: Distance education; graduate student research; research utilization

Introduction

When considering the general state of distance education research, an important starting point is to examine what is published in scholarly journals and to conduct a review of theses and doctoral dissertations (Moore & Kearsley, 2005).  Several articles over the past decade have chronicled the research trends of studies published in major distance education journals (e.g., Berge & Mrozowski, 2001; Lee, Driscoll, & Nelson, 2004; Ritzhaupt, Stewart, Smith, & Barron, 2010; Zawacki-Richter, Baecker, & Vogt, 2009).  This article explores and summarizes trends in research and scholarship over the period of 1998-2007 for students completing dissertations and theses in the field of distance education.  More specifically, the topics addressed, research designs utilized, and data collection and analysis methods used were compiled and analyzed.            

General State of Distance Education Research

There is little doubt that distance education is an innovative and expanding field (Allen & Seaman, 2007).  In 1995 only one-third of the institutions of higher education in the United States offered distance education courses (Lewis, Snow, & Farris, 1999).  The most recent national study (2006-07) on distance education sponsored by the Department of Education indicates that “two-thirds (66%) of 2-year and 4-year Title IV degree-granting postsecondary institutions reported offering online, hybrid/blended online, or other distance education courses” (Parsad & Lewis, 2008, p. 2).  For a variety of reasons, distance education and online learning are appealing to students, teachers, and administrators in many fields.  But even with this level of acceptance and use, many researchers acknowledge that unless the amount and quality of distance education research and scholarship are improved, substantial improvements in teaching and learning are unlikely (Lee, Driscoll, & Nelson, 2004; Moore & Kearsley, 2005).

Naidu (2005) observed that the majority of distance education research has been descriptive (i.e., studies that describe how or what is being done in a case study context) and kindly suggested that the rigor and quality of much of this research are suspect.  While high-quality descriptive research has its place and contributes to the development of a working knowledge of important aspects of the field, some argue that distance education must develop new scientific models using more rigorous research methodologies (Tallent-Runnels et al., 2006).

Another common type of research used to study distance education programs and initiatives is evaluation research, which examines the effectiveness of distance education practices (Moore & Kearsley, 2005).  Often, effectiveness evaluations are based on a comparison with traditional face-to-face classrooms (Gaytan, 2007; Tucker, 2001).  The typical criteria for measuring the effectiveness of distance education instruction focus on analyses of student achievement in, attitude toward, and satisfaction with the learning experience (Phipps & Merisotis, 1999).  Critics of this practice point out the poor methodological design of some of these comparison studies and the questionable quality of assessment instruments used to gather comparison data; they also suggest that studies simply comparing faculty and student perception of and satisfaction with distance learning and traditional models of face-to-face instruction are rather weak evidence of value (Beaudoin, 2004; Bernard, Abrami, Lou, & Borokhovski, 2004; Meyer, 2002; Tallent-Runnels et al., 2006).  More fundamentally, while most studies show distance education to be as effective as traditional education (Meyer, 2004; Russell, 1999; Saba, 2000; Simonson, 2002; Zhao, Lei, Yan, Lai, & Tan, 2005), the need to validate the importance and viability of distance education based on comparisons with face-to-face learning experiences seems to expose a deep-rooted insecurity within the distance learning community—a fear that distance education is regarded as a somewhat substandard and less valued educational practice.  This phenomenon has prompted calls for more formative evaluation practices to address concerns regarding the need for (1) improving the distance education experience, (2) establishing acceptable principles of best practice, and (3) developing standards of quality by which distance education practices can be judged (Beaudoin, 2004; Garrison & Anderson, 2003; Meyer, 2004; Sherry, 2003). 

Finally, while it should not be assumed that quality distance education research does not exist (Meyer, 2002), many distance education scholars express concern regarding the perceived emphasis on the pragmatic rather than the theoretical.  They point out the apparent inadequacy of research aimed at establishing a solid theory base from which distance education can develop (Beaudoin, 2004; Garrison & Anderson, 2003; Saba, 2003; Moore & Kearsley, 2005).  New scholars typically learn to conduct research in graduate school as they complete thesis and dissertation projects.  For this reason, an analysis of research topics and methods in graduate schools promises to provide an important perspective and update on the state of research in the field.

Research Methods

This study used content analysis techniques to determine trends in research topics or purposes, research designs, and types of data collection and analysis methods.  A thematic analysis was employed to determine the most frequently addressed topics and most commonly used designs and methods in order to explore changes in these aspects of graduate student research in distance education for the period of 1998-2007.

Manuscript Selection Criteria and Process

Moore and Kearsley (2005) point out the difficulty researchers have in accessing all the relevant graduate student research on the topic of distance education.  Internet technologies make this task possible, but some studies are not labelled as distance education research per se, and many manuscripts have been submitted with abstracts only.  For this study, abstracts alone were insufficient for the desired analysis; full-text manuscripts were needed.  In addition, the time and effort involved in reading and categorizing a decade’s worth of available research manuscripts presented a daunting task.   This study sampled manuscripts at three points in the last decade (i.e., 1998, 2002, & 2007) to uncover any trends that may exist.

The sample used in this study includes all full-text English doctoral dissertations and master’s theses located using the descriptor distance education submitted to the ProQuest Dissertation and Theses Database (PQDT) in 1998, 2002, and 2007.  PQDT (formerly known as UMI) is a commercial database housing a searchable archive of published dissertations and theses (see proquest.com).  This database provided a suitable pool of graduate student research from North America from which we could study this issue.  A representative from ProQuest disclosed to the authors that PQDT receives dissertations and theses from 97.2% (276 of 284) of research universities in the United States and from 87.2% (41 of 47) of Canadian research universities (personal correspondence, May 17, 2010).

A keyword search was performed using the phrase distance education as the general search criterion.  The thesaurus for the Education Resources Information Center (ERIC), sponsored by the United States Department of Education, added distance education to its controlled vocabulary on October 24, 1983.  No similar phrases or terms related to distance education were included in the search criteria for manuscripts; however, the ERIC thesaurus cross-references the following related terms with distance education: asynchronous communication, blended learning, computer-mediated communication, continuing education, correspondence schools, educational radio, educational television, electronic learning, extension education, external degree programs, geographic isolation, handheld devices, home study, independent study, laptop computers, lifelong learning, mass instruction, nontraditional education, online courses, open universities, outreach programs, part-time students, synchronous communication, telecommunications, telecourses, virtual classrooms, virtual universities, and web-based instruction (see www.eric.ed.gov).

In 1990, ERIC had 1,260 academic submissions associated with the controlled vocabulary distance education.  By 1995 the number of citations in this category had increased to 2,709, and by the time of this writing, the number had increased to just under 12,000.

Manuscript Coding

Each manuscript selected for analysis was read and coded by two of the seven graduate students who participated in the manuscript coding process.  All raters, who were paid by the hour, were trained in the coding process, and random quality checks were performed to ensure a satisfactory level of coding, with training updates provided as needed.  Each manuscript was categorized according to the general topics addressed in the study, the research designs utilized, and the data collection and analysis methods used.  Initial inter-rater reliability was determined; however, all discrepancies in ratings were arbitrated by an independent third rater to establish a definitive final count in each area.  Many individual manuscripts addressed more than one topic or utilized multiple data collection and analysis methods.  All principal topics addressed and methods used in each study were included in the count.  Results of the classifications for each of the four areas were compared across years.
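
The article does not specify how initial inter-rater reliability was computed.  As a minimal sketch only, assuming a simple two-rater agreement check on nominal topic codes, the following Python snippet computes percent agreement and Cohen’s kappa; the category labels and ratings are entirely hypothetical.

    from collections import Counter

    def percent_agreement(rater_a, rater_b):
        """Proportion of manuscripts the two raters coded identically."""
        matches = sum(a == b for a, b in zip(rater_a, rater_b))
        return matches / len(rater_a)

    def cohens_kappa(rater_a, rater_b):
        """Chance-corrected agreement between two raters on nominal codes."""
        n = len(rater_a)
        observed = percent_agreement(rater_a, rater_b)
        freq_a = Counter(rater_a)
        freq_b = Counter(rater_b)
        # Probability that both raters assign the same category by chance.
        expected = sum(
            (freq_a[c] / n) * (freq_b[c] / n) for c in set(rater_a) | set(rater_b)
        )
        return (observed - expected) / (1 - expected)

    # Hypothetical topic codes assigned by two raters to ten manuscripts.
    rater_1 = ["student", "faculty", "methods", "student", "technology",
               "student", "design", "faculty", "student", "methods"]
    rater_2 = ["student", "faculty", "methods", "faculty", "technology",
               "student", "design", "faculty", "student", "design"]

    print(f"Percent agreement: {percent_agreement(rater_1, rater_2):.2f}")
    print(f"Cohen's kappa:     {cohens_kappa(rater_1, rater_2):.2f}")

A chance-corrected index such as kappa is typically reported alongside raw agreement because raw agreement can appear high simply when one category dominates the coding scheme.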

Classification of Coding Categories

Categories for coding were determined using an a priori approach.  Topic categories largely follow those identified in a similar study conducted by Lindsay, Wright, and Howell (2004).  Table 1 provides a summary of the topic categories with a description of category contents.  Quantitative research designs were identified from research texts; however, qualitative research designs do not share the same degree of specificity and therefore were generally classified as qualitative survey research (i.e., surveys with open-ended questions), ethnographic studies, or narrative phenomenological studies (see Table 2).  Data analysis techniques were identified from research texts; however, since qualitative analysis methods were only generally described by student researchers, they were categorized together.  Qualitative analysis usually included segmenting (organizing) data from open-ended surveys, interviews, and observations and then describing patterns found in the responses or observations.

Table 1

 

Table 2

Findings and Discussion

This review certainly demonstrated to the researchers the variability in the quality of current graduate student research.  This study does not, however, attempt to judge the quality or appropriateness of the methods graduate students utilized to conduct distance education research.  This analysis is primarily descriptive with the intention of understanding what topics graduate students studied and what methods were employed in their research.

Trends in Research Topics

Table 3 presents the distribution of research topics addressed each year.  Approximately 100 research papers were extracted for each of the years sampled in this study.  The sample includes all manuscripts fitting the selection criteria each year.  The “percentage of total” columns in the table do not add to 100% as some of the studies addressed more than one topic.  For example, some papers considered both student and faculty issues in the same study.  The frequency counts provided represent how often specific purposes or general topics were addressed by graduate student researchers.

Table 3
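
To make the preceding note about percentages concrete: because a manuscript may carry more than one topic code, per-topic percentages are computed against the number of manuscripts sampled, not the number of codes assigned, so the column totals can exceed 100%.  The following Python sketch illustrates this tallying with entirely hypothetical topic labels and data.

    from collections import Counter

    # Each manuscript may be coded with more than one topic (hypothetical data).
    coded_manuscripts_2007 = [
        {"student issues", "faculty issues"},
        {"methods testing"},
        {"student issues", "instructional design"},
        {"technology issues"},
        {"student issues"},
    ]

    topic_counts = Counter(
        topic for manuscript in coded_manuscripts_2007 for topic in manuscript
    )
    total_manuscripts = len(coded_manuscripts_2007)

    for topic, count in topic_counts.most_common():
        pct = 100 * count / total_manuscripts
        print(f"{topic:22s} {count:3d}  {pct:5.1f}% of manuscripts")
    # Percentages sum to more than 100% whenever manuscripts carry multiple codes.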

Teacher, student, and administrative issues.

The data from this study suggest a fairly consistent research emphasis on student and faculty issues.  These categories include topics that address the perceptions of stakeholders, i.e., their attitudes toward, satisfaction with, and thoughts regarding specific distance education experiences.  Governance and administrative issues as research topics also fall into this general area of research.  They typically follow a similar type of research design and, though less frequent, have been fairly consistent as topics of interest.

Methods testing.

Research that tests methods falls into the category of evaluation research often labelled as media comparisons due to the tendency of researchers to compare distance and traditional instructional practices.  Testing distance education methods has also been fairly consistent as a purpose of many studies. Yet while the frequency of methods testing studies has remained fairly consistent, the trend has moved away from comparisons with face-to-face classroom experiences.  In 1998, 12 of the 17 methods testing studies (71%) determined the effectiveness of the distance education initiative by a comparison with a traditional face-to-face learning experience; in 2007, this number dropped to 5 of the 18 studies (28%).

Instructional design and pedagogy.

Studies that have considered the design and pedagogy involved in distance education learning situations have also been fairly common, although graduate students’ interest in studying such topics seems to have declined slightly since 1998.

Technology issues.

An interesting trend in research topics is the decrease in studies addressing technology.  Apparently, concern for whether distance education technology would be reliable or advanced enough to facilitate the demands of distance education has diminished considerably.  While technology issues were a large concern in 1998, with 17 studies addressing this issue, only 3 studies researched this topic in 2007. 

Research no-shows.

Several areas of research seem to be of less interest to graduate students.  Distance education theory is the most notable in the list of infrequently studied topics or purposes, along with economic issues, scalability, historical foundations of distance education, and studies involving an international perspective.  To be fair, many students cited distance education theory, or in some way tested theory, in their studies.  Every study analyzed in this sample included a literature review of some sort.  But in this sample, only a couple of graduate student studies each year focused their research directly on theory development or exploration.

Trends in Research Designs

The frequency of various research designs utilized each year in graduate student research is reported in Table 4.

Table 4

As Naidu (2005) suggests, most student research seems to be descriptive.  A pattern from the research studies analyzed in this sample indicates a strong and increasing reliance on survey research designs and case studies involving self-report evidence from stakeholders.  The number of randomized controlled trials and causal comparative (i.e., ex post facto) designs increased, but consistently the method for establishing comparison groups was to select participants from existing groups or convenience samples (i.e., quasi-experimental designs).  The number of studies using qualitative surveys has increased (i.e., predominantly surveys using open-ended items with no specific predetermined variables of interest), but the frequency with which qualitative designs have been employed remains fairly small and consistent.

Trends in Research Data Collection and Analysis

Tables 5 and 6 present the various data collection techniques and data analysis methods used most often each year in graduate student research.  Since the predominant research design used in this sample involved survey research, it is understandable that the most commonly used data collection method involved surveys.  More than half of the studies utilized a survey of some type, including both self-report surveys and attitudinal scales.  Student researchers tended to use interviews as a principal source of qualitative data, although many qualitative studies used a variety of data collection methods, including surveys or analyses of existing documents and artifacts.  

Table 5

Table 6

Most of the studies in this sample used some descriptive data analysis (e.g., frequencies & percentages).  Studies identified specifically as using descriptive statistics were those that used this type of data analysis exclusively or predominantly.  A large number of student researchers did use descriptive statistics as their main analysis tools. 

In the studies from this sample, the qualitative data analysis methods were not described in specific detail; thus, these methods were combined in the count for this study.  Typical qualitative data analysis seems to have included segmenting or organizing data from open-ended surveys, interviews, and observations and then describing patterns found in the responses or observations.  Trends in the amount of qualitative data analysis being used seem proportionally similar to the number of qualitative data collection methods used.

Of interest in this data set is the frequent use of quantitative statistical analysis techniques involving t-tests and analysis of variance (ANOVA).  The use of such data analysis techniques seems high, given the data collection methods employed.  One observation from the coding of manuscripts that might help explain this apparent inconsistency is that students often used these types of analysis to make comparisons in survey results based on disaggregated groups of respondents.  While the appropriateness of this practice with survey data is suspect, given the type of data that surveys produce and the assumptions governing the use of these analysis techniques (Reynolds, Livingston, & Willson, 2006), this is what was reportedly done, and it may help explain the disproportionate frequencies.
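
For readers unfamiliar with the practice described above, the following sketch shows the kind of comparison these student studies reportedly made: an independent-samples t-test (and, for more than two groups, a one-way ANOVA) applied to disaggregated survey ratings.  The data are hypothetical, and the example illustrates the mechanics rather than endorsing the use of these tests on Likert-type items, whose ordinal nature and distributional assumptions may not be satisfied.

    import numpy as np
    from scipy import stats

    # Hypothetical Likert-type satisfaction ratings (1-5), disaggregated by group.
    on_campus = np.array([4, 5, 3, 4, 4, 5, 3, 4, 2, 4])
    distance  = np.array([3, 4, 4, 3, 5, 3, 4, 3, 3, 4])

    # Independent-samples t-test comparing mean ratings of the two groups.
    t_stat, p_value = stats.ttest_ind(on_campus, distance)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

    # One-way ANOVA generalizes the comparison to three or more groups.
    faculty = np.array([4, 4, 5, 4, 3, 4, 5, 4, 4, 3])
    f_stat, p_anova = stats.f_oneway(on_campus, distance, faculty)
    print(f"F = {f_stat:.2f}, p = {p_anova:.3f}")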

Conclusions

This analysis of dissertations and theses in the field of distance education provides a macro perspective that promises to inform future research and meta-analysis.  One limitation of this study is that it does not include student research conducted outside of North America.  Still, it is evident that during this past decade, while the number of dissertations and theses prepared in North America has remained fairly static, shifts in topics studied and research methods used have occurred. 

Consistent with Naidu’s (2005) observations regarding the types of research being conducted in the field of distance education at that time, this study found that over the past decade, most graduate level research has been descriptive.  More often than not, graduate students’ research has addressed the perceptions, concerns, and satisfaction levels of various stakeholders with a particular distance education experience.  These types of studies usually administered self-report surveys and analyzed the data using descriptive statistics. While there is value in conducting quality descriptive research, a lack of research addressing other important topics is evident. 

Validating the concern of distance education scholars regarding the lack of research intended to establish distance education theory (Beaudoin, 2004; Garrison & Anderson, 2003; Saba, 2003; Moore & Kearsley, 2005), this study found little graduate student research aimed at developing a theory base for distance education.  Far too few studies explored new theory or challenged existing theory.  Factors that may help explain this finding include the challenges associated with conducting any type of grounded theory research.  Many graduate students lack the experience, time, and resources needed to conduct this type of research.  Additionally, they may be limited in their access to the participants and educational situations needed to rigorously explore and establish distance education theory.  Moreover, the purpose of having graduate students conduct research is often to have them demonstrate their ability to conduct research rather than to produce groundbreaking research.  Regrettably, these data suggest a lack of grounded theory research.  It may be incumbent on research institutions that study distance education to encourage students to engage more in theory-based research.  They might also consider more carefully the analysis methods used and the degree to which analysis techniques align with the data collection methods.

On a more positive note, we were encouraged to see a notable trend away from instructional media studies that compare distance education with traditional instructional practices.  Evaluation that involves methods testing has been a consistent incentive for conducting research; however, between 1998 and 2007 far fewer graduate research projects attempted to determine the merit or worth of the specific distance education practice by making explicit comparisons with traditional face-to-face learning environments.  This decrease might indicate that we, as a community of researchers and perhaps society in general, are beginning to accept distance learning as an important and viable educational experience in its own right.

Another encouraging finding is the decreased number of studies focused on technology issues, particularly concern about the quality of technology and the ability of distance educators to provide an acceptable technology-enabled learning experience.  By most measures, the quality and availability of educational technology in schools have increased significantly, as has the technological literacy of teachers and students (McMillan-Culp, Honey, & Mandinach, 2005; Russell, Bebell, O’Dwyer, & O’Connor, 2003).  This progress has not eliminated technology problems, but those in distance learning settings seem to have accepted that technology problems will occur, and they cope with the associated challenges when problems happen.  This is a potential topic for further research.

References

Allen, I. E., & Seaman, J. (2007). Online nation: Five years of growth in online learning. Needham, MA: Sloan Consortium.

Beaudoin, M. F. (2004). Distance education leadership: Appraising theory and advancing practice. In Reflections on research, faculty and leadership in distance education. ASF Series, Volume 8 (pp. 91-101). Oldenburg, Germany: Oldenburg University Press.

Berge, Z., & Mrozowski, S. (2001). Review of research in distance education. American Journal of Distance Education, 15(3), 5-19.

Bernard, R. M., Abrami, P. C., Lou, Y., & Borokhovski, E. (2004). A methodological morass: How can we improve the quality of quantitative research in distance education? Distance Education, 25(2), 176-198.

Garrison, D. R., & Anderson, T. (2003). E–learning in the 21st century: A framework for research and practice. London, UK: Routledge/Falmer.

Gaytan, J. (2007). Vision shaping the future of online education: Understanding its historical evolution, implications, and assumptions. Online Journal of Distance Learning Administration, 10(2). Retrieved from http://www.westga.edu/~distance/ojdla/summer102/gaytan102.htm

Lee, Y., Driscoll, M. P., & Nelson, D. W. (2004). The past, present, and future of research in distance education: Results of a content analysis. American Journal of Distance Education, 18(4), 225-241.

Lewis, L., Snow, K., & Farris, E. (1999). Distance education at postsecondary education institutions: 1997-98 (National Center for Education Statistics 2000-013). Washington, DC: U.S. Department of Education, Office of Educational Research and Improvement.

Lindsay, N., Wright, T., & Howell, S. (2004). Coming of age: The rise of research in distance education. Continuing Higher Education Review, 68, 96-103.

McMillan-Culp, K., Honey, M., & Mandinach, E. (2005). A retrospective on twenty years of educational technology policy. Journal of Educational Computing Research, 32(3), 279-307.

Meyer, K. A. (2002). Quality in distance education: Focus on online learning.  ASHE- ERIC Higher Education Report Series, 29(4). Retrieved from ERIC database. (ED470042)

Meyer, K. A. (2004). Putting the distance learning comparison study in perspective: Its role as personal journey research. Online Journal of Distance Learning Administration, 7(1).

Moore, M. G., & Kearsley, G. (2005). Distance education: A systems view (2nd ed.). Belmont, CA: Wadsworth Publishing Company.

Naidu, S. (2005). Researching distance education and e-learning. In C. Howard, J. V. Boettcher, L. Justice, K. Schenk, P. Rogers, & G. A. Berg (Eds.), Encyclopedia of distance learning (Vol. 4, pp. 1564-1572).  Hershey, PA: Idea Group, Inc.

Parsad, B., & Lewis, L. (2008). Distance education at degree-granting postsecondary institutions: 2006–07 (National Center for Education Statistics, Institute of Education Sciences 2009-044). Washington, DC: U.S. Department of Education.

Phipps, R., & Merisotis, J. (1999). What’s the difference? A review of contemporary research on the effectiveness of distance learning in higher education. Washington, DC: The Institute for Higher Education Policy.

Reynolds, C. R., Livingston, R. B., & Willson, V. (2006). Measurement and assessment in education. Boston, MA: Allyn and Bacon.

Ritzhaupt, A. D., Stewart, M., Smith, P., & Barron, A. E. (2010). An investigation of distance education in North American research literature using co-word analysis. International Review of Research in Open and Distance Learning, 11(1). Retrieved from http://www.irrodl.org/index.php/irrodl/article/view/763/1516.

Russell, M., Bebell, D., O’Dwyer, L., & O’Connor, K. (2003). Examining teacher technology use: Implications for preservice and inservice teacher preparation. Journal of Teacher Education, 54(4), 297-310.

Russell, T. L. (1999). The no significant difference phenomenon. Chapel Hill, NC: Office of Instructional Telecommunications, North Carolina State University.

Saba, F. (2000). Research in distance education: A status report. International Review of Research in Open and Distance Learning, 1(1). Retrieved from http://www.irrodl.org/index.php/irrodl/article/view/4/24.  

Saba, F. (2003). Distance education theory, methodology, and epistemology: A pragmatic paradigm. In M. G. Moore & W. G. Anderson (Eds.), Handbook of distance education (pp. 3-20). Mahwah, NJ: Lawrence Erlbaum Associates.

Sherry, A. C. (2003). Quality and its measurement in distance education. In M. G. Moore & W. G. Anderson (Eds.), Handbook of distance education (pp. 435-360). Mahwah, NJ: Lawrence Erlbaum Associates.

Simonson, M. (2002). In case you’re asked: The effectiveness of distance education. The Quarterly Review of Distance Education, 3(4), vii-ix.

Tallent-Runnels, M. K., Thomas, J. A., Lan, W. Y., Cooper, S., Ahern, T. C., Shaw, S. M., & Liu, X. (2006). Teaching courses online: A review of the research. Review of Educational Research, 76(1), 93-135.

Tucker, S. (2001). Distance education: Better, worse, or as good as traditional education? Online Journal of Distance Learning Administration, 4(4). Retrieved from http://www.westga.edu/~distance/ojdla/winter44/tucker44.html

Zawacki-Richter, O., Baecker, E. M., & Vogt, S. (2009). Review of distance education research (2000–2008): Analysis of research areas, methods, and authorship patterns. International Review of Research in Open and Distance Learning, 10(6). Retrieved from http://www.irrodl.org/index.php/irrodl/article/view/741/1461.  

Zhao, Y., Lei, J., Yan, B., Lai, C., & Tan, H. S. (2005). What makes the difference? A practical analysis of research on the effectiveness of distance education. Teachers College Record, 107(8), 1836-1884.



PID: http://hdl.handle.net/10515/sy51g0j74



ISSN: 1492-3831