
December – 2006

Feedback Model to Support Designers of Blended-Learning Courses

Hans G. K. Hummel
Open University of the Netherlands

Abstract

Although extensive research has been carried out on the role of feedback in education, and many theoretical models are available, procedures and guidelines for actually designing and implementing feedback in practice have so far remained scarce. This explorative study presents a preliminary six-phase design model for feedback (6P/ FB-model) in blended learning courses. Each phase includes leading questions and criteria that guide the designer. After describing the model, we report research into the usability and quality of draft versions of this model. Participants in both a small usability pilot and an expert appraisal survey rated and commented on the model. We conclude that the overall quality of the model was perceived as sufficient, although experts recommended major revisions before the model could actually be used in daily practice.

Keywords: feedback; blended learning; instructional design model

Feedback in Blended Learning

Distance education and lifelong learning call for individualised support to large and heterogeneous groups of learners. In such large, up-scaled learning environments, direct teacher-student interaction is often not considered an economically feasible option. Furthermore, lifelong learners at various stages of their lives, coming from various contexts and having different backgrounds, show more variation in learning history and learning profile (needs and preferences), and therefore need more customised support than more traditional cohorts of students. Feedback can be considered an important, if not the most important, support mechanism in a variety of educational contexts. It consists of stimulating or corrective information about tasks students are performing (Mory, 2003). In more traditional education, feedback is often handled by teachers who provide students with tailor-made information in direct face-to-face interaction. When relatively large numbers of students need to be served by relatively few teachers, individualised support comes under pressure because of ‘bandwidth’ problems (i.e., constraints on the intensity of tutoring or available tutoring time per student; see Wiley & Edwards, 2003), its labour-intensive character, and related costs. But even when the number of students remains low, we have to carefully consider which alternatives for providing individualised feedback would be most suitable given the specific educational context (Nelson, 1999).

Although extensive research has been carried out into feedback’s role in education, yielding many theoretical models (Butler & Winne, 1995), procedures and guidelines for actually designing and implementing feedback in educational practice have so far remained scarce. This study intends to address that gap by providing teachers of distance or blended learning courses with an instructional design model for feedback (6P/ FB-model) that describes procedures and guidelines on how best to provide feedback to their students in a variety of educational contexts.

Feedback

The concept of feedback in learning is actually an ‘umbrella concept’ that entails several meanings beyond the narrow meaning of feeding back information after task completion. Pellone (1991) argues that students should not only be told whether they have given the right answer (feedback), but also be encouraged when providing a correct answer (positive reinforcement), or prompted when they need more information while thinking about correct answers (cueing). Nowadays, both concrete, more product-oriented information after task execution (feedback) and abstract, process-oriented information before or during task execution (feedforward, feedthrough) are considered necessary for schema-based learning at every step of solving (complex) problems (Van Merriënboer, 1997). Note that this article (and the model it describes) defines the concept of feedback broadly, to denominate feedforward, feedthrough, and feedback (in its original, narrower meaning).

Feedback has long been considered a means to control and influence learning (Mory, 2003). Feedback has always been intended to steer the learning process based on a diagnosis of actual progress, and was considered a specific type of support at the level of concrete assignments or tasks. Feedback about progress on tasks can be expressed simply as either ‘right or wrong,’ but will more often also contain an evaluation on multiple facets, which might even be contradictory. Complex tasks may have not just one but several valid solutions, which will depend on the weights assigned to various (contradictory or competing) factors under consideration (e.g., economic criteria may outweigh environmental criteria when trying to find a good solution for the hole in the ozone layer).

Instructional guidelines on more process-oriented types of feedback appear to be scarce. Effects of feedback have primarily been studied in contrived experimental learning situations, in the form of outcome feedback provided after a learner responds to relatively simple and self-contained tasks with simple solutions (Mory, 2003). Results from these studies cannot simply be transferred to constructivist learning based on complex problem-solving tasks with many possible solutions. Feedback should then take the form of cueing or task-valid cognitive feedback that facilitates schema-based learning (Balzer, Doherty, & O’Connor, 1989; Narciss, 1999; Whitehall & MacDonald, 1993). Such process-oriented formats (feedforward, feedthrough) pay attention to the problem-solving process by providing general strategies and heuristics, enabling learners to construct or adapt schemata (Chi et al., 2001) and deduce a specific solution. For instance, Process Worksheets may contain a layout with keywords or driving questions (Land, 2000) reflecting a strategic approach. One example of a Process Worksheet is a quality control checklist to be used during assignment preparation, containing various evaluation criteria (e.g., criteria for teaching law students to prepare and hold an effective plea in court). Some studies (Ley & Young, 2001; Mevarech & Kramarski, 2003; Hummel, Paas & Koper, 2004) have demonstrated positive effects of combining evaluation criteria in a Process Worksheet during assignment preparation with subsequently providing assignment evaluations in a Worked Example based on the same criteria (Renkl, 2002; Atkinson et al., 2000).

Blended learning

Embedding prefabricated feedback in learning materials, based on prior learner experience and the problems most often encountered, is one way to offload teacher effort. Such ‘common denominators’ will not suffice, however, when learners encounter more specific problems, for example when solving complex problems. Combining face-to-face support with support through online (virtual) learning environments offers new possibilities for ‘blended learning’ (Hannafin, Land, & Oliver, 1999; Jonassen, 1999; Van Eijl et al., 2004). Concrete implementations of feedback need to be tailored to the specific requirements of each ‘blend’ (such requirements will be treated when we describe our feedback model).

Roles, procedures, and guidelines for designing feedback in more traditional education (e.g., a combination of written learning material with teacher-based instruction) or in interactive computer programs meant for self-study now need to be reconsidered for new technologies, as well as for new approaches to learning and for the shifts in feedback’s roles introduced above (from product- to process-oriented information, and from supporting single to multiple solutions). This study aims to describe such roles, procedures, and guidelines in a comprehensive design model for feedback in various educational contexts or ‘blends of learning’ (combining face-to-face and online learning in various proportions), and to examine the feasibility and usability of such an approach in practice.

Using new technologies offers new possibilities to cater for individual learner needs. For instance, learners can now receive personalised and timed feedback whenever they demand it (Sales & Williams, 1988). Besides new technologies, new theories about learning demand a reconsideration of feedback’s role. For instance, within competence-based education the emphasis on corrective feedback on learning products shifts towards an emphasis on cognitive feedback on learning processes (Balzer, Doherty, & O’Connor, 1989). Feedback research over the last decades (for a review see Mory, 2003) has delivered many models, some of which (Butler & Winne, 1995; Harasim et al., 1995) stress feedback’s role in fostering self-regulation. How we should implement such models in concrete (blended learning) courses, however, largely remains unresolved.

Feedback Model: Introduction

We aim for a feedback model that provides a usable, stepwise procedure containing concrete questions and guidelines to support the design of feedback in blended learning courses. The next, third section of this article describes the six steps of the model. The fourth section (methods) describes the two rounds of validation and testing that we carried out with draft versions of the model: a pilot test in which teachers were asked to apply and comment on the usability of the model, and an expert appraisal in which experts were asked to comment on the usability and quality of the model. After presenting the results from both validations, we conclude this article with some recommendations for improvement and future research.

6P/ FB-model

Our six-phase model for designing feedback in blended learning (6P/ FB-model) provides a procedure or ‘design scheme’ for selecting adequate content and forms of feedback in such courses. It is structured around six phases (or steps) that aim to support:

    1. Definition of concrete functions of feedback

    2. Determination of a desirable course of action when providing feedback

    3. Consideration of various situational aspects

    4. Application of important principles and practical guidelines

    5. Selection of possible forms and organisation of feedback

    6. Answering of some final, leading questions.

Figure 1. Six-phased feedback model (6P/ FB) for designing feedback in blended learning

Designing feedback is itself a complex problem-solving task with many possible solutions. Therefore, information on procedures and guidelines for designing feedback should take the form of process-oriented cueing (or task-valid cognitive feedback) to facilitate such schema-based design tasks. Each phase of the 6P/ FB-model provides general strategies and heuristics, mainly in the form of leading questions to be answered (Table 1 contains the main leading questions for each phase) and criteria to be addressed for specific situations. Such questions and criteria enable users to deduce a specific solution for their situation. Each step of the model contains various examples to illustrate possible answers to the questions, and the model also contains an integrated Worked Example of feedback designed according to the procedure of the model. A phased approach implies that certain design decisions can only be made once others have been settled. In practice this neither means a strict sequence nor will it suffice to follow the steps just once. The design process may need many iterations, in which phases build on each other but might also be taken in parallel.
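
To give an impression of how a designer might keep track of this iterative, phased process, the following sketch (in Python) represents the six phases as a simple design log. It is purely illustrative: the phase titles come from the list above, but the recorded answers and the idea of logging them this way are our own hypothetical additions, not part of the model documentation.

```python
# Illustrative sketch only: the six phases of the 6P/FB-model as a checklist,
# recording the designer's (hypothetical) answers per iteration.
phases = [
    "Define functions of feedback",
    "Determine course of action when providing feedback",
    "Consider various situational aspects",
    "Apply important principles and practical guidelines",
    "Decide on possible forms and organisation of feedback",
    "Answer some final, leading questions",
]

design_log = {phase: [] for phase in phases}

def record(phase, answer):
    """Append an answer for a phase; phases may be revisited across iterations."""
    design_log[phase].append(answer)

# Phases can be revisited and need not be handled in strict sequence.
record(phases[0], "determining the causes of errors in draft pleas")
record(phases[2], "large student numbers, LMS available, peer feedback possible")
record(phases[0], "also: providing criteria before assignment preparation")

for phase, answers in design_log.items():
    print(f"{phase}: {answers}")
```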

Underlying Theories

Based on our institute’s experiences in designing feedback for learning materials in both distance and regular education, a theoretically rather eclectic model emerged. Some of the phases (Phases 1, 2, and 4) stress feedback’s role as a controlling mechanism, considering the learner as a system to be steered externally. Such ideas lean strongly on system theory (Newell & Simon, 1972; Kramer & Smit, 1987; Roossink, 1990), applied to an educational context. Mechanisms such as measurement, diagnosis, and intervention are conceived as the means by which the (learning) system is controlled.

While this rather objectivist approach stresses the importance of monitoring and error correction in direct interaction between learners and teachers, we also included other situational elements to broaden this scope. Such elements can be found in Phases 3, 5, and 6 of the model. For instance, Phase 3 stresses the importance of considering more constructivist, process-oriented, and adaptable forms of feedback (Land, 2001; Sales & Williams, 1988), and other ways of mediating feedback, such as CSCL (Dillenbourg, 1996), peer feedback (Prins, Sluijsmans, & Kirschner, in press; Sluijsmans, 2002; Topping, 1998), or internal steering and self-regulation (Harasim et al., 1995; Butler & Winne, 1995). New approaches to learning stress that learners can, to a large extent, carry out the measurement and diagnosis themselves and monitor and steer their own learning progress (i.e., internally), provided that adequate process-oriented feedback is available.

Table 1. Main questions of the six-phase feedback model (6P/ FB-model)

Feedback Model: Short description of phases

The actual documentation describing the 6P/ FB-model covers over 50 pages and includes various figures, tables, and examples that illustrate the procedure and leading questions at a practical level. Within this article we have to limit ourselves to a short description of each phase, and can barely scratch the surface of the questions involved.

Phase 1: Define functions of feedback

This first phase stresses the distinction between functions and means. Important functions of feedback are: orientation (on the task); controlling / stimulating the problem-solving process (measuring); determining the (most important) errors made during problem solving; determining the causes of errors; providing criteria; and providing adequate interventions (e.g., prompts or hints, corrective feedback / error messages, cognitive feedback). To realise the function ‘determining the cause of an error,’ various means can be used, such as telephone consultation (students can discuss their tasks with teachers), an interactive learning programme containing embedded feedback, or an electronic learning environment (students can share and discuss their tasks with peers, with teachers only responding when needed).

Phase 2: Determine course of action when providing feedback

For effective steering to occur, a specific course of action needs to be followed when providing feedback: measurement (obtaining information from the system); diagnosis (comparing this information with certain criteria or norms); and intervention (selecting and providing adequate interventions, e.g., cognitive feedback to improve the process). This phase relates the feedback functions (from Phase 1) to the required course of action for providing feedback. It also stresses that possible learner approaches, types of content, and actions need to be determined first in order to draw up a scheme of possible errors and of adequate feedback to address them. It would go beyond the scope of this article to treat the various controlling mechanisms entailed in system theory (Newell & Simon, 1972; Kramer & Smit, 1987; Roossink, 1990).
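
To make this course of action more tangible, the following minimal sketch walks once through the measurement-diagnosis-intervention cycle. All names (the submission attributes, criteria, and feedback messages) are hypothetical illustrations of the cycle, not definitions taken from the model documentation; the plea example merely echoes the law-course example used earlier.

```python
from dataclasses import dataclass

@dataclass
class Submission:
    """Hypothetical record of a learner's work on one assignment."""
    task_id: str
    cites_case_law: bool
    structured_argument: bool

def measure(submission: Submission) -> dict:
    """Measurement: obtain the information needed for a diagnosis."""
    return {"cites_case_law": submission.cites_case_law,
            "structured_argument": submission.structured_argument}

def diagnose(measurement: dict, criteria: dict) -> list:
    """Diagnosis: compare the measurement with the criteria or norms."""
    return [name for name, required in criteria.items()
            if measurement.get(name) != required]

def intervene(errors: list) -> str:
    """Intervention: select an adequate (here: canned) feedback message."""
    if not errors:
        return "Well done: your plea meets all criteria."
    return "Please reconsider the following aspects: " + ", ".join(errors)

# One pass through the measurement-diagnosis-intervention cycle.
criteria = {"cites_case_law": True, "structured_argument": True}
work = Submission(task_id="plea-1", cites_case_law=True, structured_argument=False)
print(intervene(diagnose(measure(work), criteria)))
```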

Phase 3: Consider various situational aspects

Besides the type of learning processes (e.g., memorising simple facts and figures versus acquiring complex problem solving skills), various situational aspects will further determine most adequate feedback:

  • Uniformity – An important group of aspects relates to the extent to which feedback can be provided in a uniform fashion to all students. Can feedback be designed in advance and embedded in the learning materials, providing more or less automated support? Should each student receive more or less tailored feedback, and be able to influence the appearance of that feedback? When tasks are relatively simple, feedback can mostly be designed in advance in a uniform fashion.

  • Allocation – Feedback can be provided either by persons (in various roles, such as teacher or peer) or by computers, depending on the availability and efficiency of such resources. Feedback can be provided on demand of the learner (e.g., through a newsgroup in the LMS), or when the course provider sees fit (e.g., by adding information to a FAQ in the LMS). When a group of students faces similar problems, (uniform) feedback can be provided through an LMS with peer feedback. Where individual differences exist, personalised feedback will be required.

  • Numbers – Evidently, the number of students enrolled in a specific course may limit the available time for personally provided feedback; alternatives for providing feedback more efficiently will then have to be conceived. When possible, students should work together, supported by peers or the LMS (Prins, Sluijsmans, & Kirschner, in press; Sluijsmans, 2002; Topping, 1998).

  • Timing – Sometimes learners might want to control not just the appearance of feedback, but also the moment of its delivery. Will feedback be provided at fixed moments or when learners demand it, just-in-time? For instance, procedural information about general problem-solving strategies is best offered just-in-time, while more specific information can best be offered in advance (Kay, 2001; Kester, 2003).

  • Orientation – We already mentioned that the role of feedback has shifted from product-oriented corrective feedback towards process-oriented cognitive feedback. When feedback also contains information about the problem-solving process (and the various factors involved in reaching an adequate solution), the information density automatically increases. Higher-order learning mostly requires process-oriented feedback.

  • Information Density – Related to the previous aspect, feedback can be rich or poor in information, ranging from a simple ‘true or false’ to elaborate information. When solving complex problems, learners will require both abstract and concrete, and both product-oriented and process-oriented, types of feedback. Figure 2 presents some basic formats of cognitive feedback mapped on these dimensions (Hummel & Nadolski, 2002). Worked Examples can be offered when students need to apply this feedback to similar problems; Process Worksheets can be offered when students need to apply feedback to different problems.

  • Technology – We have to consider the availability and added value of various new learning technologies to realise the required functions of feedback.

Figure 2. Four basic formats for cognitive feedback

 

Phase 4: Apply important principles and practical guidelines

A number of feedback principles were derived from system theory: diagnosis should be process-based; feedback should contain information about both procedures and content; diagnosis should include both actual and prior performance; diagnosis should be of sufficient quality (reliable, valid, representative); criteria should be measurable; feedback should be aimed at both correction and stimulation of learning processes; and feedback should foster a maximum amount of independence and self-guidance in students. Besides such general principles, a list of practical guidelines for the concrete elaboration of feedback content is provided. Feedback should be based on concrete performance (and not on judgements of behaviour); not be too abstract or too concrete; be formulated in a positive and stimulating fashion; address both positive and negative points; be explicit (and not ambiguous); etc. (e.g., Dirkx & Koopmans, 2000).

Phase 5: Decide on possible forms and organisation of feedback

After determining the functions, the course of action, and the various situational aspects that need to be addressed, this step focuses on the actual form and organisation of adequate feedback. How can we ensure that the delivery of feedback suits the needs of students? Where will they have access to this feedback? Which persons or facilities will be responsible for providing and maintaining such feedback? In a ‘function-means matrix’ the most important functions (based on the analysis in Phases 1 and 2) are now matched to concrete forms (based on the analysis in Phases 3 and 4). For each function there may still be several possible forms of realisation. Our model contains a preliminary list of such forms.
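
As an illustration only, a function-means matrix could be represented as a simple mapping from feedback functions to candidate forms of realisation. The function names below echo the examples in Phases 1 and 3; the listed forms are hypothetical placeholders, not the model's own preliminary list.

```python
# Hypothetical sketch of a function-means matrix: each feedback function (row)
# is mapped to the forms of realisation considered suitable for it.
function_means_matrix = {
    "orientation on task":            ["process worksheet", "embedded feedback in materials"],
    "determining errors":             ["automated check in LMS", "teacher consultation"],
    "determining causes of errors":   ["telephone consultation", "peer discussion in LMS"],
    "providing interventions":        ["worked example", "FAQ in LMS", "personal tutoring"],
}

def forms_for(function: str) -> list:
    """Return the candidate forms of realisation for a given feedback function."""
    return function_means_matrix.get(function, [])

print(forms_for("determining causes of errors"))
```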

Phase 6: Answer some final, leading questions

As a final check on the feedback forms that have now been selected and designed, a number of final questions have to be considered (preliminary decisions on these were already made in Phase 3):

  • Can feedback be designed in advance? (also related to the aspects of ‘uniformity’ and ‘information density’)

  • Is personal contact needed? (related to the aspects of ‘allocation’ and ‘technology’)

  • Does contact have to be synchronous? (related to the aspect of ‘timing’)

  • Does contact have to be face-to-face? (also related to the aspects of ‘allocation’ and ‘technology’)

  • Do students need contact with teachers?

Feedback can be made less labour-intensive by using an LMS to monitor progress on assignments, by using peer feedback to offload teachers and tutors, or by using automated support to handle the most common problems. Still, teachers are expected to remain important providers of feedback, especially when the diagnosis of highly specialised or complex problem-solving behaviour cannot be catered for by computers.

Methods

Preliminary test results about the usability and quality of the first two versions of the 6P/ FB-model were collected on two occasions. A Beta-release was tested in a small pilot, during which a group of teachers applied the model. A pre-release version was surveyed by means of a questionnaire that experts used to rate the usability and quality of the model.

We present some simple descriptive statistics to indicate the appreciation of the model. Additionally, for the survey, we carried out an analysis of variance to check for possible differences in appreciation across various types of higher education institutions. Some qualitative analysis was carried out on the comments made by participants.

Participants in pilot test

Two teachers from each of the two Dutch universities (Open University and University of Twente) that developed the model (n = 4) participated in a first pilot. They were asked to apply the model to their courses in Active Learning and Applied Communications, respectively, representing various ‘blends’ of distance education (self-guided study in combination with an LMS) and regular education (a combination of classes, practicals, and working groups).

Materials for pilot test

Teachers used the Beta-release of the model. A questionnaire with 20 items was used to rate the quality and usability of each of the phases and of the overall model (see the Results section for an overview of the items). The questionnaire contained 15 closed questions, which had to be scored on five-point Likert-type scales, and five open questions for providing background information and general comments.

Procedure for pilot test

Teachers were sent the document containing the Beta-release of the model by email about two weeks before the date of the pilot. They were asked to study the model, to record all questions and comments they had about it, and to select a representative portion of theory still requiring adequate feedback. The pilot sessions with each teacher were led by one of the model developers and lasted about two hours each. At the start of the session, participants filled in the questionnaire. The session leader then tried to clarify their questions. Participants were asked to apply the model to the selected portion of their own course, working for about a quarter of an hour on each phase. Results for each step had to be recorded on paper. Actions were observed, and questions and remarks about unclear points were recorded by the session leaders. At the end of the session, participants were asked to fill in the questionnaire again, in order to capture changes in appreciation of the model after actually using it.

Participants in expert appraisal

We tried to compose a representative sample of experts from the field of higher education by including staff members from universities (18 members, divided over four universities), polytechnics (16 members, divided over six institutions), and educational research bureaus (six members, divided over five institutions); 14 experts were female and 26 were male. Complete questionnaires were received back from 22 experts (a response rate of 55%), of whom eight were female and 14 were male, divided over the (types of) institutions (n = 9, 10, and 3, respectively).

Materials for expert appraisal

Experts used a pre-release of the model, in which comments from the pilot had been addressed where possible. The same 20-item questionnaire was used to rate the quality and usability of each of the phases and of the overall model (see the Results section for the titles of the items).

Procedure for expert appraisal

Drawing on the authors’ personal contact lists, we approached 40 national experts in the educational field who had led projects dealing with issues around feedback (in blended learning). They were approached by email and informed of the aims and intended planning of the survey, as well as of the amount of time they would be expected to devote to it (about two hours). About two weeks later they received the material and were allowed two more weeks to study it and return the completed questionnaire. After one reminder by email, eventually 22 complete questionnaires could be processed.

Results

Pilot test

Table 2 contains an overview of the average scores before and after the pilot sessions. Because we are dealing with a very small sample of participants, these results can only be taken as a first impression. All closed questions (3-17) had to be scored on a Likert-type scale ranging from 1 [very unclear] to 5 [very clear], with the exception of questions 15 and 16, which had to be scored from 1 [very low] to 5 [very high], and question 17, which had to be scored from 1 [strongly disagree] to 5 [strongly agree].

Table 2. Scores on closed items questionnaire (before and after pilot test sessions) (n = 4)

Qualitative analysis showed that the Beta-release still contained many conceptual unclarities (e.g., about subsuming feedforward and feedthrough under the overall concept of feedback) and some inconsistencies in the procedure (e.g., about relations between phases). In general, the more theoretical parts of the model (e.g., about system theory and the 4C/ ID-model in Phases 1 and 2) were considered too abstract and in need of more practical illustrations. Participants expressed mixed opinions about the usability and quality of this version of the model, with average overall appreciation ranging from M = 3.0 to M = 3.5 before the pilot and from M = 2.2 to M = 3.8 after it. OUNL teachers (distance education) were on average less positive after the pilot, while UT teachers (regular education) had become more positive after applying the model.

Expert appraisal

Table 3 contains an overview of the average scores that experts awarded to the items of the questionnaire.

Table 3. Scores on closed items questionnaire by expert appraisal (n = 22)

Again, because we are dealing with a relatively small sample of participants, these preliminary results can only be taken as a second impression. The pre-release version of the model was awarded an average overall appreciation of M = 3.26 (SD = .49), which is (only) slightly higher than the appreciation of the Beta-release. The overall appreciation of these fifteen items by the 22 participants showed individual averages ranging between M = 1.9 and M = 4.4. Only three participants, however, awarded an overall score of less than 3.0 points (which could be interpreted as ‘insufficient’), with averages of 1.9, 2.1, and 2.9, respectively. Items 6 (description of Phase 2) and 17 (recommending the model to colleagues) in particular remained problematic.

When controlling for a possible effect of type of institution (polytechnic, university, or educational research bureau) on the appreciation of the model, we found no differences in the average scores on the (general) items 15, 16, and 17. We did find a difference approaching significance in the average scores (3.9, 3.1, and 3.1, respectively) for the (perceived) quality of the model (F(2, 19) = 2.91, MSE = 8.11, p = .088).
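
For readers who wish to carry out this kind of check on their own appreciation data, the following sketch shows how per-group means and a one-way analysis of variance could be computed. The score vectors below are placeholders sized to the reported group sizes (n = 9, 10, and 3); they are not the data collected in this survey, and the resulting statistics will differ from those reported above.

```python
import numpy as np
from scipy import stats

# Placeholder appreciation scores (1-5 Likert) per type of institution;
# illustrative values only, not the survey data reported in this article.
universities = np.array([3.9, 4.1, 3.6, 4.0, 3.8, 4.2, 3.7, 3.9, 4.0])
polytechnics = np.array([3.2, 2.9, 3.3, 3.0, 3.1, 3.4, 2.8, 3.2, 3.0, 3.1])
bureaus = np.array([3.0, 3.2, 3.1])

# Descriptive statistics per group.
for name, scores in [("universities", universities),
                     ("polytechnics", polytechnics),
                     ("research bureaus", bureaus)]:
    print(f"{name}: M = {scores.mean():.2f}, SD = {scores.std(ddof=1):.2f}")

# One-way ANOVA across the three groups (analogous to the F-test reported above).
f_value, p_value = stats.f_oneway(universities, polytechnics, bureaus)
print(f"F = {f_value:.2f}, p = {p_value:.3f}")
```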

Clustering the qualitative comments leads to the following list of improvements: clarifying how the model addresses all levels of learning goals; elaborating the description of concrete forms of feedback; making the description more compact by using more summaries and schemes; further limiting the theoretical contributions in the text; further clarifying what the ‘products’ of each phase should be; giving more attention to the contribution of peer feedback; and some final restructuring and proofreading.

Conclusions and Recommendations

We found mixed appreciations of the Beta-release of the model during the small pilot test. Such differences can largely be explained by error due to the very small number and specific selection of participants. A difference in appreciation of the usability between teachers from a distance and a regular institution, however, could also be partly explained by differences in educational model. Differences in appreciation before and after the pilot test might also be partly explained by the role the session leaders played in explaining the model, which might have (further) increased or decreased the appreciation of the model. It is therefore necessary to further examine the extent to which the 6P/ FB-model can be used independently, or whether users will need additional training or support by more experienced designers of feedback.

A more extensive survey by means of an expert appraisal revealed that the overall quality and usability of the model were rated as sufficient. The model was considered to offer a comprehensive and valuable approach to the problem. Experts also found, however, that they could not yet recommend the model in its present form. Experts especially criticised the current presentation and advised us to make a more compact and practical version for teachers to use in the future.

We feel that the main challenge in improving future versions of the model will be to make the procedures more accessible and applicable. Layering the information, providing easy-to-use templates and more elaborate matrices of functions and forms of feedback, and using the model in combination with trainers who provide personal support might all contribute to solving this issue, and therefore need to be explored further.

Projects that build on this explorative study should at least include more extensive pilot testing, covering other domains and institutions and allowing users to comment further on the usability of the model. Further training of teachers in designing feedback is considered a necessity by our group of experts. Some of them proposed organising workshops or training programmes around the topic of designing adequate feedback for blended learning, using the model as part of the training.

Acknowledgement

This study was carried out in the context of an explorative project funded by the Digital University consortium, a group of universities and polytechnics working on the innovation of higher education in the Netherlands and Flanders (project 5087, Feedback as an instrument to support blended learning: development of a model). The authors would like to thank the participants in the usability pilot and the expert appraisal.

References

Atkinson, R. K., Derry, S. J., Renkl, A., & Wortham, D. (2000). Learning from Examples: Instructional principles from the worked examples research. Review of Educational Research, 70(2), 181-214.

Balzer, W. K., Doherty, M. E., & O'Connor, R. (1989). Effects of cognitive feedback on performance. Psychological Bulletin, 106, 410-433.

Butler, D. L., & Winne, P. H. (1995). Feedback as Self-Regulated Learning: A theoretical synthesis. Review of Educational Research, 65, 245-281.

Chi, M. T. H., Siler, S. A., Jeong, H., Yamauchi, T., & Hausman, R. G. (2001). Learning from human tutoring. Cognitive Science, 25, 471-533.

Dillenbourg, P. (1996). The evolution of research on collaborative learning. In E. Spada & P. Reimann (Eds.) Learning in Humans and Machines: Towards an interdisciplinary learning science (pp. 189-211). Oxford: Pergamon.

Dirkx, C., & Koopmans, M. (2000). Feedback: Commentaar geven en ontvangen [Feedback: giving and taking comments]. Zaltbommel, The Netherlands: Thema.

Hannafin, M., Land, S., & Oliver, K. (1999). Open Learning Environments: Foundations, Methods, and Models. In C. M. Reigeluth (Ed.) Instructional-Design Theories and Models: A new Paradigm of Instructional Theory, Volume II (pp. 115-140). Mahwah, NJ.: Lawrence Erlbaum.

Harasim, L., Hiltz, S., Teles, L., & Turoff, M. (1995). Learning Networks: A field guide to teaching and learning online. Cambridge: MIT Press.

Hummel, H. G. K., & Nadolski, R. J. (2002). Cueing for Schema Construction: Designing problem-solving multimedia practicals. Contemporary Educational Psychology, 27(2), 229-249.

Hummel, H. G. K., Paas, F., & Koper, E. J. R. (2004). Cueing for Transfer in Multimedia Programmes: Process-worksheets versus Worked-out examples. Journal of Computer Assisted Learning, 20(5), 387-397.

Jonassen, D. H. (1999). Designing constructivist learning environments. In C. M. Reigeluth (Ed.) Instructional-Design Theories and Models: A new paradigm of instructional theory, Volume II (pp. 215-239). Mahwah, NJ.: Lawrence Erlbaum.

Kay, J. (2001). Learner control. User Modeling and User-Adapted Instruction, 11, 111-127.

Kester, L. (2003). Timing of information presentation and the acquisition of complex skills. Unpublished doctoral thesis. Heerlen, The Netherlands: Open University of the Netherlands.

Kramer, N. J. T. A., & de Smit, J. (1987). System thinking. Leiden, The Netherlands: Stenfert Kroese.

Land, S. M. (2000). Cognitive requirements for learning with open-ended learning environments. Educational Technology Research and Development, 48, 61-78.

Ley, K., & Young, D. B. (2001). Instructional principles for self-regulation. Educational Technology Research and Development, 49(2), 93-103.

Mevarech, Z. R., & Kramarski, B. (2003). The effects of metacognitive training versus worked-out examples on students’ mathematical reasoning. British Journal of Educational Psychology, 73, 449-471.

Mory, E. H. (2003). Feedback research. In D. H. Jonassen (Ed.) Handbook of research for educational communications and technology (pp. 745-783). New York: MacMillan Library Reference.

Narciss, S. (1999). Individual differences in learning and motivation with informative feedback. Paper presented at the EARLI conference, August 1999, Göteborg.

Nelson, L. M. (1999). Collaborative problem solving. In C. M. Reigeluth (Ed.) Instructional Design Theories and Models: A new paradigm of instructional theory (pp. 241-267). Hillsdale, NJ.: Lawrence Erlbaum.

Newell, A., & Simon, H. A. (1972). Human problem solving. Englewood Cliffs, NJ.: Prentice Hall.

Pellone, G. (1991). Learning theories and computers in TAFE education. Australian Journal of Educational Technology, 7(1), 39-47.

Prins, F. J., Sluijsmans, D. M. A., & Kirschner, P. A. (in press). Feedback for General Practitioners in Training: Quality, styles and preferences. Advances in Health Sciences Education.

Renkl, A. (2002). Worked-Out Examples: Instructional explanations support learning by self-explanations. Learning and Instruction, 12, 529-556.

Roossink, H. J. (1990). Terugkoppelen in het natuurwetenschappelijk onderwijs: een model voor de docent [Feedback in Science Education: A model for teachers]. Unpublished doctoral thesis. University of Twente, The Netherlands.

Sales, G. C., & Williams, M. D. (1988). The effects of adaptive control of feedback in computer-based instruction. Journal of Research on Computing in Education, 97-111.

Sluijsmans, D. (2002). Student Involvement in Assessment: The training of peer assessment skills. Unpublished doctoral thesis. Heerlen, The Netherlands: Open University of the Netherlands.

Topping, K. (1998). Peer-assessment between students in colleges and universities. Review of Educational Research, 68, 249-276.

Van Eijl, P. J., Wils, S. A. M., Supheert, R., Kager, R., Bruins, W., & Admiraal, W. A. (2004). Peer Feedback en ‘blended learning’ voor schrijfonderwijs bij Engels: effectief maar ook voldoende? [Peer Feedback and ‘Blended Learning’ of English Writing: Effective, but also sufficient?] Paper presented at the Onderwijs Research Dagen 2004: Utrecht, The Netherlands.

Van Merriënboer, J. J. G. (1997). Training complex cognitive skills. Englewood Cliffs, NJ.: Educational Technology Publications.

Wiley, D. A., & Edwards, E. K. (2003). Online Self-Organizing Social Systems: The decentralized future of online learning. Retrieved March 12, 2004, from: http://wiley.ed.usu.edu/docs/ososs.pdf




PID: http://hdl.handle.net/10515/sy52j68h8



ISSN: 1492-3831