
December – 2012

Searching for and Positioning of Contextualized Learning Objects


Silvia Baldiris1, Sabine Graf2, Ramon Fabregat1, and Nestor Darío Duque Méndez3
1University of Girona, Spain, 2Athabasca University, Canada, 3National University of Colombia, Colombia

Abstract

Learning object economies are marketplaces for the sharing and reuse of learning objects (LO). There are many motivations for stimulating the development of the LO economy. The main one is the possibility of providing the right content, at the right time, to the right learner according to adequate quality standards in the context of a lifelong learning process; this is also the main objective of education. However, some barriers to the development of a LO economy, such as the granularity and editability of LO, must be overcome, and some enablers, such as learning design generation and standards usage, must be promoted. In this article, we introduce the integration of distributed learning object repositories (DLOR) as sources of LO that can be placed in adaptive learning designs to assist teachers’ design work. Two main issues arise: how to access distributed LO and where to place each LO in the learning design. To address these issues, we introduce two processes: LORSE, a distributed LO searching process, and LOOK, a micro-context-based positioning process. Using these processes, teachers were able to reuse LO from different sources to semi-automatically generate an adaptive learning design without leaving their virtual environment. A layered evaluation yielded good results for the process of placing learning objects from controlled learning object repositories into a learning design and allowed educators to identify open issues that must be addressed when uncontrolled learning object repositories are used for this purpose. We also verified users’ satisfaction with our solution.

Keywords: Learning design; learning objects economy; micro-context; similarity measures; word sense disambiguation

Introduction

Basic Concepts of Learning Objects Economy

Over the years, the concept of the learning object (LO) has been considered by many diverse and qualified people. The IEEE Learning Technology Standards Committee (LTSC, 2009), in its work on the Learning Object Metadata standard (2002), defined a learning object as any entity, digital or non-digital, that may be used for learning, education, or training. Such a definition categorizes almost everything as a learning object; even so, not just anything is one. According to Polsani (2005), a LO needs to be accessible, reusable, and interoperable while also being intended for a learning process.

Wiley (2000) reinforced the concept of reuse by introducing the definition for “object” from the object-oriented programming paradigm of computer science, where it is understood as a component that can be reused in multiple contexts. In this manner, a learning object is presented as a small instructional component that can be reused in different learning contexts when required. This definition is important to us because our study is based on the learning object economy (Duncan, 2004), where reuse is a key aspect.

Learning object economies are marketplaces for the sharing and reuse of LO. As in any economy, different actors play different roles. Ochoa (2008) identifies eight actors: market-makers, authors, resellers, publishers, teachers, end-users, assemblers, and regulators. Market-makers are researchers and trainers who provide support for LO interchanges with learning object repositories (LOR), open courseware sites, and learning object technologies. Authors, such as teachers or learning designers, are LO creators. Resellers are those who have acquired the rights to exploit LO, for example, universities or private companies. Publishers put together and publish LO. Teachers use the LO for instructional purposes. End-users use LO for learning. Assemblers reuse small LO to construct more complex LO. Finally, regulators set the rules by which the sharing takes place.

Barriers to Assembling a Learning Object Economy

Offering a learning process that is available to all is a key motivation for stimulating the development of the learning object economy. However, for this to become a reality, some barriers in the learning object economy must be overcome, as shown by Duncan (2004).

There are two main technical barriers to reusing LO: granularity and editability. Granularity refers to how complex a learning object should be. Wiley (2000) introduced two different viewpoints for deciding this: an efficiency and an instructional point of view. From the efficiency point of view, Wiley indicates that the decision regarding learning object granularity can be viewed as a trade-off: The possible benefits of reuse come at the expense of cataloguing. Conversely, from the instructional point of view, the major issues are the scope and sequence of the learning design.

Editability is important because any aspect of a learning object can be changed if it is available in a suitable form. If a LO is editable, its granularity can be modified. Many distributed LO are not editable; in fact, this is one of the most common reasons teachers give for not reusing LO.

Counting on editable and open LO requires agreements among the LO economy actors. In particular, adequate author rights management would increase authors’ confidence in distributing editable and open content. Implementing authoring tools that support LO editability, and that address accessibility issues in the content, is one of the most important requirements for the successful establishment of this economy.

Barriers from the pedagogical view are basically related to the LO context. According to Dey and Abowd (2000), context is defined as any information that can be used to characterize the situation of an entity—in this case, the LO. Context in education is essential, but in practice, incorporating context in LO inhibits reuse. Addressing the context issue would allow instructors to use LO in different scenarios: Small granularity limits the amount of embedded context, and LO editability allows teachers to contextualize the LO according to the learners’ needs.

Enablers of the Learning Object Economy

Along with the barriers, some enablers must be promoted in order to develop the learning object economy: learning design generation and standards promotion.

Learning design generation.

Learning design is a term coined by a pedagogical movement calling for more consistent approaches to describing and documenting teaching practices in order to facilitate communication and sharing while also improving teaching practice. However, there is currently no standard definition of learning design (Koper & Yongwu, 2009). A well-accepted definition of the instructional design process is simple: the process that teachers should follow in order to plan and prepare instruction (Reigeluth, 1999). This process should address people’s cognitive, emotional, social, and physical needs in an integral way. Given that LO are only content, they need to be administered properly to foster real learning experiences.

Adequate pedagogical theories and techniques need to be in place in order to insure that the LO have real impact (Koper & Yongwu, 2009).

Automatic learning design generation is an important topic in the research areas of adaptive learning systems and technology-enhanced learning. Several researchers (Duque Méndez, Ovalle Carranza, & Jiménez Builes, 2002; Morales, Castillo, & Fernández-Olivares, 2009; Ullrich & Melis, 2009; Karampiperis & Sampson, 2006; Hernández et al., 2009; Baldiris, Graf, & Fabregat, 2011) have proposed approaches to help teachers generate learning designs adjusted to user characteristics such as learning styles and competences. This is not an easy task, particularly for teachers: They need to know the different instructional theories, they must be able to control the different user variables in the learning design construction (such as learning styles and competencies), and they need to know how to develop standardized learning designs for the specific learning platform they use. Besides the personalization problem, another important issue for learning design generation is how to place learning objects from different learning object repositories into the generated designs.

Standards promotion.

If a global learning object economy is the goal, there must be common standards that every party agrees with to enable LO-sharing among heterogeneous systems (Ochoa, 2008). Important organizations and groups such as the IEEE Learning Technology Standards Committee (LTSC, 2009), the IMS Global Learning Consortium (n.d.), and the Dublin Core Metadata Initiative (n.d.) among others have proposed approaches for learning object standardization. Almost all elements, actors, and subprocesses of the educational process have been standardized. Baldiris, Santos, Fabregat, Jesus, and Boticario (2007) present an analysis of the different standards that have been accepted and validated internationally and the organizations involved in their creation.

Contributions and Outline of the Paper

In this paper, we aim to stimulate the enablers of the learning object economy to support the generation of standardized and adapted learning designs. Our investigation promotes LO reuse by encouraging instructors to access distributed learning object repositories (DLOR) as sources of LO with diverse granularity that could be elements in a generated learning design. Our proposal consists of two different parts: the distributed learning object metadata searching process (LORSE) and the micro-context-based positioning process (LOOK).

The distributed learning object metadata searching process is a mechanism to promote reuse. It is supported by agent technologies, and its main purpose is to look for external LO, not developed by the teachers themselves, that could be used as inputs in a learning design generation process. The micro-context-based positioning process analyzes a learning object’s current micro-context (in the LOR) and its possible future micro-contexts (in the learning design), using disambiguation techniques to establish the most promising micro-context for the LO in a learning design, and supports the placement of the object in its correct context.

The rest of this article is structured as follows. In Section 2, we introduce the distributed learning object metadata searching process. The third section describes the micro-context-based positioning process. In the fourth section we present the results of a layered evaluation. Finally, in the fifth section, we draw some conclusions and comment on future research.

Section 2

LORSE: A Metadata Searcher of Open Learning Objects in Distributed Learning Repositories Based on Intelligent Agents

In order to facilitate the distributed learning object metadata search process, we developed LORSE, a distributed learning object metadata searcher, to promote reuse in the learning object economy. With LORSE, teachers, students, and external institutions can search in different learning object repositories using a unified interface. At the implementation level, LORSE (Baldiris, Bacca, Noguera Rojas, Guevara, & Fabregat, 2011) has been modelled as an independent set of JADE intelligent agents that collaborate to support users in the LO search process.

LORSE consists of two different types of agents: the directory facilitator agent and the specific search agent. The main purpose of this multiagent platform (Figure 1) is to deliver the most suitable LO according to the parameters provided by the user in a specific query.

The directory facilitator agent maintains a directory of tuples, where each tuple relates one specific search service in a LOR to one specific search agent. Each specific search agent registers its service with the directory facilitator agent and processes the requested services. When an external process needs to request a particular service on the platform, it must first ask the directory facilitator agent for the identifier of the agent in charge of that service. Specific search agents implement particular web clients that request search services in particular repositories. In Baldiris et al. (2011), we introduced an example of this application with three repositories (Merlot, Connexions, and UdG). In this article, we introduce an extension of LORSE that includes six additional services: DalSpace, Deep Blue, DLESE, ARIADNE, SMETE, and GATEWAY. The extended architecture of LORSE is shown in Figure 1.

Figure 1
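To make the registration step concrete, here is a minimal sketch (assuming the JADE framework) of how a specific search agent might publish its repository search service with the directory facilitator on startup. The service type and name strings are illustrative assumptions, not the actual LORSE identifiers.

```java
import jade.core.Agent;
import jade.domain.DFService;
import jade.domain.FIPAException;
import jade.domain.FIPAAgentManagement.DFAgentDescription;
import jade.domain.FIPAAgentManagement.ServiceDescription;

// Hypothetical specific search agent: on startup it registers one tuple
// (service -> agent) with the JADE directory facilitator so that other
// agents and external processes can locate it.
public class MerlotSearchAgent extends Agent {
    @Override
    protected void setup() {
        DFAgentDescription dfd = new DFAgentDescription();
        dfd.setName(getAID());
        ServiceDescription sd = new ServiceDescription();
        sd.setType("lo-search");       // service category (assumed name)
        sd.setName("merlot-search");   // the service this agent is in charge of
        dfd.addServices(sd);
        try {
            DFService.register(this, dfd); // publish the tuple in the DF
        } catch (FIPAException e) {
            e.printStackTrace();
        }
    }
}
```

An external process would then query the directory facilitator (via DFService.search with a matching ServiceDescription) to obtain the identifier of the agent in charge of a given service before sending it a request.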

When the Merlot agent (the specific search agent in charge of integration with the Merlot repository) is born, the Merlot search service is registered with the directory facilitator agent so that other agents or processes can locate it and send requests to it. The Merlot agent is activated when a search request is received. It implements a particular behavior: a client for the RESTful web service offered by the Merlot repository. When a request is sent to the agent, the agent connects to the service according to the terms and conditions of the query, sends the corresponding parameters, and obtains a response as an XML document (metadata). The implementations of the Connexions and UdG agents are similar to that of the Merlot agent; they have behaviors designed to interact with the RESTful web services offered by these applications.
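As an illustration of such a behavior, the following sketch shows a cyclic JADE behaviour that answers each incoming search request by calling a RESTful service and replying with the returned XML metadata. The endpoint URL and query parameter are placeholders, not Merlot’s actual API.

```java
import jade.core.behaviours.CyclicBehaviour;
import jade.lang.acl.ACLMessage;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.util.Scanner;

// Sketch of a REST-client behaviour: wait for a search request, forward
// the query terms to the repository's web service, and reply with the
// XML metadata document obtained from it.
class RestSearchBehaviour extends CyclicBehaviour {
    private static final String ENDPOINT = "https://example.org/merlot/materials"; // placeholder

    @Override
    public void action() {
        ACLMessage request = myAgent.receive();
        if (request == null) { block(); return; } // sleep until a message arrives
        ACLMessage reply = request.createReply();
        try {
            String query = URLEncoder.encode(request.getContent(), "UTF-8");
            URL url = new URL(ENDPOINT + "?keywords=" + query);
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            String xml = new Scanner(conn.getInputStream(), "UTF-8")
                    .useDelimiter("\\A").next();  // read the whole response
            reply.setPerformative(ACLMessage.INFORM);
            reply.setContent(xml);                // metadata as an XML document
        } catch (Exception e) {
            reply.setPerformative(ACLMessage.FAILURE);
        }
        myAgent.send(reply);
    }
}
```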

To integrate the DalSpace digital repository, the Deep Blue repository from the University of Michigan, the DLESE repository, ARIADNE, SMETE, and GATEWAY into the multi-agent platform, we created an intelligent agent for each. These agents present an indexer behavior, using the OAI-PMH harvesting protocol to index the categories (catalogues) and the records in the categories (resources) of each particular repository. Each metadata resource is stored in a database as a tree. In this manner, the information is available for the search process.
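A minimal harvesting sketch, using the standard OAI-PMH ListRecords verb to fetch Dublin Core records for local indexing, might look as follows; the base URL is a placeholder, as each repository publishes its own OAI-PMH endpoint.

```java
import java.net.HttpURLConnection;
import java.net.URL;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

// Fetch one page of records from an OAI-PMH endpoint. Each <record>
// element carries one resource's metadata and can be stored locally
// (e.g., as a node of a category tree) for later searching.
public class OaiPmhHarvester {
    public static void main(String[] args) throws Exception {
        String baseUrl = "https://example.org/oai"; // placeholder endpoint
        URL url = new URL(baseUrl + "?verb=ListRecords&metadataPrefix=oai_dc");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();

        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        factory.setNamespaceAware(true);
        Document doc = factory.newDocumentBuilder().parse(conn.getInputStream());

        NodeList records = doc.getElementsByTagNameNS(
                "http://www.openarchives.org/OAI/2.0/", "record");
        System.out.println("Harvested " + records.getLength() + " records");
    }
}
```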

In order to test the extended version of LORSE independently, we integrated our development into an OpenACS/dotLRN learning environment. For the integration process, it was necessary to install the LORSE client package on this platform, which implements a web service client on top of .LRN in order to send requests to the LORSE multiagent platform and process its responses. This package offers a user interface that allows users to search several repositories in a transparent way. Therefore, when teachers use the learning environment, they are able to search for LO in those repositories to enhance the activities designed in the platform without leaving the learning environment.

Section 3

LOOK: Micro-Context-Based Positioning Process for Open Learning Objects

The main purpose of this section is to introduce the micro-context-based positioning process LOOK, which aims to place learning objects previously found by LORSE in learning designs.

To achieve this objective, two different sources of information are available: (1) the information from the LOR, particularly the catalogue or indexing mechanism of the LO, and the LO metadata; and (2) the information provided by the teacher in the competence definition, which defines the knowledge that a person should possess and show in a specific context. The competence definition consists of four categories of information: competence general information, which provides general data about the competence; competence elements, which are smaller learning purposes that provide more specific and concrete learning process outcomes; didactical guidelines; and the competence context of application.
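As a rough illustration only, these four categories could be carried by a simple data structure such as the one below; the field names are our own assumptions for the sketch, not the schema used in the implementation.

```java
import java.util.List;

// Illustrative container for the four categories of a competence definition.
public class CompetenceDefinition {
    String generalInformation;        // general data about the competence
    List<String> competenceElements;  // smaller, more concrete learning purposes
    String didacticalGuidelines;      // guidance on how to teach the competence
    String contextOfApplication;      // the context where the competence applies
}
```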

Competence elements describe the essential knowledge that students should use in a specific context to demonstrate that they have acquired new information, and competence evidence is a mechanism that measures students’ levels of achievement in each particular competence element. Schum (1994) explained how evidence coming from different sources can be evaluated. In our case, analysis of the evidence concerns the relevance of the learning object to what the teacher is looking for, as defined in the competence definition of the course. In the following section, we introduce the main topics of relevance.

Learning Object Relevance

Borlund (2003) mentioned three central conclusions concerning the nature of relevance and its role in information behavior:

  • Relevance is a multidimensional cognitive concept whose meaning is largely dependent on users’ perceptions of information and their own information need situations;
  • Relevance is a dynamic concept that depends on users’ judgments of the quality of the relationship between information and information need at a certain point in time;
  • Relevance is a complex but systematic and measurable concept if approached conceptually and operationally from the user’s perspective.

Saracevic (1996) distinguished five basic types of relevance: (1) system or algorithmic relevance, which describes the relation between the query (terms) and the collection of information expressed by the information object(s); (2) topical or subject relevance, associated with the aboutness criterion; (3) pertinence or cognitive relevance, related to the information need as perceived by the user; (4) situational relevance, depending on the task interpretation; and (5) motivational and affective relevance, which is goal-oriented.

Ochoa (2008) used a modified version of Saracevic’s categories (eliminating the motivational and affective dimension) as the basis to define a complete set of metrics for LO relevance identification. These metrics are shown in Table 1.

Table 1

Learning Object Relevance in the Micro-Context

Automatic word sense disambiguation (WSD) has been an interest and concern since the earliest days of computer treatment of language in the 1950s. It is defined as the association of a given word in a text or discourse with a meaning distinguishable from other meanings potentially attributable to that word (Ide, 1997).

All disambiguation work involves matching the context of the instance of the word to be disambiguated with either information from an external knowledge source (knowledge-driven WSD), or information about the contexts of previously disambiguated instances of the word derived from corpora (data-driven or corpus-based WSD).

The assignment of senses to words is accomplished by relying on two major sources of information:

  • the context of the word to be disambiguated in the broad sense, including information in the text or discourse in which the word appears, together with extra-linguistic information about the text;
  • external knowledge sources, including lexical, encyclopedic resources (among others), and hand-devised knowledge sources, which provide data useful to associate words with meanings.

Most disambiguation work uses the local context of a word occurrence as the primary information source for WSD. The local or “micro” context is generally considered to be some small window of words surrounding a word occurrence in a text or discourse, from a few words of context to the entire sentence in which the target word appears.
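For illustration, the following sketch extracts such a micro context: the window of up to n words on either side of a target word occurrence.

```java
import java.util.Arrays;
import java.util.List;

// Extract the micro context of the token at targetIndex: the window of up
// to n words on each side of it (clipped at the text boundaries).
public class MicroContextWindow {
    static List<String> window(String[] tokens, int targetIndex, int n) {
        int from = Math.max(0, targetIndex - n);
        int to = Math.min(tokens.length, targetIndex + n + 1);
        return Arrays.asList(Arrays.copyOfRange(tokens, from, to));
    }

    public static void main(String[] args) {
        String[] tokens = "the class diagram models the static structure of a system".split(" ");
        System.out.println(window(tokens, 2, 2)); // [the, class, diagram, models, the]
    }
}
```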

We consider the micro-context of a learning object to be a part of the curricular structure where the learning object should be placed (the learning design to be generated).

Consider the curriculum structure in Table 2 that belongs to a course teaching Unified Modelling Language (UML), which was generated based upon the competence definition provided by a teacher.

Table 2

We need to place in the structure from Table 2 the LO obtained from a preliminary search, based either on the mechanism provided by the LOR or on the metrics described in Table 1.

We analyzed two different possible micro-contexts, the micro-context of the LO in the repository structure (catalogue), where the LO is placed, and the micro-context of the LO in the curricular structure, where the LO will be placed. Comparing these possible micro-contexts, a user can decide the best location for the learning object in the learning design.

The first step, then, is to define the micro-context of each learning object (LO) to be placed and also the possible micro-contexts in the curriculum structure.

The micro-context where a LO is placed in a LOR catalogue is provided by equation 1.

Equation 1

In equation 1, LO is the learning object, and C is the catalogue in the LOR. loMicroContext defines the LO micro-context in a particular LOR catalogue.
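As a rough operational reading of equation 1, the sketch below assumes that the micro-context is the bag of words collected along the catalogue path that leads to the LO, together with the LO title; this is our assumption for illustration, not the exact definition.

```java
import java.util.List;

// Assumed construction of loMicroContext: the words of the catalogue
// categories on the path to the LO, plus the LO's own title.
public class LoMicroContext {
    static String build(List<String> cataloguePath, String loTitle) {
        return String.join(" ", cataloguePath) + " " + loTitle;
    }

    public static void main(String[] args) {
        System.out.println(build(
                List.of("Computer Science", "Software Engineering", "Modelling"),
                "Introduction to OMG's Unified Modelling Language"));
    }
}
```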

Table 3 shows the loMicroContext of one LO, Introduction to OMG’s Unified Modelling Language.

Table 3

cuMicroContext defines the possible micro-context in the curricular structure (CS) provided by the teacher. These possible micro-contexts are given by equation 2.

Equation 2

The number of leaves in the CS defines the number of possible micro-contexts of the curricular structure. Three of the nine possible micro-contexts from Table 2 are shown in Table 4.

Table 4

Now, the second step is to calculate the similarity between the different CS micro-contexts and the LO micro-context in order to place the LO in the structure. For this step, we proposed using different metrics to calculate the similarity between the TF–IDF (term frequency–inverse document frequency) vectors inferred from the analyzed micro-contexts (CS and LO). We used similarity measures that have been extensively validated in information retrieval: the Dice coefficient (Dice, 1945) and the cosine distance.

The Dice coefficient measures the similarity between two vectors (Q and D) on a scale from 0 to 1, where 1 indicates identical vectors and 0 orthogonal vectors. Equation 3 shows the Dice coefficient.

Equation 3: $\mathrm{Dice}(Q, D) = \dfrac{2\, Q \cdot D}{\lVert Q \rVert^{2} + \lVert D \rVert^{2}}$

Cosine distance varies between -1 and 1, where -1 means exactly the opposite, and 1 means exactly the same, with 0 usually indicating independence, and in-between values indicating intermediate similarity or dissimilarity.

Equation 4: $\cos\theta = \dfrac{Q \cdot D}{\lVert Q \rVert \, \lVert D \rVert}$

Equation 4 presents the cosine distance, where θ represents the angle between Q and D. Based on the results of these similarity metrics, the LO is placed in the micro-context of the CS most similar to the micro-context of the LO in the repository structure (catalogue).
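For illustration, the sketch below computes both measures over micro-contexts represented as sparse term-weight maps (for example, TF–IDF weights); the example weights are invented.

```java
import java.util.HashMap;
import java.util.Map;

// Dice coefficient and cosine similarity between two sparse term-weight
// vectors, following Equations 3 and 4.
public class MicroContextSimilarity {

    static double dot(Map<String, Double> q, Map<String, Double> d) {
        double s = 0.0;
        for (Map.Entry<String, Double> e : q.entrySet()) {
            s += e.getValue() * d.getOrDefault(e.getKey(), 0.0);
        }
        return s;
    }

    // Dice(Q, D) = 2 (Q . D) / (|Q|^2 + |D|^2)
    static double dice(Map<String, Double> q, Map<String, Double> d) {
        return 2 * dot(q, d) / (dot(q, q) + dot(d, d));
    }

    // cos(theta) = (Q . D) / (||Q|| ||D||)
    static double cosine(Map<String, Double> q, Map<String, Double> d) {
        return dot(q, d) / (Math.sqrt(dot(q, q)) * Math.sqrt(dot(d, d)));
    }

    public static void main(String[] args) {
        Map<String, Double> lo = new HashMap<>(); // LO micro-context vector
        lo.put("uml", 0.8); lo.put("class", 0.5); lo.put("diagram", 0.3);
        Map<String, Double> cs = new HashMap<>(); // curricular micro-context vector
        cs.put("uml", 0.6); cs.put("use", 0.4); cs.put("diagram", 0.4);
        System.out.printf("dice=%.4f cosine=%.4f%n", dice(lo, cs), cosine(lo, cs));
    }
}
```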

Section 4: Evaluation

Description of the Proposed Evaluation Process

After implementing our solutions for searching and locating LO, we conducted an evaluation of our developments. As we mentioned in the introduction, this article introduces our solution for looking up learning objects in distributed learning object repositories and positioning them in the most promising micro-contexts of learning designs that will be generated in the future.

Brusilovsky, Karagiannidis, and Sampson (2001) reported that layered evaluation is a good approach for completely validating the elements of adaptive hypermedia systems. We used a layered evaluation process to measure the results of our research because the most important associated decision process (placing a learning object in a learning design structure) supports an adaptive mechanism (an adaptive learning design generation process based on students’ and teachers’ preferences). According to adaptive system evaluation theory, different layers should be considered in order to test all the elements of the adaptive system (Brusilovsky et al., 2001; Karagiannidis & Sampson, 2000; Brusilovsky & Sampson, 2004). We defined the following set of evaluation layers for our study:

1) The decision-making evaluation layer, where the question is, Are the decisions about where the learning objects should be placed valid and meaningful for teachers?

2) The user satisfaction evaluation layer, where the question is, Does the proposed solution match with the teachers’ expectations?

Test Course: Object-Oriented Design with UML

Object-Oriented Design with UML is a course offered by the University of Girona in the formal education system. The course is intended to establish student competence in UML: “The student will be able to design object oriented software using the unified modelling language (UML). The student will identify the most adequate diagrams to support the specification of each step in the object oriented development process.”

To fulfil this competence, five different competence elements and the associated competence knowledge were defined.

  • First competence element: Student defines Unified Modelling Language and identifies its main associated diagrams. Competence knowledge: Unified Modelling Language and its diagrams.
  • Second competence element: Student understands the concept of use case diagrams and their associated concepts, such as actors, inclusion, extension, and generalization. Competence knowledge: Use case diagrams.
  • Third competence element: Student understands the concept of class diagrams and designs class diagrams considering users’ requirements. Competence knowledge: Class diagrams.
  • Fourth competence element: Student understands the concept of interaction diagrams, particularly sequence and collaboration diagrams. He or she expresses the dynamic view of the software using these diagrams. Competence knowledge: Interaction diagrams, sequence and collaboration diagrams.
  • Fifth competence element: Student understands the concept of activity diagrams to construct activity flows. Competence Knowledge: Activity diagrams.

For this course, 87 open learning objects were constructed. These learning objects were placed in an instance of the Fedora Commons repository available at the University of Girona. The set of learning objects that supported the learning process included diverse types of atomic resources with specific pedagogical intentions: exercises, simulations, diagrams, figures, graphs, indices, slides, tables, narrative texts, experiments, problem statements, lectures, questionnaires, exams, and self-assessments. Furthermore, each learning object had an associated LOM metadata record in which the most relevant information about the learning object was defined through a labelling process.

The Decision-Making Evaluation Layer

The main purpose of this evaluation layer is to validate our process for placing learning objects from different learning object repositories in the curricular structure of a learning design.

According to the typologies of learning object repositories from McGreal (2008) and Sampson (2011) and the character of previously obtained results, we divided the testing scenarios into two environments: an uncontrolled and a controlled environment.

The uncontrolled environment consisted of repositories with diverse levels of labelling, where learning objects have different degrees of granularity. This environment permitted us to verify the possibilities and limitations of our approach in uncontrolled repositories where metadata labelling is not defined or supervised.

The controlled environment consisted of repositories available at the University of Girona where the labelling was previously defined using relevant information and the granularity of the learning objects was also defined. This kind of environment permitted us to verify more accurately the precision of the proposed algorithms on a controlled set of learning objects and their metadata. Both environments shared the same testing course; for this reason, the competence definition and the analyzed micro-contexts associated with the competence were the same for both environments.

First Scenario: An Uncontrolled Environment

Description

We used this scenario to validate our proposal for locating learning objects from different learning object repositories in the curricular structure of a learning design. The uncontrolled environment considered different learning objects repositories linked through the same interface provided by LORSE. The involved repositories were ARIADNE, Merlot, SMETE, and GATEWAY. Some learning objects in these repositories were labelled with LOM, others with Dublin Core, but in general with a small amount of information defined by the market-makers.

Method

We looked up the catalogue provided by each defined repository. We performed different kinds of searches in the defined repositories using diverse search criteria. The criteria were defined using the information provided by the metadata in each repository and the search mechanism provided by each one. We then selected the 10 most relevant LO for our study.

Using the previous information, we constructed the LO micro-context (loMicroContext) in the repository in two different ways. The first was built as described in the LOOK section above. The second also considered the LO metadata as part of the LO micro-context. This was necessary because, in many cases, the LO micro-context based on the LO catalogue alone was not significant for our study and did not support the proposed similarity analysis.

The next step was building the micro-contexts in the curricular structure (cuMicroContext). We defined six micro-contexts: five micro-contexts according to the five competence elements defined in the course competence definition and a general course micro-context. The general course micro-context consisted of the title, the description, and all the knowledge associated with the competence elements.

With all the micro-contexts involved (loMicroContext and cuMicroContext), we proceeded to compare them, calculating the similarity measures among the micro-contexts. We calculated the similarity of each learning object to each curricular structure micro-context. Then we consolidated an average similarity, grouping the learning objects according to the repository where they were placed.
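The consolidation step can be sketched as follows: the similarity values of the individual LO are averaged per source repository (the numbers are invented for the example).

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Average the similarity of each LO's micro-context to one curricular
// micro-context, grouped by the repository the LO came from.
public class SimilarityConsolidation {
    record Result(String repository, double similarity) {}

    public static void main(String[] args) {
        List<Result> results = List.of(
                new Result("Merlot", 0.24), new Result("Merlot", 0.22),
                new Result("ARIADNE", 0.15), new Result("ARIADNE", 0.18));
        Map<String, Double> avgByRepo = results.stream().collect(
                Collectors.groupingBy(Result::repository,
                        Collectors.averagingDouble(Result::similarity)));
        System.out.println(avgByRepo); // e.g., {ARIADNE=0.165, Merlot=0.23}
    }
}
```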

Results and Conclusions

Table 5 shows the most relevant results of this study. The first column defines the different criteria used for searching in the considered learning object repositories; the same criteria were used to define the LO micro-contexts. The remaining columns present the consolidated average similarity for the general course micro-context.

Let us introduce an example: 0.2368 is the average similarity measure calculated among the 10 learning objects retrieved using the metadata (in this case, abbreviated keywords) from Merlot. For each learning object, the similarity of its micro-context was calculated with respect to the general course micro-context.

We do not show the analysis for the other, partial curricular structure micro-contexts (those based on the competence knowledge) because the similarity measures were very small and extremely close to one another, which did not permit us to determine the most promising micro-context for a learning object.

One of the most important conclusions we drew from this study is that using the definitions from the catalogue provided by uncontrolled repositories to define the learning object micro-context in a new learning design is very difficult. This can be seen in row six of Table 5. The reason is simple: The catalogue definition is too general for the LOOK positioning process to place the learning objects in a micro-context defined by the competence; the micro-context of the catalogue does not match the micro-context extracted from the competence definition.

Table 5

This situation led us to redefine the micro-context of the learning object, as shown in Equation 5.

Equation 5

However, the similarity measures for both micro-contexts still do not show a strong relationship, although a manual analysis of the resource content shows strong relationships for the educational process.

Second Scenario: Controlled Environment

Description

In order to test our proposal in a controlled environment, we prepared a complete course on Object-Oriented Design with UML. The main objective of this study was to analyze our approach’s capacity to adequately place the learning objects into a specific course structure. The starting point was the “correct” classification developed by an expert teacher; that is, a teacher told us how he or she would put the objects into the proposed curricular structure.

Method

According to the information provided in the competence definition, a structure for the course was defined, as shown in Table 2. The teacher manually placed the 87 available objects in the structure defined for the course; in this way, we defined a point for comparison. Five micro-contexts associated with the UML course curricular structure (cuMicroContext) were defined, and the micro-context of each learning object (loMicroContext) in the UML course was defined.

Average similarity measures between each loMicroContext and each cuMicroContext were calculated. This means that for each LO in the course, we compared its micro-context to the five defined micro-contexts of the curricular structure. Grouping the LO according to the classification provided by the expert teacher, we calculated the average similarity for each cuMicroContext. Then, we compared the similarity of each set of learning objects to each cuMicroContext.

Results and Conclusions

Tables 6 and 7 present the LOOK system’s precision in placing the LO in the best curricular structure micro-context. The results come from calculating the average similarity for each set of learning objects previously placed by the teacher in a particular cuMicroContext. The results show a correspondence between the teacher’s classification and the LOOK process classification, and indicate that, in general, LOOK places the LO in the best cuMicroContext according to the teacher’s opinion.

In Tables 6 and 7, the rows show the identified cuMicroContexts (introduction, activity diagram, class diagram, use case diagram, and interaction diagram), and the columns represent the micro-contexts in which the teacher previously classified the sets of learning objects. The values in the table indicate the average similarity between the micro-context of each set of LO previously classified by the teacher and each cuMicroContext.

For example, in the first column, we calculated the average similarity between the set of LO previously classified by the teacher in the introduction micro-context and each cuMicroContext. The resulting similarities are 0.2222 for the introduction micro-context, 0.1379 for the activity diagram micro-context, 0.1194 for the class diagram micro-context, and so on. We observed that the highest average similarity calculated by LOOK for this set of LO, 0.2222, corresponds to the introduction cuMicroContext, which is where the teacher placed them. In this way, the decision LOOK made to place these LO in the introduction micro-context matches the teacher’s decision.

Table 6

Table 7

In particular, Table 6 presents the results of applying the Dice similarity measure. The Dice analysis yields a precision of 100%, which means the process has localized 100% of the sets of learning objects in the adequate curricular structure micro-contexts. Likewise, the cosine analysis yields 100% precision with respect to the classification provided by the teacher.

In general, the results of the study presented in Tables 6 and 7 show a strong correspondence between the classifications provided by the teacher and the possible classifications based on the similarity measures provided by the algorithms. The low values observed in Tables 6 and 7 are expected given the nature of the information available in the two micro-contexts: Some labels in the competence definition, as well as some in the LO metadata, contain information that is comparable but irrelevant to the placement decision. Only the words relevant to both micro-contexts actually matter, and the values shown in the previous tables capture this relevance when selecting the best object for each particular micro-context.

User Satisfaction Evaluation Layer

Description

Our main objective in this evaluation layer was to develop a qualitative study (Hernández Sampieri & Baptista Lucio, 2004) that would permit us to achieve a better understanding of potential opportunities for improving our approach and show us more effective ways to support this task. The strategy we used was to develop case studies, which permitted us to concentrate on a particular situation—in our case, the use of distributed learning objects for creating learning designs.

The analysis was based on interviews with teachers and on case studies in which we applied a gap model instrument (Hernández Sampieri & Baptista Lucio, 2004) to evaluate their satisfaction level. The gap model allowed us to capture the difference between the teachers’ expectations and the satisfaction that they actually obtained from the offered service.

The gap model was applied in a particular instrument (a survey) to measure user satisfaction for four aspects of our proposal:

  • satisfaction with the searching process (SEQ1), that is, the possibility of searching in different distributed repositories in a unique environment;
  • the usability of the tool, developed on a dotLRN platform, to integrate LORSE (SEQ2);
  • satisfaction with the results offered by the search process (SEQ3);
  • satisfaction with the possible location of LO in a curricular structure available for testing (SEQ4).

The instrument was applied to 15 teachers (cases) at the University of Girona, Spain, as part of descriptive research in which teachers had the opportunity to test our proposed application. These instructors teach courses from different areas of knowledge at the university: pedagogy, economics, law, psychology, tourism, and administration science. Some of these courses are already supported by a virtual learning environment (Moodle).

Methodology

We arranged sessions with teachers from the University of Girona. The main researcher introduced the teachers to the learning object repository environment, showing them some of the most important repositories, and then presented LORSE, its functionality, and its integration into the dotLRN learning management system as a portlet. The teachers had the opportunity to conduct some searches using the system. The LOOK process was described to the teachers, who observed the possible learning objects included in the test course. A discussion and brainstorming session was held with every teacher in order to gather their opinions about our research; they were very motivated in this session.

Results and Conclusions

The results presented in Figure 2 show a very close relationship between the importance the users assigned to the evaluated issues and their satisfaction with the solution. One of the most important parts of the descriptive analysis was the set of conclusions and opinions highlighted by the teachers: They all thought that reusing learning objects could facilitate the virtual learning process because the efforts of teachers at different universities might be united.

Figure 2

All the teachers emphasized the necessity of guaranteeing the quality of the learning objects selected to support a learning design. For them, quality means both that the selected learning object is contextualized to the teachers’ and students’ needs and that it guarantees the quality of the learning design.

According to the interviews with each teacher, we concluded that 60% of teachers consider it good practice for universities to include in their strategic plans the creation of spaces to update teachers about the resources for learning and teaching available around the world and in their own institutions. Teachers think that much of the research and knowledge developed by important institutions is not well known in the academic context and, for this reason, these efforts may not be widely used by teachers. This is the case for the available and open learning object repositories.

Conclusions and Future Work

The main purpose of this article is to introduce our research into searching for learning objects in distributed learning object repositories and their positioning process in the most promising micro-contexts of future learning designs. Our solution includes the definition of two different processes: the distributed learning object metadata searching process (LORSE) and the micro-context-based positioning process (LOOK), which we introduced here.

We presented our results in two evaluation layers: the decision-making layer and the user satisfaction layer. The decision-making layer led us to conclude that, on one hand, a search process for LO over controlled LOR to feed learning designs is a promising option: The learning objects selected and placed in the learning design matched the teachers’ decisions in a previous manual positioning process, and the importance of the metadata labelling process and the competence definition was demonstrated. On the other hand, the decision-making process for including learning objects from uncontrolled learning object repositories in semi-automatically generated learning designs is difficult. In fact, to achieve a viable solution with these repositories, the object metadata needs to be refined; the metadata available in the involved repositories currently has limited information.

To obtain a closer view of the teachers’ satisfaction with our proposal, we used a user satisfaction evaluation layer. The results obtained with teachers from the University of Girona permitted us to define some improvements from a user-centered design view. Although the results were promising and we obtained a high level of user satisfaction, we also need to address some important elements.

Some teachers suggested improving the appearance of the learning design player because they believe it could be difficult for students to manage. The teachers also suggested simplifying the LORSE and LOOK interfaces in order to facilitate easy use of the programs and improve the usability of our solution. The results obtained in the descriptive analysis stimulated the development of evaluation scenarios whose main issues are testing the usability and accessibility of the proposed solution.

Currently, our research interest is focused on some of the issues identified in our research. A good way to improve our solution for uncontrolled learning object repositories could be to develop a characterization of the learning object repositories using ontologies, which would optimize the search process to obtain more contextualized LO. Characterizing learning object repositories using ontologies would allow us to add the semantics needed to support the selection of repositories for a specific design process. In particular, as a result of the evaluation, we identified the necessity of the following knowledge: the character and granularity of the LOR, technical details, and main knowledge areas (e.g., math and languages). Finally, we need to develop a usability and accessibility testing scenario in order to verify in more detail the ability of our solution to meet those user needs.

Acknowledgments

The authors acknowledge the support of NSERC and the European Commission through the funding of the ALTERNATIVA Project (DCI-ALA/19.09.01/10/21526/245-575/ALFA III [2010] 88). They would also like to thank the Spanish government for its support through the funding of the Augmented Reality in Adaptive Learning Management Systems for All project (ARrELS – TIN2011-23930) and the Catalan government for its support through the European Social Fund (SGR-1202). Thanks also go to the National Call for Strengthening Research Groups and Artistic Creation from the National University of Colombia 2010–2012, programme code 14163.

References

Baldiris, S., Bacca, J. L., Noguera Rojas, A., Guevara, J. C., & Fabregat, R. (2011). LORSE: Intelligent meta-searcher of learning objects over distributed educational repositories based on intelligent agents. In Frontiers in Education Conference (FIE), 2011 (pp. F1E1–F1E5).

Baldiris, S., Graf, S., & Fabregat, R. (2011). Dynamic user modeling and adaptation based on learning styles for supporting semi-automatic generation of IMS learning design. Proceedings of the Eleventh IEEE International Conference on Advanced Learning Technologies (ICALT ’11) (pp. 218–220).

Baldiris, S., Santos, I. C., Fabregat, R., Jesus, G., & Boticario, G. (2007). Modelado de competencias en sistemas de gestión de aprendizajes [Competence modelling in learning management systems]. 3 Congreso Internacional sobre el Enfoque Basado en Competencias: Diseño Curricular por Competencias y Gestión de la Calidad del Aprendizaje. Girona: University of Girona.

Borlund, P. (2003). The concept of relevance in IR. Journal of the American Society for Information Science and Technology, 54(10), 913–925.

Brusilovsky, P., Karagiannidis, C., & Sampson, D. (2001). The benefits of layered evaluation of adaptive applications and services. In S. Weibelzahl, D. Chin, & G. Weber (Eds.), Proceedings of Workshop on Empirical Evaluation of Adaptive Systems at the Eighth International Conference on User Modeling, UM2001 (pp. 1–8).

Brusilovsky, P., & Sampson, D. (2004). Layered evaluation of adaptive learning systems. International Journal of Continuing Engineering Education and Lifelong Learning, 14, 402–421.

Dey, A. K., & Abowd, G. D. (2000, April). Towards a better understanding of context and context-awareness. Paper presented at the Workshop on the What, Who, Where, When, Why and How of Context-awareness, Conference on Human Factors in Computing Systems (CHI 2000), The Hague, Netherlands (pp. 1–6).

Dice, L. R. (1945). Measures of the amount of ecologic association between species. Ecology, 26(3), 297–302.

Dublin Core Metadata Initiative. (n.d.). About us [Web page]. Retrieved from https://dublincore.org/about-us

Duncan, C. (2004). Learning object economies: Barriers and drivers. Paper presented at eLearnInternational, Edinburgh, Scotland, February 18–19.

Duque Méndez, D. N., Ovalle Carranza, D. A., & Jiménez Builes, J. A. (2002). Artificial intelligence for automatic generation of customized courses. In E. Pearson & P. Bohman (Eds.), World Conference on Educational Multimedia, Hypermedia and Telecommunications (pp. 2693–2698). Chesapeake, VA: AACE.

Hernández, J., Baldiris, S., Santos, O., Huerva, D., Ramón, F., & Boticario, J. G. (2009). Conditional IMS LD generation using user modeling and planning techniques. Proceedings of the Eighth IEEE International Conference on Advanced Learning Technologies (ICALT ‘09) (pp. 228–232).

Hernández Sampieri, R., & Baptista Lucio, P. (2004). Metodología de la investigación [Research methodology]. Mexico: McGraw Hill.

Ide, N. (1997). Word sense disambiguation: The state of the art. New York, 1–41.

IEEE, LTSC (Institute of Electrical and Electronics Engineers, Learning Technology Standards Committee). (2009, March 7). The IEEE LTSC temporary home page [Web log post]. Retrieved from http://ieeeltsc.wordpress.com/

IEEE, LTSC. (2002). 1484.12.1-2002 IEEE Standard for Learning Object Metadata (version 1.2). Washington, DC: IEEE.

IMS Global Learning Consortium. (n.d.). Background [Web page]. Retrieved from http://www.imsglobal.org/background

Karagiannidis, C., & Sampson, D. (2000). Layered evaluation of adaptive applications and services. In International Conference on Adaptive Hypermedia and Adaptive Web-based Systems, AH 2000 (pp. 343–346).

Karampiperis, P., & Sampson, D. (2006). Adaptive learning objects sequencing for competence-based learning. Proceedings of the Sixth IEEE International Conference on Advanced Learning Technologies (ICALT ’06) (pp. 136–138). Washington, DC: IEEE Computer Society.

Koper, R., & Yongwu, M. (2009). Using the IMS LD standard to describe learning designs. In L. Lockyer, S. Bennett, S. Agostinho, & B. Harper (Eds.), Handbook of research on learning design and learning objects: Issues, applications, and technologies (pp. 41–86). New York, NY: Academic.

McGreal, R. (2008). A typology of learning object repositories. In H. H. Adelsburger, Kinshuck, J. M. Pawlowski, & D. Sampson (Eds.), Handbook on information technologies for education and training (pp. 5–28). New York, NY: Springer.

Morales, L., Castillo, L., & Fernández-Olivares, J. (2009, November). Planning for conditional learning routes. In A. H. Aguirre, R. M. Borja, & C. A. Reyes García (Eds.), MICAI 2009: Advances in Artificial Intelligence, Lecture Notes in Computer Science, 5845 (pp. 384–396). New York, NY: Springer.

Ochoa, X. (2008). Learnometrics: Metrics for learning objects (Doctoral dissertation, Katholieke Universiteit, Leuven). Retrieved from http://ariadne.cti.espol.edu.ec/xavier/papers/ThesisFinal2.pdf

Polsani, P. R. (2005). Use and abuse of reusable learning objects. Journal of Digital Information, 3(4), 1–10.

Reigeluth, C. M. (1999). Instructional design theories and models, a new paradigm of instructional theory. Abingdon, UK: Lawrence Erlbaum Associates.

Sampson, D. (2011). From open educational resources to open learning design: Sharing educational practices in the knowledge cloud. In Congreso Internacional de Ambientes de Aprendizaje Adaptativos y Accesibles | Hacia un sistema educativo comprometido con la diversidad.

Saracevic, T. (1996). Relevance reconsidered. In P. Ingwersen & N. O. Pors (Eds.), CoLIS 2, Second International Conference on Conceptions of Library and Information Science: Integration in Perspective (pp. 201–218). Copenhagen, Denmark: Royal School of Librarianship.

Schum, D. A. (1994). The evidential foundations of probabilistic reasoning. Wiley Series in Systems Engineering and Management. Hoboken, NJ: John Wiley & Sons.

Ullrich, C., & Melis, E. (2009). Pedagogically founded courseware generation based on HTN-planning. Expert Systems with Applications, 36(5), 9319–9332.

Wiley, D. A. (2000). Connecting learning objects to instructional design theory: A definition, a metaphor, and a taxonomy. In D. A. Wiley (Ed.), The instructional use of learning objects (pp. 1–35).






ISSN: 1492-3831