MyEnglishTeacher: VOD-Based Distance Academic English Teaching via an Adaptive, Multi-Agent Environment
Alexandra I. Cristea and Toshio Okamoto
AI&Knowledge Eng. Lab., Graduate School of Info Systems
University of Electro-Communications
Choufu, Choufugaoka 1-5-1, Tokyo 182-8585, Japan
{alex, okamoto}@ai.is.uec.ac.jp
Abstract- As English spreads as the international language of economics, politics, education, research and other domains, it becomes increasingly important to help people everywhere upgrade their English. In this way, language barriers can be lifted and international communication enhanced. The recent growth and expansion of the Internet allows us to build distance education systems today. However, uniform, one-level education for everybody has proved to be the wrong approach: distance teaching misses one important factor, the teacher, who is flexible, adaptive and, more importantly, human. Recent advances in artificial intelligence (AI), as well as in the underlying technology, allow us to implement, if not human-like automatic teachers, at least adaptive, flexible learning environments oriented towards individual user needs. In this paper we propose a new approach to English upgrading and new student adaptation mechanisms, which we support theoretically and with examples. Moreover, the paper systematically presents the background, rationale, design, implementation and preliminary testing and evaluation of the prototype of such a free, Internet-based, agent-based, long-distance teaching environment for academic English upgrading, with special focus on the learning environment and learner adaptability.
Keywords: CALL, Adaptive Learning Environments, Distance Education, Agents, Student Modeling, English
Teaching, ITS, Hypermedia and Multimedia, Adaptive Hypermedia, VOD
1 Introduction
1.1 The drive for adaptability
Recently, the hypermedia community has realized that one of the great advantages the Internet provides, besides access to and from the most remote parts of the world at any desired time (the "at any time, from any place" slogan), is a large space for customization and, moreover, user adaptation. The first to take this road were commercial sites, starting with password-based login, selection of the desired items for viewing, etc. In the educational domain, although sites providing educational material flourished [Report], the idea of adaptation was slow to penetrate. Lately, however, many researchers have noticed the latent potential of the adaptive approach. The combination of different backgrounds, cognitive styles, learning strategies, motivations, capacities, even hobbies and extra-curricular interests of students is incompatible with the "one course for all" policy. Teaching materials and systems on the web have one great disadvantage: they lack the human teacher. Therefore, the different situations and needs cannot possibly be covered by rigid, non-adaptive courseware. Some course and courseware providers try to correct this by using human teachers on-line, at certain hours or even around the clock. However, this is either a very costly solution, or one that gives up the "at-any-time" aspect of the web, or both. As a result, a more recent trend has emerged towards adaptive learning environments that approximate the user's needs by building user models. With the increase in flexibility and bandwidth of the Internet environment, such systems gradually move from stand-alone machines towards the WWW. Our research belongs to this new direction.
1.2 Academic English upgrading system rationale
In the academic field and in research and development, where international cooperation is important, English is used frequently. Academic English has, in effect, become international English: one has to understand a multitude of accents from all around the world to be able to function in the present globalized society. However, while accents vary, the spoken and, above all, the written academic language still has its rules and etiquette. Academics usually know some English and have a more or less wide English vocabulary. However, especially in Japan, but in other non-English-speaking countries as well, a person who can read English often runs into serious problems when it comes to writing a paper or giving an academic presentation in English. Therefore, we embed these necessary rules and etiquette in our teaching environment. The main aim of our system is to help academics exchange meaningful information with their peers through a variety of channels: academic homepages, academic papers, academic presentations, etc. As far as we know, this type of English teaching system is new. Some English teaching environments have appeared on the Web, but, as in [Aspera PrivaTeacher] or [EnglishLearner], they have two main defects: they are not free, and/or they are not automatic, relying instead on real human teachers at the other end of the line.
1.3 System goals
Our aim is a system capable of functioning autonomously, without human interference, as a virtual, long-distance classroom, embedding the necessary tutoring functions within a set of collaborating agents that serve the student.
The course is called 'MyEnglishTeacher' [Cristea], because it adapts over time to the needs and preferences of individual users. These needs can be expressed explicitly, or can be deduced implicitly by the system, represented by its agents.
In our virtual classroom, users find situational examples of academic life, presented as multimedia, with audio and/or video presentations, text explanations, pointers to the main patterns introduced with each lesson, exercises to test the user's understanding and, moreover, adaptive correction, explanation and guidance for the user's mistakes.
1.4 Paper organization
The paper is organized as follows. In the next section, we describe the background on which our current research is based, by connecting our work with similar approaches and explaining the differences between them. In section 3, we show the main modules and features of our system. Section 4 describes in more detail the course editor environment module, with the structure that we enforce for easy retrieval and adaptation adjustments. Section 5 presents the other main module, the learning environment, with its student models, agents, relatedness weight and importance coefficient computations. Section 6 is dedicated to student adaptation, where we discuss our solutions and give examples. Section 7 shows the other functions and functionalities of our system and presents them in the context of the prototype system tests. Section 8 evaluates the system according to the 5-star model. Finally, section 9 presents our current results, the orientation of our future research and some conclusions.
2 CALL systems in distance learning context
Virtual environments in education and distance-learning systems are recent trends in education worldwide. This trend is driven by the current spread of the Internet, as well as by a real demand for better, easier-to-access and cheaper educational facilities. Therefore, universities everywhere respond to the academic demand for technological and pedagogical support in course preparation by developing specialized software environments [Collis]. As bandwidths grow, traditional text environments gradually switch to multimedia and Video-on-Demand (VOD) systems [Tomek].
The problems in the currently available language education systems that motivated our research, compiled from aspects pointed out by the language specialist in our team, as well as by [Levy] and [De Bra], can be summarized as follows:
- lack of a clear view of the presented material; possible views are what [De Bra] calls "a bird's point of view", i.e., a compressed representation of the whole document space, or "a fish's point of view", i.e., small, useful slices of the whole representation graph that are related to the current user needs, etc.;
- lack of presentation structure related to a learner's individual characteristics;
- lack of exercises related to the learner's individual characteristics;
- lack of analysis of the interaction between learner and learning environment, with special focus on assimilation and accommodation;
- lack of a variety of problem-solving tasks to motivate students to think about their reading; the learning process does not enable learners to become active participants;
- lack of learning activities for checking learners' constructive understanding (requiring the learner not only to memorize, but also to summarize, generate, differentiate or predict);
- lack of explanatory feedback (telling the user why);
- lack of considerations about the effectiveness of the different physical attributes of the presentations on the students' learning.
These problems could not be solved by traditional systems, mostly due to their lack of adaptability or, in other words, intelligence. As stated in [Weiss]: "there is the need to endow these systems with the ability to adapt and learn, that is, to self-improve their future performance".
The objective of our research is to help learners achieve academic reading and writing ability. The course is intended for students whose starting English level is intermediate or upper-intermediate, who have some English vocabulary but not much practice in using it. The tutoring strategy used is to give the reader insight into his or her implicit or explicit learning strategies. The methodology applied is the communicative teaching approach, allowing communication and interaction between student and tutoring system via agents. The topics and stories used are mainly passages from textbooks, journals, reference works, conference proceedings and academic papers; in other words, real-life academic products.
3 MyEnglishTeacher’s system features and modules
The system is implemented as a multimedia environment for Academic Reading, Writing and Comprehension, allowing text, graphics, audio and video representations of the presented material. The programming and scripting languages used are CGI, Perl, JavaScript and Java. The plug-ins for the different media are reachable from within the system environment, so the student-user overhead is kept low.
Fig. 1: The system modules and their interaction. (The figure shows the content databases: text, graphics, audio, video, expression and link data; the general and private student databases; the Course Editor Environment with its display, for the teacher user; the Learning Environment with its display, for the student user; and the two agents, GlA and PA, mediating between them.)
Figure 1 shows a simplified system overview, represented as a set of interacting modules. The system offers two interfaces: one for the teacher/tutor user, for course-authoring purposes, and one for the student user, who is supposed to learn.
The information exchange from tutor to system consists of the input of lessons, texts, links between them, etc., but also of requests for help in editing. The data from the tutor are stored in six different, structured databases: a library of expressions that appear in the texts, a VOD database, a background image database, an audio database of listening examples, a full-text database and a link knowledgebase. These databases are used to build structural knowledge-bases.
The information exchange with the student is more complex. It includes usage of the presented materials, implicit or explicit advice, the student's advice requests, queries and searches, and the gathering of data on the student by the two agents, the Global Agent (GlA) and the Personal Agent (PA). Each of these agents has its own data-/knowledge-base about the student(s): the GlA stores general features of students, and the PA stores the private features of each student.
User modeling follows many patterns and has many applications. [Di Lascio] proposes a fuzzy-based, stereotype-collecting user model for hypermedia navigation. [Virvou] elaborates on Human Plausible Reasoning theory. [Collins] provides intelligent help for determining the cause of errors in software usage. [Sa] has shown how prior belief (belief bias) can influence the correctness of human (user) judgment. Other authors, like [Elliot], have studied the relation between achievement goals, study strategies and exam performance.
A realistic user model has to take into consideration the influence a system can have on the user, in order to allow an easy interpretation of the current state, as well as an easy and clear implementation of the user model. We discuss our user model implementation in more detail further on (section 5.1).
4 The Authoring Environment (Course editor)
To allow easy retrieval and easy adaptation, we impose some structural restrictions from the authoring tool, as follows. These
restrictions are also reflected by the system feedback (figure 4). Next, we describe the building blocks of our courses.
4.1 Texts
The smallest block is the TEXT. Each video/audio recording has a corresponding TEXT (dialogue, etc.). For each TEXT, it is analyzed whether video is necessary or audio suffices, as audio requires less memory space and allows more compact storage and speedier retrieval. Besides the main text, each TEXT has the following attributes: a short title, keywords, explanation, patterns to learn, conclusion and, finally, exercises.
Titles and keywords are naturally used for search and retrieval, but the explanation and conclusion files can also be used for the same purpose, as will be explained later on (section 4.3).
4.2 Lessons
One or more TEXTs (with or without video) build a LESSON. Besides its texts, each LESSON has the following attributes: title, keywords, explanation, conclusion and combined exercises (generated automatically or not).
In the following, a text or a lesson will be called a 'SUBJECT'.
4.3 Exercises (Tests)
Besides its main text, each EXERCISE has the following attributes: a short title, keywords, explanation, patterns to test and conclusion. Exercises are initially connected to either a text or a lesson.
This structure allows connecting exercises to the corresponding texts and lessons automatically, via relatedness computations (besides the connections input by the course designers, new connections can emerge). Exercises and tests can be compulsory or not, as will be discussed later on.
4.4 Priority and Relatedness Connections
Fig. 2: The subject link database. (The figure shows lessons consisting of texts Text 1 ... Text n, connected by directional priority connections and by weighted (w), non-directional relatedness connections; the insertion of a new lesson is marked as steps 1 and 2, and a test-point is indicated.)
The priority connections represent the normal flow of the lesson, as input by the teacher.
In addition, we use weighted connections, which we call relatedness connections, between subjects for which no specific learning order is required but which are related. These relations are useful, e.g., during tests: if one of the subjects is considered known, the other one should also be tested. They are set by the teacher, or computed automatically by the system.
The main difference between the priority connections and the relatedness connections is that the former are directional, weightless connections, whereas the latter are non-directional, weighted connections. We discuss the weight computation in section 5.2.
After the teacher's initial settings, the system automatically adds more links via keyword matching, from explicit keyword files and from keyword searches within subjects.
The teacher / multimedia courseware author can decide, for each lesson, whether it is more meaningful to connect individual texts or entire lessons. The way a new lesson is inserted, by connecting it at least to the previous and the following lesson in the lesson priority flow, is shown in figure 2 (steps 1, 2).
Priority connections without a corresponding relatedness connection can exist (figure 2). This happens when, e.g., common course design knowledge dictates the priority, but the lessons' learning contents are quite different. These kinds of priorities are connections related to an optimal student learning strategy, not similar-contents connections.
These priorities help the system place the current subject in the global subject map. This result can be shown to the teacher or not, depending on the options under which the system is running. The final graph is used for the student, and it can be shown to the student upon request, serving as a map guide.
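To make the distinction between the two connection types concrete, the following minimal sketch shows one possible in-memory representation of the subject link database. The class and field names are our own illustration (assumptions), not the data structures of the implemented system.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch of the subject link database of section 4.4; the class and
// field names are our own assumptions, not the system's actual implementation.
class Subject {
    final String title;
    final List<String> keywords = new ArrayList<>();
    // Priority connections: directional and unweighted (the normal lesson flow).
    final List<Subject> priorityLinks = new ArrayList<>();
    // Relatedness connections: non-directional and weighted (stored on both endpoints).
    final Map<Subject, Double> relatednessWeights = new HashMap<>();
    boolean testPoint; // compulsory exercise marker (section 4.5)

    Subject(String title) { this.title = title; }

    void addPriorityLink(Subject next) {
        priorityLinks.add(next);                    // one direction only, no weight
    }

    void addRelatednessLink(Subject other, double weight) {
        relatednessWeights.put(other, weight);      // symmetric link,
        other.relatednessWeights.put(this, weight); // mirrored on the other subject
    }
}
```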
4.5 Test-points
The teacher should mark TEST-POINTS, also called compulsory exercises (figure 2), at which it is necessary to pass a test in order to proceed (these tests can be at any SUBJECT level). The idea is borrowed from games, where a player cannot proceed to the next level unless s/he has passed some test (collected some items, etc.).
5 The Learning Environment
5.1 Student models and agents
The system gradually builds two adaptive student models: a global student model (GS) and an individual student model (IS),
managed by two intelligent agents: the personal agent (PA) and the global agent (GlA). Some features, common to all
students, can be captured in the GS. However, many studies have shown [Tomek] that personalized environments and
especially, personalized tutors have a better chance of transferring the knowledge content. This is true even in the more
general sense of a tutor and student, where the tutor can be man or machine, and the student likewise.
In this work, we mean by agent a “computer system situated in some environment”, “capable of autonomous action”, “in
the sense that the system should be able to act without the direct intervention of humans”, “and should have control over its
own actions and internal state” [Jennings]. These agents’ intelligence is expressed by the fact that each agent “is capable of
flexible autonomous action in order to meet its design objectives”, and that it is “responsive” (it perceives its environment),
“proactive” (opportunistic, goal-directed), “social” (able to interact) [Jennings], and of an “anticipatory” nature (having a
model of itself and the environment, and the capability to pre-adapt itself according to these models) [Ekdahl].
In the following, the raw data stored for the two student models, the GS and IS, is presented.
5.1.1 The GS
The GS contains the global student features:
- the common mistakes;
- favorite pages, lessons, texts, videos and audios, and a grading of each test's difficulty (according to how many students do that test well or not);
- search patterns introduced and the subjects accessed afterwards: if many IS record the same order, then it is recorded in the GS.
5.1.2 The IS
The IS contains the personal student features:
- the last page accessed;
- grades for all tests taken, mistakes and their frequency; if the student takes a test again and succeeds, his/her last grade is deleted, but his/her previous mistakes are collected for future tests;
- the order of access of texts inside each lesson;
- the order of access of lessons (this can be a guide to other students: "when another student was in your situation, he/she chose...");
- the frequency of accessing texts/lessons/videos/audios, etc., for guidance and current-state checks;
- search patterns introduced and the subjects accessed afterwards (to link subjects that the system didn't link before via patterns).
5.1.3 The PA
The role of the personal agent is to manage the information gathered about the user and to extract from this information useful user guidance material. Each step taken by the user inside the environment is stored and compared both with what was proposed to the user and with what the user was expected to do (from the PA's point of view). The differences between previous expectation and current state are exploited for the generation of new guidance.
Besides analyzing its own user and extracting knowledge from the data about him/her, the PA can request information from the GlA about, e.g., what other users chose to do in a similar situation.
Furthermore, the PA can contact other PAs with similar profiles (after a matchmaking process) and obtain similar information as from the GlA, only with more specificity. The PA can decide to turn to another PA if the information from the GlA is insufficient for a decision about the current support method.
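A minimal sketch of this order of consultation is given below; the interface and method names are hypothetical, introduced only to illustrate the fallback from the GlA to similar PAs.

```java
import java.util.List;
import java.util.Optional;

// Hypothetical sketch of the PA's order of consultation (sections 5.1.3-5.1.4):
// the GlA is asked first, and only if its condensed information is insufficient
// are PAs with similar profiles contacted. All names below are assumptions.
interface AdviceSource {
    Optional<String> adviseOn(String situation); // empty if no usable advice
}

class PersonalAgentSketch {
    private final AdviceSource globalAgent;          // GlA: condensed, swiftly loadable
    private final List<AdviceSource> similarAgents;  // other PAs found by matchmaking

    PersonalAgentSketch(AdviceSource globalAgent, List<AdviceSource> similarAgents) {
        this.globalAgent = globalAgent;
        this.similarAgents = similarAgents;
    }

    String decideSupport(String situation) {
        // 1. Ask the GlA first: cheap, but non-specific.
        Optional<String> advice = globalAgent.adviseOn(situation);
        if (advice.isPresent()) return advice.get();
        // 2. Only then query similar PAs: more specific, but more time-consuming.
        for (AdviceSource peer : similarAgents) {
            Optional<String> peerAdvice = peer.adviseOn(situation);
            if (peerAdvice.isPresent()) return peerAdvice.get();
        }
        // 3. Otherwise, fall back to the default priority-flow suggestion.
        return "follow the priority flow";
    }
}
```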
Every time a user enters the system, the PA decides what material s/he should study during that particular session and generates a corresponding list. This can be seen in figure 3, where user chen is offered two lessons. The course index is therefore dynamic, not static. The PA will add to or subtract from this material according to the interaction with the user during the session.
According to [Maes], the PA can also be viewed as an interface agent ("a computer program to provide assistance to a user dealing with a particular computer application", in this case, a learning environment). However, the PA's job description is wider, as follows.
5.1.4 The GlA
The global agent averages information from several users, in order to obtain a general student model. The deductions of the global agent are non-specific.
The GlA is necessary because, otherwise, the system would not profit from the fact that different users have interacted with it, and each new interaction can smoothen the path for subsequent users.
The GlA is consulted before the PA starts looking for information from other PAs, a process that can be more time-consuming. The role of the GlA is therefore to offer the PAs condensed information in an easily accessible, swiftly loadable form.
From this description, it is clear that the GlA is subordinate to the PA (from the student user's point of view). The GlA cannot directly contact the student user, unless the PA explicitly requests it.
If the GlA considers that its intervention is required, it still has to ask the PA for permission. In this way, the generation of confusing advice is avoided.
5.2 Subject material’s relatedness weight computation
The way lessons and texts are connected via a link knowledgebase was explained previously (section 4.4), and figure 2 shows these connections and their meaning. In this section we discuss the way the two agents, GlA and PA, interact with this link knowledgebase.
As shown previously, the priority connections between lessons have no weight attached, but the relatedness connections have weights. These weights are changed interactively, as they reflect 'how related' two subjects are. This information is useful both for guiding the student during learning and for testing the student.
Weight values are initialized as strong (eq. 1) if they result from the teacher's selection, and weaker if the system (GlA) deduced them. The weights are then changed by the GlA according to the behavior of the students within the 'MyEnglishTeacher' environment (eq. 2).
w_{A,B}(0) = 1 for the teacher's selection; 0.5 for the system's generation; 0 otherwise;    (1)

w_{A,B}(t + t_const) = α · w_{A,B}(t)
  + f1(number of times the connection A,B was activated, by the user or by other users, depending on whether the weight belongs to the personal or to the global model)
  + f2(number of times the connection A,B was accepted when proposed in relation to an unknown subject)
  + f3(number of times the connection A,B was accepted when proposed in relation to a query)
  + f4(number of times the tests related to the connection A,B were solved satisfactorily or not; this term can be positive or negative);    (2)

where α ∈ (0,1) is the forgetting rate; f1 through f4 are linear functions; w_{A,B} > 0 is the weight between subjects A and B (if w_{A,B} drops to 0, the relatedness connection disappears); t is the time and t_const is the period between weight updates (the weights are not updated at every move, as that would make the computation too time-consuming).
It is easy to see from these equations that related subjects will form cluster-type groups. However, inside groups, the
weights express the relative relatedness of the components.
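A minimal sketch of the update in eq. (2) follows, under the assumption that the forgetting rate multiplies the previous weight; the slope values inside f1 through f4 are illustrative assumptions, not the system's actual parameters.

```java
// Sketch of the relatedness-weight update of eq. (2). The forgetting rate and
// the slopes of the linear functions f1 through f4 are illustrative assumptions.
class RelatednessWeightUpdate {
    static final double ALPHA = 0.9;  // forgetting rate, in (0,1)

    static double f1(int timesActivated)     { return 0.05 * timesActivated; }
    static double f2(int acceptedForUnknown) { return 0.03 * acceptedForUnknown; }
    static double f3(int acceptedForQuery)   { return 0.03 * acceptedForQuery; }
    // f4 can be positive or negative, depending on the related test results.
    static double f4(int testsSolved, int testsFailed) { return 0.04 * (testsSolved - testsFailed); }

    // Called once per update period t_const, not at every move of the student.
    static double update(double weight, int timesActivated, int acceptedForUnknown,
                         int acceptedForQuery, int testsSolved, int testsFailed) {
        double next = ALPHA * weight + f1(timesActivated) + f2(acceptedForUnknown)
                    + f3(acceptedForQuery) + f4(testsSolved, testsFailed);
        // If the weight drops to zero, the relatedness connection disappears.
        return Math.max(next, 0.0);
    }
}
```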
5.3 Subject material’s importance coefficient computation
The importance of each text is computed as a direct relation between the weight of the current text and the result of the compulsory test (TEST-POINT) of the current stage. If, during the navigation activity of different users, the study of a certain text turns out to be important for the outcome of the current stage's test, then the importance coefficient of that text is increased. If, on the contrary, the text turns out to be superfluous (section 6.3, ex. 3), its relative importance with respect to the test is decreased. Such a subject will not be offered to the student, unless one of the following two situations occurs:
1. the user asks for more information on a specific related text;
2. the user fails a test that is related to that text.
The importance coefficient is denoted by I and is computed as follows (A is the current text, and B is the test to which it is related).
Note that all texts are considered related to the compulsory tests of the current stage; keyword matches increase this relatedness. At the beginning, the importance of every course item is the same and takes the value of 100%:

I_{A,B}(0) = 100%;    (3)

I_{A,B}(t + t_const) = I_{A,B}(t) · f1(time spent on text A) · f2(whether the optional tests of the current level were solved satisfactorily or not) · f3(whether the compulsory tests of the current level were solved satisfactorily or not);    (4)
The way this importance factor works is further explained and exemplified in sections 6.2 and 6.3.
It should also be noted that the importance factor between a text and a test is different from their relatedness factor, which is computed similarly to the relatedness weights. The reason is that, since tests have a structure similar to the text structure, it is relatively easy to compute keyword matches between tests and texts. These relatedness values evolve differently, however. This type of calculation allows establishing the equivalence of different texts, or even paths, by combining the relatedness value with the importance coefficient.
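The importance update of eqs. (3) and (4) can be sketched as follows; the concrete factor functions are illustrative assumptions (for instance, f1 is taken here as a bounded ratio of time spent to expected time), chosen only to show the multiplicative form.

```java
// Sketch of the importance-coefficient update of eqs. (3)-(4). The factor
// functions below are illustrative assumptions, not the system's actual ones.
class ImportanceUpdate {
    static final double INITIAL_IMPORTANCE = 1.0;  // eq. (3): 100%

    // f1: bounded factor for the time spent on text A, relative to its expected duration.
    static double f1(double secondsSpent, double expectedSeconds) {
        return 0.8 + 0.2 * Math.min(secondsSpent / expectedSeconds, 1.0);
    }
    // f2, f3: above 1 when the optional / compulsory tests of the current level
    // were solved satisfactorily, below 1 otherwise.
    static double f2(boolean optionalSolved)   { return optionalSolved ? 1.05 : 0.95; }
    static double f3(boolean compulsorySolved) { return compulsorySolved ? 1.10 : 0.90; }

    static double update(double importance, double secondsSpent, double expectedSeconds,
                         boolean optionalSolved, boolean compulsorySolved) {
        return importance * f1(secondsSpent, expectedSeconds)
                          * f2(optionalSolved) * f3(compulsorySolved);
    }
}
```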
5.4 Other PA and GlA mechanisms
Each student's PA copies the subject link knowledgebase and modifies it for its student, according to his/her individual behavior. These modifications are, just like the modifications of the GlA's subject knowledgebase, adaptive.
The PA has to check with the GlA from time to time, in order to decide on possible updates of its own subject knowledgebase, as the GlA adaptively reflects the current average trend.
The PA generates the 'next learning steps' (the current index) and the 'review suggestions'. The latter contain suggestions to consult lessons and texts connected to the errors that appeared in the student's current and previous tests.
The PA generates the current index following the normal learning flow set by the priority order. Going in the opposite direction can happen only when the student cannot answer some tests or quizzes satisfactorily. In that case, the review suggestion window activates, as explained, connections to previous subjects, to direct the student to where s/he can fill the gaps in his/her knowledge.
The PA also has to choose between two or more priority links. The usual procedure is to present all of them to the student in the order of their relatedness connection weights. If no such connections exist, a random order is assumed. However, the current user's choices will be recorded by the PA, reported to the GlA and reflected in the new connections. That means that the current user's choices might be recommended to the next user. The ignored, non-related links will appear lower on the list next time.
The PA and GlA "manage" (compute) the relatedness and importance coefficient values.
Moreover, the PA also embeds pedagogical knowledge, which is beyond the scope of the current paper.
5.5 Summary of the agents' structure
From the described interactions between agents and databases, and between the agents themselves, it is clear that the agents of the system work in two ways. The first is based on the embedded rule/knowledge systems, which try to foresee, prevent and solve conflicting situations. The second is as adaptive, learning objects, which can change their representation of the subject space by creating and deleting links and by changing weights and importance coefficients.
The next step in the design of the system's agents will be focused on adaptive problem, quiz and test generation. In short, this design is made necessary by the fact that a student who fails a test has to be presented, after some more learning is done, with a new test of similar difficulty and contents. As it is difficult for teachers to generate as many tests as would be necessary for such repeated situations, this task is to be passed to the system's agents.
A very important task of each agent is also to keep the subject link database consistent. The agents inform the teacher(s) if some subjects form loops (determined by the priority connections set by the teacher(s)), if some subjects become inaccessible, etc. Ultimately, when a teacher is not available, they make corrections by themselves and decide, from the student-users' feedback, about the appropriateness of those changes.
6 Student adaptation
6.1 Adaptation hypotheses
We use adaptation because of a number of hypotheses that we have hinted at previously. In this section, we present them systematically. The hypotheses we make about the needs for adaptation and the places where adaptation is needed (as opposed to places where no adaptation is required, even if adaptation might be possible) are fully reflected in the means, methods and techniques we use for adaptation.
Hypothesis 1: Teachers write more course material than is necessary. Teachers have a tendency to cover the whole area with their course material, in order to give the students a clear and complete basis for learning. However, depending on the students' capacities, background, etc., just a part of the material should suffice for their learning.
Hypothesis 2: Unseen equivalences can exist between course materials. This is the same as saying that, if student S studies either the materials A, B, C or the materials D, E, F, s/he can successfully pass test T. In this case, the sequence A, B, C is equivalent to the sequence D, E, F. Of course, this equivalence can also exist on a smaller scale, as in "material A is equivalent to material B".
Hypothesis 3: Students are different in their learning needs and learning styles, and no learning need or style is better than another. The first part states something that is nowadays considered trivial in adaptive learning circles. However, there are still teachers who consider a certain teaching style, corresponding to certain learning styles, better than others. We take here the democratic view that, given time, patience, good will and interest, anybody can learn anything. An adaptive environment that is tailored to the particular learning styles can speed up this process. This hypothesis, especially its justification, is perhaps debatable, but we note that the high-level knowledge and discoveries of the past are taught nowadays in school without any problems.
Hypothesis 4: Students are different in their background knowledge, mental capacity and motivation. This hypothesis is important to make, as it represents the basis of many adaptive courseware systems. However, we treat these issues as a whole in our system. A better approach might be to separate them, especially the motivation issue. However, treating that issue can easily degenerate into building software for entertainment, a quite distinct notion that we do not wish to pursue at the moment. Another reason is, as said previously, that our system is dedicated to academics who wish to improve their English. Therefore, they do not start from zero, and we presume that motivation already exists: they are reasonably capable adults who do not have to be sweet-talked into learning and who honestly want to gain more knowledge.
Hypothesis 5: An average student behavior exists. This hypothesis is needed because we are building an average student model. We can make this hypothesis based on background information from the regular teaching system and the regular students. We expect, however, to also find behavior clusters, corresponding to learning styles and backgrounds. The issue of behavior clusters is not treated at the current stage of our research, but we include it in the plans for our future research.
6.2 Adaptation types
Adaptations to students can be manifold. An adaptation can be, for instance, the tuning of the color scheme of the system to the user's preferences, and so on. Here we focus mainly on the adaptation of the course material and its presentation to the student's needs, as explained through our adaptation hypotheses. From the learning environment's point of view, adaptation can mean content-level adaptation and link-level adaptation [Beyerbach]. We consider the following adaptation types:
I. Overriding priorities: all these methods mimic the ant-colony model [Schoofs].
3. Jumps: result indirectly as an effect of the importance coefficient computations. An example of a jump situation is shown below:
   Regular course order (as set by the priority connections): A->B->C->D
   Jump course order: A->B->D
   A jump can result if the importance of the point C (representing some course material) has decreased below a threshold.
4. Reverse order: results from the path-step information recorded from previous users. An example of a reverse order situation is shown below:
   Regular course order (as set by the priority connections): A->B->C->D
   Reverse order: A->B->D->C
   A reverse order situation can result if enough students (a threshold number of students) have chosen to "walk" the path in the reverse sequence. The system will then start to automatically suggest the reverse path order as the better one.
5. Crossed courses: the case when two (or more) courses merge to form a single, "best path" course. An example of crossed courses is shown below:
   Regular course order 1 (as set by the priority connections): A1->B1->C1->D1
   Regular course order 2 (as set by the priority connections): A2->B2->C2->D2
   Crossed course order: A1->B2->D1->C1
   A crossed course order can result if enough students (a threshold number of students) have chosen to "walk" the path in the respective sequence. The system will then start to automatically suggest the new path.
At the present stage, we mainly envision using jumps, as too much interference with the course order established by the teachers can be controversial. Also, jumps do not require adding an extra map to the already existing maps of priority connections and relatedness connections.
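A jump can be produced, in a minimal sketch, by filtering the priority-ordered path through an importance threshold; the threshold value below is an assumption made only for illustration.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Sketch of the "jump" adaptation: subjects whose importance coefficient has
// fallen below a threshold are dropped from the suggested course order.
// The threshold value is an illustrative assumption.
class JumpAdaptation {
    static final double IMPORTANCE_THRESHOLD = 0.5;

    static List<String> suggestPath(List<String> priorityOrder, Map<String, Double> importance) {
        List<String> path = new ArrayList<>();
        for (String subject : priorityOrder) {
            // Keep a subject unless its importance has dropped below the threshold.
            if (importance.getOrDefault(subject, 1.0) >= IMPORTANCE_THRESHOLD) {
                path.add(subject);
            }
        }
        return path; // e.g. A->B->C->D becomes A->B->D when C falls below the threshold
    }
}
```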
II. More information for in-depth study
When a student desires to study a specific concept more in depth, the system automatically suggests the "most related" items, as computed from the relatedness connections described previously. The listing follows the weight order of the relatedness connections, from the largest value down to a threshold (example: section 6.3).
III. Feedback for failed tests
If a student fails a test, the system identifies the items causing the failure and suggests further study, using the relatedness connections. The order of the suggestions is the same as for in-depth study. To quickly identify the specific concept that has been misunderstood, the tests have to be designed similarly to the texts and lessons and provide the same set of attributes (section 4.3). A feedback example for a test failure is given in the next section (Example 2).
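Both suggestion mechanisms (II and III) list related subjects in decreasing relatedness-weight order, down to a cut-off. Before turning to the examples, a minimal sketch of this listing follows; the threshold value is an assumption.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Map;

// Sketch of the suggestion lists used for in-depth study and failed-test feedback:
// related subjects are listed by decreasing relatedness weight, down to a cut-off
// threshold. The threshold value is an illustrative assumption.
class RelatedSuggestions {
    static final double WEIGHT_THRESHOLD = 0.3;

    static List<String> suggest(Map<String, Double> relatednessWeights) {
        List<String> suggestions = new ArrayList<>();
        relatednessWeights.entrySet().stream()
                .filter(entry -> entry.getValue() >= WEIGHT_THRESHOLD)
                .sorted(Map.Entry.<String, Double>comparingByValue(Comparator.reverseOrder()))
                .forEach(entry -> suggestions.add(entry.getKey()));
        return suggestions; // e.g. the texts A..E offered at point H in Example 1 below
    }
}
```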
6.3 Examples of adaptation
In line with the adaptation types explained above, this section presents several examples of adaptation situations.
Example 1: An in-depth study request and the system’s response.
[…]
First student's trajectory: (1)->…->(7)
(The original diagram shows the trajectory passing through lesson H, a lesson about differences in spoken and written style in English, and ending at a TEST-POINT; the texts A, B, C, D and E are linked to H.)
At point H, the student asks the system: More information? (by pressing a button)
The system replies: A, B, C, D, E (the texts A-E are examples of spoken and written English; this in-depth information is deduced from the relatedness links)
First student's result at the test-point: satisfactory (85%)
[…]
Observation: For the sake of a better overview, not all priority links or relatedness links are shown in these examples. The result is what is called in the hypermedia literature "a fish's view", i.e., a local view corresponding to the situation and the current needs (see section 2).
Example 2: A bad test result and the system’s advice
[…]
Second student's trajectory: (1)->…->(4)
(The original diagram shows a shorter trajectory through lesson H, ending at the same TEST-POINT.)
Second student's result at the test-point: poor (40%)
The student asks the system: Advice? (by pressing a button)
The system replies: A, B, C, (D, E) (deduced from the relatedness links, as above)
[…]
Observation: The study items and the test-point are the same as in Example 1. Here we can notice both the usage of relatedness links as pointers after a bad test result, and the effect of learning an optimal (a sufficient) path.
Example 3: Path shortening due to importance coefficient changes
System’s course suggestion: (i-1)100%->(i)100%->(i+1)100%->…->(compulsory-test-point)100%
Student ale's trajectory: (i-1)->(i)->(i+1)->…->(compulsory-test-point)
(i): "Usage of Yes and No in Conversation. Scene I"; keywords: Yes and No, Affirmation, Negation, Question answering, positive questions
(i+1): "Usage of Yes and No in Conversation. Scene II"; keywords: Yes and No, Affirmation, Negation, Question answering, Negative questions
Relatedness connection initialization: number of common keywords / max(number of keywords of i, number of keywords of i+1) = 4/5 = 80%
Student ale's timing results:
(i-1): x seconds
(i): 115 seconds; video length: 65 seconds (video selected)
(i+1): 10 seconds; video length: 67 seconds (video not selected)
(i+2): y seconds
…
test-point: test result satisfactory (90%)
Test example: Q1: Your user name is ale, right? Q2: Isn't your user name ale? […]
Conclusion: Point i+1 was not completed, but the test-point was passed successfully.
System reaction: the importance value of i+1 decreases.
Due to this change in importance, the system will change the trajectory suggested to the next student as follows.
Example 4: The shorter path presented to the next student
System’s course suggestion: (i-1)x%->(i)100%->(i+1)80%->…->(compulsory test-point)100%
Observation: the importance shown is the importance with respect to the test-point, but it also results from the relations between subjects, as in the relationship between i and i+1.
The system could make the inference in example 4 because it records each item selected by the students (including whether the multimedia and video presentations were actually used or not) and compares it with the expected results (here, the time to be spent). A further step, which we have not touched on here, is the fact that the texts i and i+1 are identical in relation to the test-point. This kind of deduction can be made based on both the importance coefficient and the relatedness value.
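The keyword-based relatedness initialization used in Example 3 can be written down in a few lines; the sketch below reproduces the 4/5 = 80% value from the example (the class and method names are our own assumptions).

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Sketch of the keyword-based relatedness initialization from Example 3:
// (number of common keywords) / max(number of keywords of A, number of keywords of B).
class KeywordRelatedness {
    static double initialWeight(List<String> keywordsA, List<String> keywordsB) {
        Set<String> common = new HashSet<>(keywordsA);
        common.retainAll(new HashSet<>(keywordsB));
        int maxSize = Math.max(keywordsA.size(), keywordsB.size());
        return maxSize == 0 ? 0.0 : (double) common.size() / maxSize;
    }

    public static void main(String[] args) {
        // The keyword lists of texts i and i+1 in Example 3: 4 common keywords out of max 5.
        List<String> textI  = List.of("Yes and No", "Affirmation", "Negation",
                                      "Question answering", "positive questions");
        List<String> textI1 = List.of("Yes and No", "Affirmation", "Negation",
                                      "Question answering", "Negative questions");
        System.out.println(initialWeight(textI, textI1)); // prints 0.8
    }
}
```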
7 Other functions, system tests
Fig. 3: A partial view of the registration interface
When entering the system for the first time, the student user must register. Figure 3 shows part of the registration interface. For confidentiality, the user must choose a pseudonym (username) and a password to access his/her own profile stored by the system. The stored data have multiple uses (e.g., to help the GlA and PA in their decisions). If the user forgets his/her username and password, the system can retrieve them by requesting some of the information provided in this form, e.g., name, e-mail and birthday.
The user information is collected with the purpose of applying the adaptive advising method. Users are not only different from one another at the start, but they also evolve differently in time; their knowledge changes (this being the actual purpose of the teaching system), so the personal user model has to evolve together with its user.
After the registration/login procedure, the user can continue his/her study from where s/he stopped last time (figures 4, 5). The download links (plug-ins), as well as a variety of links to free on-line dictionaries, can be seen in the left frame of figures 4 and 5.
Fig. 4: The learning environment interface
In addition, a text-only version is available for users with limited system memory space.
Figure 5 shows how we combine text, video and sound. The cues that are used in the videos are completely reproduced as texts, for easy understanding and quick search. However, listening-only tests/lessons are possible.
Next, the role of the TEST-POINTS (section 4.5) is explained. If the student wants to jump over one or more subjects, s/he can proceed with only one test, made of a random combination of tests from the previous TEST-POINTS, in a proportional relation. If the student fails, another test is generated similarly, a number of times. If s/he fails again, s/he is given pointers to where to return to study, and cannot proceed until these requests are satisfied.
Fig. 5: A personalized text and corresponding movie
This is because the PA represents a personalized tutor and, as a tutor, it cannot allow the student to pass a level without taking the respective exams. Therefore, the system is aimed mainly at learning and less at referencing (although referencing can be done, assuming the referenced material was previously studied and belongs to the current level or the levels before).
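The jump test described above (a random, proportional combination of questions from the skipped TEST-POINTS) can be sketched as follows; the sampling scheme, drawing from each test-point in proportion to the size of its question pool, is our interpretation, and all names are assumptions.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;

// Sketch of the jump test: a random combination of questions drawn from the
// skipped TEST-POINTs, in proportion to the size of each question pool.
// The sampling scheme is an illustrative assumption.
class JumpTestGenerator {
    static List<String> generate(Map<String, List<String>> questionsPerTestPoint, double fraction) {
        List<String> test = new ArrayList<>();
        for (List<String> pool : questionsPerTestPoint.values()) {
            // Take a proportional share of every skipped TEST-POINT's questions.
            int share = Math.max(1, (int) Math.round(fraction * pool.size()));
            List<String> shuffled = new ArrayList<>(pool);
            Collections.shuffle(shuffled);
            test.addAll(shuffled.subList(0, Math.min(share, shuffled.size())));
        }
        Collections.shuffle(test); // mix the questions from different test-points
        return test;
    }
}
```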
Besides these compulsory tests, optional tests can be added for student self-evaluation. The results of these tests are also recorded by the agents, but only for guiding students and detecting error patterns.
Moreover, the system allows the user to leave messages when entering the system.
8 Evaluation and Discussion
Next, we discuss our environment according to the 5-star model [Merrill]. The five stars that a training system/program can maximally achieve [Merrill] are presented one by one, and for each star we discuss its demands and how our system attempts to meet them.
First Star – An-Environment-for-Learning
Learning_is_best
… when-knowledge-is-connected-to-the-experience-of-the-learner.
YES: In the sense that pre-tests are possible, if the teacher sets a compulsory test-point at the beginning of the course, forcing the system to generate advice from the start. If not, the path adapts later on, corresponding finally to the experience and background of the learner.
… when-expert-knowledge-is-presented-and-demonstrated.
YES/POSSIBLE: This is up to the course implementer/teacher, to design his/her courses that way. Also, the student can view the generated maps, in a fish-eye view or a bird's-eye view. Due to privacy concerns, the student cannot really be shown whole paths of other users.
… when-the-learner-applies-his/her-knowledge.
YES: Knowledge application occurs first in tests, at optional/compulsory test-points. Later on, it can be successfully applied in academic life: at conferences, when writing journal papers, when giving courses in English, when reading other researchers' results, when discussing with other academics, etc.
… when-the-knowledge-is-integrated-into-the-learner's-world.
POSSIBLE: We already store user data regarding the students' preferences, research interests and hobbies, in order to later integrate language-related knowledge into their private worlds. With this data acquisition we aim at searching the web for similar English-usage examples, this time related to the user's preferences (research/hobby). E.g., users can be shown how certain grammar patterns appear in everyday academic life.
Second Star – A-Problem-to-Solve
Learning_is_best
… when-the-learner-is-engaged-in-a-real-world-problem.
POSSIBLE: Up to the course designer; a further step is to implement not only collaboration between student and system, or asynchronous, indirect collaboration between learners via the good-path strategy, learner patterns, etc., but also synchronous, on-line, real-time chat and other methods of collaboration between students. An example is collaboration in problem solving, which we have not pursued yet, because one very important feature of WWW learning, the "at-any-time" characteristic, can be lost.
Third Star – Information-to-Learn
Learning_is_best
… when-the-learner-is-shown-rather-than-told.
YES: The dialogue-based VOD approach we have taken, which allows seeing the texts and the patterns to learn in real-life situations, provides a learning environment where the learner is more often shown than told.
… when-the-information-is-consistent-with-the-learning-goal.
YES: The information consists of courses on academic English, and the target is academic people who want to improve their academic English for real-life academic conversations and other, non-conversational academic situations.
… when-media-plays-a-relevant-instructional-role.
YES/POSSIBLE: The use of VOD, text, images and sound is possible and encouraged, but it is ultimately up to the teacher and course designer to use these features properly and not abuse them. The ideal way is to use media both as a teaching tool, in lessons, and in testing. Combinations are possible and leave free space for good courseware development.
… when-the-learner-is-directed-to-important-information.
YES: The learner is guided/directed to the information that is important/needed for him/her via the adaptive features and the automatic navigation guidance, but also via pre-stored pointers, instructions and indications drawn up by the course designers.
Fourth Star – Skills-to-Practice
Learning_is_best
… when-the-learner-practices-the-skill.
YES: The multitude of tests in the courses, optional and compulsory, gives the learner a chance to practice his/her acquired skills. A further step would be multiple, equivalent, automatic test generation.
… when-practice-is-consistent-with-the-learning-goal.
YES: The tests correspond to subjects, and the subjects correspond to real academic life English usage situations.
… when-the-learner-can-demonstrate-skill-improvement.
YES: Evaluations at each stage and compulsory test-points allow the learner to demonstrate the achieved knowledge and language usage skills. Furthermore, the learners can start tackling real-life academic situations where the common language is English.
Fifth Star – A-Coach-to-Guide
Learning_is_best
… when-the-learner-is-shown-how-to-detect-and-correct-errors.
POSSIBLE/NOT-YET: This feature is not yet implemented in the current version. Bad test results trigger pointers to yet uncovered course material and to course items related to the mistakes that were made.
… when-problems-start-easy-and-then-get-harder-and-harder.
POSSIBLE/NOT-YET: Can be done by the teacher, by tuning the compulsory tests, but is not automatically generated.
… when-coaching-is-gradually-withdrawn-for-succeeding-problems.
NOT-YET: Unless the teacher specifically implements it, there is no problem-solving coaching, except, as said previously, pointing to uncovered or not yet understood topics (deduced from bad test results on those topics). It makes no sense to withdraw this last feature from the present system; withdrawal would only make sense in the context of multiple hints in problem solving that are gradually withdrawn.
Concluding from this evaluation, we can say that, according to the 5-star model [Merrill], our system at present has around 4 stars, although some improvements can still be made.
9 Conclusions and future research
We proposed an Adaptive, Web-based, Academic English Teaching Environment, called "MyEnglishTeacher", and described the system rationale, design and implementation. We then showed the system modules: an authoring environment for the teacher user(s), which helps in generating lessons, and a learning environment for the student user(s). We also presented the course structuring details of the first environment and concentrated on the student environment and its adaptive features. In particular, the learning environment is based on two intelligent agents, interacting with each other and with the student user, to guide the student through a new course for academic English, under development in our laboratory. We also explained how our agents evolve and exhibit intelligence. They build and modify student models with the help of a double graph: a non-weighted, directional priority graph and a weighted, non-directional relatedness graph. Moreover, they base their decisions on importance coefficients attached to each piece of course material.
In addition, we explained how, from the authoring system design requirements, we enforce the generation of structured content databases, which serve as a basis for the rule/knowledge-bases used and expanded by the two agents. Moreover, we showed the computation and implementation of the weight-update function for the relatedness links and of the importance coefficients, and explained their usage. We showed how, with the priority graph built by the teacher, the relatedness graph automatically built by the system and the importance computation, student guidance, direction and orientation inside the multimedia web courseware become possible. Further on, we introduced our adaptability hypotheses and the adaptation types we consider, and showed some simple working examples of student adaptation. Moreover, we presented some other functioning examples of the "MyEnglishTeacher" environment. We performed some preliminary tests of the prototype system, covering registration, identification, data processing and automatic course-index generation for each user, as well as some tests of access from the laboratory and from abroad. Finally, we made a qualitative evaluation of our system according to the five-star model.
Our system is still evolving, and for the next steps we will focus on the items described below:
- Design a model for behavioral clusters (possibly a fuzzy model). This can help avoid sending all students down a beaten track in the forward prediction step. (This is not vital for the in-depth learning type or for navigation guidance after test failure.)
- Design a combination value of the importance coefficient and the relatedness weight, for the identification of identical texts and paths.
- Design more tests for our system, with real-world situations and real students, for a real classroom-type evaluation.
- Extend and implement other AI-related, as well as non-AI-related, agent capabilities (e.g., automatic, intelligent quiz generation).
- For keyword searches, the PA should also search the web for appropriate patterns, together with the student's research-interest keywords, to show "real-life" example usage.
We believe that with our system we are addressing more than one current need: the need for an English tutor for academics, which should also be easily accessible (i.e., on-line), free, adaptive and user-friendly.
Bibliography
Aspera PrivaTeacher, http://www.privateacher.com/
Beyerbach, B., Developing a technical vocabulary on teacher planning: preservice teachers’ concept maps, Teaching and
Teacher Education, Vol.4, 1988.
Collins, A., Michalski, R. (1989) “The logic of plausible reasoning: A core theory”, Cognitive Science, Vol.13, 1-49.
Collis, B. (1999) “Design, Development and Implementation of a WWW-Based Course-Support System”, Frontiers in
Artificial Intelligence and Applications Series, Eds. Cumming, G., Okamoto, T., Gomez, L., IOS Press Ohmsha, 11-18.
Cristea, A., Okamoto, T., Cristea, P. (2000) “MyEnglishTeacher – An Evolutionary, Web-based, multi-Agent
Environment for Academic English Teaching”, CEC-2000, San Diego, USA, 1345-1353.
De Bra, P., Hypermedia Structures and Systems, http://wwwis.win.tue.nl/~debra/2L690/
Di Lascio, L., Fischetti, E., Gisolfi, A. (1999) “A Fuzzy-Based Approach to Stereotype Selection in Hypermedia”, User Modeling and User-Adapted Interaction, Vol.9, No.4, Kluwer, 285-320.
Ekdahl, B., Astor, E., Davidsson, P. (1995) “Towards Anticipatory Agents”, Wooldridge, M., Jennings, N.R. (Eds.), Theories, Architectures, and Languages, Lecture Notes in Artificial Intelligence, Springer, 191-202.
Elliot, A.J., McGregor, H.A., Gable, S. (1999) “Achievement Goals, Study Strategies, and Exam Performance: A
Mediator Analysis”, Journal of Educational Psychology, Vol.91, No.3, 549-563.
EnglishLearner, http://www.englishlearner.com
Jennings, N.R., Wooldridge, M. (1998) “Applications of Intelligent Agents”, Agent Technology: Foundations, Applications, and Markets, Springer-Verlag.
Sa, W.C., West, R.F., Stanovich, K.E. (1999) “The Domain Specificity and Generality of Belief Bias; Searching for a
Generalizable Critical Thinking Skill”, Journal of Educational Psychology, Vol.91, No.3, 497-510.
Levy, M. (1997) “Computer-Assisted Language Learning”, Oxford: Clarendon Press.
Maes, P. et al. (1993) “Learning Interface Agents”, Proc. 11th Nat. Conf. on Artificial Intelligence, AAAI, MIT/AAAI Press.
Merrill, D., (2000) “Does Your Training Rate 5 Stars”, Invited Talk, IWALT 2000, New Zealand,
http://lttf.ieee.org/iwalt2000/keynotes.html.
Report: http://www.rand.org/publications/MR/MR975/MR975ch6final.htm
Schoofs, L., Naudts, B. (2000) “Ant Colonies are Good in Solving Constraint Satisfaction Problems”, CEC 2000, 1190-1196.
Tomek, I. (1999) “Virtual Environments in Education”, same as [Collis], 3-10.
Virvou, M., Du Boulay, B. (1999) “Human Plausible Reasoning for Intelligent Help”, User Modeling and User-Adapted Interaction, Vol.9, No.4, Kluwer, 321-375.
Weiss, G., Sen, S. (Eds.) (1996) “Adaptation and Learning in Multi-Agent Systems”, Springer, Lecture Notes in Artificial
Intelligence, Vol.1042.