
The Intelligent Classroom: Towards an Educational Ambient Intelligence Testbed

Rabie A. Ramadan
Computer Engineering Department, Cairo University, Cairo, Egypt
[email protected]

Hani Hagras, Moustafa Nawito, Amr El Faham, and Bahaa Eldesouky
Ambient Intelligence Center, German University in Cairo, New Cairo City, Egypt
hani@essex.ac.uk

Abstract— The widespread deployment of embedded computer networks as part of people's everyday lives is leading current research towards smart environments and Ambient Intelligence (AmI). AmI is a new information paradigm in which people are empowered through a digital environment that is aware of their presence and context and is sensitive, adaptive and responsive to their needs. In this paper, we describe the intelligent Classroom (iClass), which aims to realize the AmI vision in education in universities and schools. We describe the architecture employed to build the iClass and present three different directions: the utilization of RFID technology, interaction with the user via speech, and the development of intelligent agents that learn the user's behavior and adapt to its changes over short and long time intervals.

Keywords— intelligent classroom; sensor networks; fuzzy logic; RFID

I. INTRODUCTION

Mark Weiser described smart environments as physical worlds in which smart sensors, actuators, displays and computational elements are seamlessly embedded into our daily activities and interact with us invisibly. However, smart environments have to be combined with different Artificial Intelligence (AI) techniques and algorithms, including artificial neural networks, evolutionary computation, swarm intelligence, artificial immune systems and fuzzy systems. Together with logic, deductive reasoning, expert systems, case-based reasoning and symbolic machine learning systems, these intelligent algorithms help in forming smart environments. Combining AI techniques and algorithms with smart environments leads to a new research field named Ambient Intelligence (AmI). AmI is defined as an electronic environment that is sensitive and responsive to the presence of people in a specific environment.

AmI techniques and algorithms have been utilized in much smart environment research. For instance, at the University of Essex [11] the authors tried to achieve the vision of ambient intelligence by embedding intelligent agents in user environments so that they can control them according to the needs and preferences of the user. A novel fuzzy learning and adaptation technique for agents embedded in ambient intelligence environments has been presented.

In the field of education, ambient intelligence also plays a key role. For instance, several efforts have been made in this regard, including North Carolina State University's Web Lecture System (MANIC) [10], the Berkeley Multimedia Research Center's Lecture Browser [7], AutoAuditorium [1], STREAMS [9], and AutoTutor [12].

Recent advances in RFID technology have made it possible to obtain the advantages of passive tags with high frequency ranges. These RFIDs have been used in many applications; for example, they have been used for person identification in universities and companies. Nowadays, new passports permanently store information such as a digital picture of the owner, a digital version of the passport and biometric information about the owner in the passport's RFID tag. There are many other applications that involve RFID usage in hospitals, animal identification, transportation, store payments and banks [4].

Figure 1 shows the market in terms of RFID tags sold for different purposes. As can be seen, RFID technology is used most for retail apparel while it is almost neglected for people identification; only 1.3 million tags are sold for this purpose. However, we believe that our iClass is one of the fields that prove the importance of RFID in educational smart environments.

In this paper, we introduce a unique testbed for an educational ambient intelligence classroom (iClass) in which different AmI techniques and algorithms have been exploited. Section II describes the iClass architecture (as part of the Ambient Intelligence Center at the German University in Cairo (GUC)) in terms of hardware and networking. Section III portrays the importance of RFID technology in the classroom environment. Section IV presents how the user can interact with the iClass via speech. Section V presents how the iClass can learn the user's behavior and adapt to it over short and long time intervals.


    Figure 1: RFID market in different areas

II. ICLASS ARCHITECTURE

The iClass, as shown in Figure 2, is a testbed for an educational ambient intelligence system. The iClass looks like any other classroom, containing the normal furniture of a usual room, including desks, chairs, a whiteboard and a smart board. However, the iClass contains a large number of embedded sensors, actuators and processors and a heterogeneous network. The iClass is a multiuser space that can be used for different teaching activities. As shown in Figure 2, there is a standard multimedia PC that combines a projector with a flat-screen monitor, and another digital monitor is placed outside the class to inform students of the starting and ending time, the name of the lecture topic and any other announcements related to the given course, as shown in Figure 3.

Figure 4 shows the iClass network infrastructure. The iClass is equipped with a weather station. In addition, the iClass has the following sensors: time of day and date, internal light level, external light level, internal temperature, external temperature, humidity and presence. The effectors can control the following in the class: six dimmable spot lights, two window blinds and heater/cooler air conditioning. These sensors and actuators are obscured in the class with the intention that the user should be completely unaware of the intelligent infrastructure of the class, which is required to reach the aim of educational ambient intelligence. Although the iClass looks like any other class, the ceiling and walls hide numerous networked embedded devices residing on two different networks: a Lonworks network and an IP network. These networks provide the diverse infrastructure present in ubiquitous-computing environments and let us develop network-independent solutions. Because we need to manage access to the devices, gateways between the different networks are critical components in such systems, combining appropriate granularity with security [6].

Lonworks, Echelon's proprietary network, includes a protocol for automating buildings, and many commercially available sensors and actuators exist for this system. The physical network installed in the iClass is Lonworks, and the iLON SmartServer provides the gateway to the IP network. This server lets us read and alter the states and values of sensors and actuators via a standard Web browser using HTML forms; the form data is written to a text file by a Java parser program, and the agent reads its input from this file. Most of the sensors and effectors in the iClass are connected via the Lonworks network. Echelon's i.LON SmartServer, shown in Figure 5, is the key to a business's energy conservation and operations strategies. It not only lets us access, control and monitor virtually any electronic device in the iClass, but it also gives us the power to use information intelligently to save energy, improve operations and lower maintenance costs.
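As a concrete illustration of this data flow, the following minimal Python sketch shows how an agent process might poll the intermediate text file produced by the parser and turn each reading into a sensor snapshot. The file name, the "name=value" line format and the sensor names are assumptions made only for illustration; the actual parser, file layout and agent in the iClass are the Java components described above.

```python
# Minimal sketch (illustrative only): poll the text file written by the Java
# parser that mirrors the i.LON SmartServer readings and build a sensor snapshot.
# The file name and the "name=value" line format are assumptions, not the real
# iClass format.
import time

SNAPSHOT_FILE = "ilon_readings.txt"      # hypothetical path

def read_snapshot(path=SNAPSHOT_FILE):
    """Parse one 'name=value' pair per line into a dictionary."""
    snapshot = {}
    with open(path) as f:
        for line in f:
            if "=" not in line:
                continue
            name, value = line.strip().split("=", 1)
            try:
                snapshot[name.strip()] = float(value)
            except ValueError:
                snapshot[name.strip()] = value.strip()
    return snapshot

def poll(handle, period_s=5.0, cycles=3):
    """Periodically pass the latest sensor snapshot to an agent callback."""
    for _ in range(cycles):
        handle(read_snapshot())          # e.g. {'internal_light': 420.0, ...}
        time.sleep(period_s)

if __name__ == "__main__":
    poll(handle=print)                   # print each snapshot as a simple demo
```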

    Figure 2: iClass internal view.

    Figure 3: iClass external view.

    Figure 4: iClass network infrastructure

Figure 5: Echelon's i.LON SmartServer [8]

Figure 6 shows photos of the various sensors and the weather station located within the iClass. The weather station is installed outside the iClass to measure the outdoor humidity, cloud cover, wind direction, wind speed, rainfall, solar radiation and outdoor temperature. Any networked computer that can run a standard Java process can access the iClass; thus, the multimedia PC can also act as an interface controlling the devices inside the classroom. Equally, the interface can be accessed from wireless devices such as a mobile phone using a 3G interface, which is a simple extension of the web interface and can monitor and control the iClass directly. Currently, our fuzzy agent learning mechanism and interface operate from the standard multimedia PC in the iClass.

III. RFID IN ICLASS

In this section, the role of RFID technology in the iClass is explained. There are two RFID readers, as shown in Figure 4: one for lecturers and one for students. Each lecturer has an RFID tag that includes the lecturer identifier (ID). Once the lecturer enters the iClass, the lecturer RFID reader reads his/her ID and sends it to the multimedia computer. A smart agent is designed specially to deal with this information. The smart agent looks up the classroom schedule in the school schedule and retrieves 1) the name and ID of the lecturer assigned to the classroom at this time, 2) the names and IDs of the students currently assigned to the classroom at this time, and 3) a copy of the lecture materials that were uploaded by the lecturer before the lecture time. The agent is also responsible for turning on the data projector and the smart board and showing the materials on the smart board. Once the lecturer is recognized and students start to enter the classroom, the students' RFID reader begins to read their RFID tags and sends this data to the multimedia computer as well. The student process is similar to the lecturer process; however, a timer and a number of read attempts are set on the RFID reader to read the students' tags.
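A minimal sketch of this attendance logic is given below, assuming a hypothetical read_tags() helper that returns the tag IDs currently seen by the student reader; the real iClass agent was implemented in .NET, as described next. A student is marked present only if his or her tag is read within the configured time window, and the reader is polled a fixed number of times.

```python
# Illustrative sketch of the timed attendance logic (not the actual .NET agent).
# read_tags() is a hypothetical helper returning the tag IDs currently in range.
import time

def take_attendance(read_tags, enrolled_ids, lecture_minutes=90,
                    reads=30, threshold_fraction=0.5):
    """Poll the student RFID reader 'reads' times during the first half
    of the lecture and mark enrolled students whose tags are seen."""
    present = set()
    window_s = lecture_minutes * 60 * threshold_fraction
    interval = window_s / reads
    start = time.time()
    for _ in range(reads):
        if time.time() - start > window_s:
            break                        # past the attendance time threshold
        for tag_id in read_tags():
            if tag_id in enrolled_ids:   # ignore tags not on the class list
                present.add(tag_id)
        time.sleep(interval)
    absent = set(enrolled_ids) - present
    return present, absent
```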

To evaluate the overall RFID system, a software agent was implemented using .NET on the iClass multimedia computer to automate student attendance during the last semester for one of the subjects. The performance of this system was tested against manual attendance, and the accuracy of the automatic attendance system was found to be 97% on average, which is an acceptable result. The remaining 3% error was due to the time threshold that we set and/or problems with the RFID signals. The time threshold restricts student attendance to the first half of the lecture time, while manual attendance (the lecturer takes the attendance himself/herself) does not have this condition. The problems with the RFID signals could be due to students keeping their cards in a wallet in their back pocket, carrying other cards with them, or unethical behavior such as a student carrying other classmates' cards.

IV. SPEECH INTERACTION WITH THE ICLASS

Speech communication is an essential part of human psychology; in fact, human symbolic behavior can be studied through speech communication. It is one of the oldest academic disciplines as well as one of the most modern academic interests. However, speech communication is not limited to human interpersonal communication; it also extends through technological mediation such as telephony, movies, radio, television and the Internet, which reflects the dominance of spoken communication in many aspects of human psychology.

Figure 6: The iClass sensors, weather station and multimedia video projector.

The challenge lies in designing a spoken communication language between the human and the computer in which the computer can listen, speak, understand and, more importantly, learn. It is therefore expected that, with modern technology, current interest will be in developing voice-controllable systems. It is also expected that human-machine spoken language will change the way we live and work [14].

One of the challenges in the iClass is to allow speech interaction with its users. Since the iClass software was built with modularity in mind, we were able to import the Sphinx-4 speech recognition library [13]. In the iClass speech interaction, we utilized the features provided by the Sphinx-4 library for the benefit of iClass environment control. Along with the Sphinx-4 speech recognition library, we had to define our own grammar for iClass control. This grammar includes the commands Open light, Close light, Amplify light, Decrease light, Open curtain, Close curtain, Amplify curtain, Decrease curtain, Open air condition, Close air condition, Amplify air condition and Decrease air condition.
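The recognized phrases are then mapped to actuator actions. The sketch below illustrates this dispatch step only; the command set is the grammar listed above, but the actuator function and the way the recognizer hands hypotheses to the agent are simplified assumptions, not the actual iClass implementation.

```python
# Illustrative dispatch of recognized grammar phrases to actuator actions.
# set_actuator() is a stand-in for the real iClass actuator interface.
def set_actuator(device, action):
    print(f"{device} -> {action}")       # placeholder for the real control call

GRAMMAR_ACTIONS = {
    "open light": ("light", "on"),            "close light": ("light", "off"),
    "amplify light": ("light", "increase"),   "decrease light": ("light", "decrease"),
    "open curtain": ("curtain", "open"),      "close curtain": ("curtain", "close"),
    "amplify curtain": ("curtain", "increase"), "decrease curtain": ("curtain", "decrease"),
    "open air condition": ("air_condition", "on"),
    "close air condition": ("air_condition", "off"),
    "amplify air condition": ("air_condition", "increase"),
    "decrease air condition": ("air_condition", "decrease"),
}

def on_speech_hypothesis(text):
    """Called with the text string returned by the speech recognizer."""
    action = GRAMMAR_ACTIONS.get(text.strip().lower())
    if action is None:
        return False                     # utterance outside the iClass grammar
    set_actuator(*action)
    return True

on_speech_hypothesis("Amplify light")    # example: brighten the lights
```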

In addition, we designed a fuzzy agent named the Speech Recognizer Based Intelligent Fuzzy Agent (SRBFA). It is based on an unsupervised, data-driven, one-pass approach for extracting fuzzy rules and membership functions from data to teach a fuzzy controller that models the user's behavior. The data is collected by monitoring the user in the environment over a period of time. The learned Fuzzy Logic Controller (FLC) provides an inference mechanism that produces output control responses based on the current state of the inputs. Our adaptive FLC therefore controls the environment on behalf of the user and also allows the rules to be adapted and extended online, facilitating life-long learning as the user's behavior drifts and environmental conditions change over time.

SRBFA comprises five phases in addition to the environment readings, as shown in Figure 7: 1) monitoring the user's interactions and capturing the input/output data associated with their actions (the user input is given through speech and the interface); 2) extraction of the fuzzy membership functions from the data; 3) extraction of the fuzzy rules from the recorded data; 4) the agent controller; and 5) the life-long learning and adaptation mechanism.

    Figure 7: SRBFA phases

It is necessary to be able to categorize the accumulated user input/output data into a set of fuzzy membership functions which quantify the raw crisp values of the sensors and actuators into linguistic labels, such as normal, cold or hot. SRBFA is based on learning the particularized behaviors of the user and therefore requires these membership functions to be defined from the user's input/output data recorded by the agent. A clustering approach [2] based on fuzzy C-means (FCM) clustering was used for extracting the fuzzy membership functions from the user data.

Our dataset of user instances contains many attributes. We start by generating p initial clusters using the FCM approach. Each cluster has a center, which is an r-dimensional vector of r centroid values. The final cluster centers are then converted into the extracted fuzzy sets (linguistic labels). We used this algorithm because it is able to learn the individual behavior of the user. Different membership functions were generated for different users due to the different behaviors observed when the iClass interface was used in the first experimental phase.
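The sketch below illustrates this step under simplifying assumptions: a plain FCM clustering implemented with NumPy groups the recorded values of a single variable, and each resulting cluster center becomes the apex of a triangular fuzzy set. The number of clusters p, the fuzzifier m and the triangular shape are illustrative choices and are not claimed to match the exact extraction procedure of [2].

```python
# Illustrative FCM-based extraction of membership functions for one variable.
# Cluster count p, fuzzifier m and triangular sets are assumptions for the sketch.
import numpy as np

def fcm(data, p=3, m=2.0, iters=100, eps=1e-5, seed=0):
    """Basic fuzzy C-means on an (n, r) data matrix; returns (p, r) centers."""
    rng = np.random.default_rng(seed)
    n = data.shape[0]
    u = rng.random((p, n))
    u /= u.sum(axis=0)                       # memberships sum to 1 per sample
    for _ in range(iters):
        um = u ** m
        centers = um @ data / um.sum(axis=1, keepdims=True)
        dist = np.linalg.norm(data[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        new_u = dist ** (-2.0 / (m - 1.0))
        new_u /= new_u.sum(axis=0)
        if np.abs(new_u - u).max() < eps:
            u = new_u
            break
        u = new_u
    return centers

def triangular_sets(centers, lo, hi):
    """Turn sorted 1-D cluster centers into triangular (left, apex, right) sets."""
    c = np.sort(centers.ravel())
    pts = np.concatenate(([lo], c, [hi]))
    return [(pts[i], pts[i + 1], pts[i + 2]) for i in range(len(c))]

# Example: recorded internal temperature readings (hypothetical values).
temps = np.array([[18.2], [19.0], [21.5], [22.0], [22.4], [26.8], [27.5], [28.1]])
centers = fcm(temps, p=3)
print(triangular_sets(centers, lo=15.0, hi=32.0))  # e.g. cold / normal / hot
```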

To study the performance of our speech interaction system, we conducted different experiments. In one of these experiments, the user had to spend three consecutive days inside the iClass. Once the user entered the iClass, he recorded a voice sample, which allowed the system to recognize the speaker successfully and to create the user profile with which to associate the fuzzy rules, as it was the first time the user had used the classroom.

As shown in Figure 8, during the first day the user had to define the meaning of each voice command to the system under different environmental conditions. The system's rate of learning new rules was highest on that day. As the surrounding conditions changed while the classroom adapted, the system had to generate new rules relating to this adaptation. On the second day, the user was not satisfied with all of the adaptations applied by the classroom when a voice command was given. The user had to override some rules to adapt the system again according to the new situation. On the third day, the system had stabilized, as the user was satisfied with the adaptations that occurred when he gave voice commands and no further overriding occurred.

    Figure 8: The number of rules learned during the experiments.

V. AN INTELLIGENT AGENT TO LEARN AND ADAPT TO THE USERS' BEHAVIOURS

Fuzzy logic has been proven to provide a good framework for modeling various types of uncertainty in information. Fuzzy Logic Controllers (FLCs), the most popular application of fuzzy logic, provide an adequate methodology for designing robust controllers that are able to deliver satisfactory performance when contending with the uncertainty, noise and imprecision attributed to real-world environments.

However, the linguistic and numerical uncertainties associated with dynamic unstructured environments cause problems in determining the exact and precise antecedent and consequent membership functions during FLC design. Type-2 fuzzy logic is an extension of ordinary type-1 fuzzy logic in which the membership function is itself fuzzy rather than crisp. As shown in Figure 9, in a type-2 FLC the crisp inputs from the input sensors are first fuzzified into input type-2 fuzzy sets. The input type-2 fuzzy sets then activate the inference engine and the rule base to produce output type-2 fuzzy sets. The type-2 FLC rule base is the same as that of a type-1 FLC (i.e. a set of IF-THEN rules); the only difference is that in type-2 rule bases the antecedents and/or the consequents are represented by type-2 fuzzy sets. The inference engine combines the fired rules and gives a mapping from input type-2 fuzzy sets to output type-2 fuzzy sets. The type-2 fuzzy outputs of the inference engine are then processed by the type-reducer, which combines the output sets and performs a centroid calculation that leads to type-1 fuzzy sets called the type-reduced sets. The type-reduced sets are then defuzzified to produce crisp output values.
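The following sketch walks through that pipeline for a single-input, single-output interval type-2 FLC, using Gaussian membership functions with uncertain mean as the footprint of uncertainty, crisp rule consequents, and a simplified type reduction in which the weighted means obtained from the lower and upper firing strengths are averaged. The membership parameters and rules are invented for illustration; a full implementation would use a complete type-reduction procedure such as Karnik-Mendel, as described above.

```python
# Minimal single-input interval type-2 FLC sketch with crisp rule consequents.
# Membership parameters, rules and the simplified type reduction (averaging the
# lower- and upper-firing weighted means) are illustrative assumptions; the
# iClass FLC as described uses full type reduction followed by defuzzification.
import math

def it2_gaussian(x, m1, m2, sigma):
    """Interval type-2 Gaussian set with uncertain mean in [m1, m2].
    Returns (lower, upper) membership grades for crisp input x."""
    g = lambda m: math.exp(-0.5 * ((x - m) / sigma) ** 2)
    if m1 <= x <= m2:
        upper = 1.0
    else:
        upper = g(m1) if x < m1 else g(m2)
    lower = min(g(m1), g(m2))
    return lower, upper

# Rules: IF temperature IS <label> THEN cooler_power IS <crisp %>.
RULES = [
    ({"m1": 16.0, "m2": 18.0, "sigma": 3.0}, 0.0),    # "cold"   -> cooler off
    ({"m1": 21.0, "m2": 23.0, "sigma": 3.0}, 40.0),   # "normal" -> medium
    ({"m1": 27.0, "m2": 29.0, "sigma": 3.0}, 90.0),   # "hot"    -> high
]

def it2_flc(temperature):
    """Fuzzify, fire the rules, type-reduce (simplified) and defuzzify."""
    lows, ups = [], []
    for mf, _ in RULES:
        lo, up = it2_gaussian(temperature, **mf)
        lows.append(lo)
        ups.append(up)
    ys = [y for _, y in RULES]
    y_low = sum(l * y for l, y in zip(lows, ys)) / max(sum(lows), 1e-12)
    y_up = sum(u * y for u, y in zip(ups, ys)) / max(sum(ups), 1e-12)
    return 0.5 * (y_low + y_up)          # crisp cooler power in percent

print(it2_flc(25.0))                      # warm room -> moderate cooling
```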

Our agent's operation can be divided into the following phases (as shown in Figure 10):

A. Building individual type-1 fuzzy profiles for input/output variables.
B. Building the type-2 model for input/output variables.
C. Monitoring the users' behavior.
D. Generating the type-2 FLC.
E. System control and adaptation.
F. Rule-base optimization.

In the following subsections, these phases are explained in some detail.



    Figure 9: Structure of a type-2 FLC

A. Building individual type-1 fuzzy profiles for input/output variables

The agent starts by modeling individual type-1 fuzzy profiles that encapsulate the preferences of individual users. These sets are acquired by two different methods. In the first method, the agent is set to automatically monitor the iClass users in the classroom for a certain period of time and extract their fuzzy profiles using techniques such as the fuzzy C-means (FCM) clustering technique [5]. The second method was intentionally designed to be a more manual process: the iClass users are asked to fill in a carefully crafted survey in which they provide only a few values for each fuzzy variable.

B. Building the type-2 model for input/output variables

In this phase the system aggregates the individual type-1 profiles to produce the type-2 fuzzy model for the input/output variables. The aggregated type-2 model characterizes the collective behavior of the class occupants, making use of type-2 fuzzy logic's capability of incorporating higher levels of uncertainty. It effectively models the uncertainties present in the environment, especially the inter-user uncertainties about the meanings of the input/output variables.
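One simple way to build such an interval type-2 set from the individual type-1 profiles is to take, at every point of the variable's range, the minimum and maximum of the users' type-1 membership grades as the lower and upper membership functions (the footprint of uncertainty). The sketch below illustrates this aggregation idea with triangular type-1 profiles; it is an assumption made for illustration and is not claimed to be the exact aggregation operator used in the iClass.

```python
# Illustrative aggregation of individual type-1 profiles into an interval
# type-2 set: per-point lower/upper bounds of the users' membership grades.
import numpy as np

def tri(x, a, b, c):
    """Triangular type-1 membership function with feet a, c and apex b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12),
                                 (c - x) / (c - b + 1e-12)), 0.0)

def aggregate_type2(x, user_profiles):
    """user_profiles: list of (a, b, c) triangles, one 'warm' set per user.
    Returns (lower_mf, upper_mf) sampled on the grid x."""
    grades = np.stack([tri(x, *p) for p in user_profiles])
    return grades.min(axis=0), grades.max(axis=0)

# Hypothetical 'warm' temperature profiles learned for three occupants.
profiles = [(20.0, 24.0, 28.0), (21.0, 25.0, 29.0), (19.5, 23.0, 27.0)]
x = np.linspace(15.0, 35.0, 201)
lower, upper = aggregate_type2(x, profiles)
i = int(np.argmin(np.abs(x - 24.0)))      # grid point nearest 24 degrees
print(lower[i], upper[i])                 # footprint of uncertainty at 24 degrees
```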

C. Monitoring the users' behavior

After building the type-2 models, the system starts to monitor the users' actions in the environment in order to incrementally build the system's fuzzy rule base. Based on the IAOFIS approach [3], whenever a user changes actuator settings, the system records a snapshot of the current inputs (sensor states) and the outputs (actuator states with the newly altered values of whichever actuators were adjusted by that user). The set of accumulated multi-input multi-output data pairs is then used to construct the rule base of the system's type-2 FLC.
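A sketch of this data-collection step is shown below: whenever the user adjusts an actuator, the current sensor readings and the full actuator state (with the newly altered values) are stored as one multi-input multi-output training pair. The record layout and field names are illustrative assumptions rather than the format recorded by the actual system.

```python
# Illustrative recording of input/output snapshots on user intervention.
# Field names are hypothetical; each snapshot is one training pair for the FLC.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Snapshot:
    sensors: Dict[str, float]     # inputs: sensor states at intervention time
    actuators: Dict[str, float]   # outputs: actuator states incl. new values

training_data: List[Snapshot] = []

def on_user_intervention(sensor_states, actuator_states, changed):
    """Record a snapshot whenever the user changes one or more actuators."""
    outputs = dict(actuator_states)
    outputs.update(changed)                    # apply the user's new settings
    training_data.append(Snapshot(dict(sensor_states), outputs))

# Example: the user dims the front lights while the room is bright and occupied.
on_user_intervention(
    {"inside_light": 640.0, "inside_temp": 26.0, "occupancy": 1.0},
    {"front_lights": 0.8, "rear_lights": 0.5, "front_blinds": 1.0},
    {"front_lights": 0.3},
)
print(training_data[0])
```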

D. Generating the type-2 FLC

The set of interval type-2 membership functions generated in phase 2 is now combined with the accumulated user input/output data to extract fuzzy rules defining the users' collective behavior. Having generated the interval type-2 membership functions in the previous stage and the fuzzy rules from the user data in the current phase, we have a type-2 FLC that models the users' behavior in the environment, which makes the system FLC ready to operate the iClass on behalf of its occupants.

    Figure 10: Phases of operation of the proposed system

E. Agent control and online adaptation

Once the system FLC rule base is ready, the system can take control of the environment. The system FLC regularly reads the sensor values and fuzzifies them into type-2 fuzzy sets. It then uses the rule base to perform inference on the input sets and produce the type-2 output fuzzy sets representing the decisions taken on behalf of the users, which reflect their learnt behavior. These type-2 sets are then type-reduced to produce type-1 fuzzy sets, which are then defuzzified into crisp values used to drive the different actuators in the classroom.

The system not only controls the environment, reproducing the users' behaviors, but also has adaptation capability. There are two types of adaptation that the system can perform (a sketch of the short-term case follows the list below):

1. Short-term online adaptation: whenever a user intervenes by actuating one or more of the classroom actuators to override a control action taken by the system, the system records these interventions and updates the rule base accordingly online.

2. Long-term adaptation: as changes in the users' behavior or in the operating conditions accumulate, the amount of uncertainty that the system has to model becomes large enough to degrade the system's performance. The system then transitions to long-term adaptation by jumping back to phase 3, where the users are monitored again to rebuild the FLC rule base so that it more accurately reflects their preferences.
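The sketch below illustrates the short-term case under simple assumptions: rules are stored as a mapping from an antecedent label tuple to consequent values, the rule fired in the current situation is identified, and its consequent is overwritten (or a new rule added) with the settings the user just chose. The rule representation and the "replace the consequent" policy are illustrative simplifications of the online adaptation described above.

```python
# Illustrative short-term online adaptation: when the user overrides the
# controller, the rule matching the current situation gets the user's
# settings as its new consequent (or a new rule is added if none exists).
from typing import Dict, Tuple

RuleBase = Dict[Tuple[str, ...], Dict[str, float]]

def adapt_online(rule_base: RuleBase,
                 fired_antecedent: Tuple[str, ...],
                 user_settings: Dict[str, float]) -> None:
    """Update (or create) the rule for the currently fired antecedent labels."""
    consequent = rule_base.setdefault(fired_antecedent, {})
    consequent.update(user_settings)          # override with the user's choice

rules: RuleBase = {("evening", "dark", "occupied"): {"front_lights": 0.9}}
# The user dims the lights although the controller set them high:
adapt_online(rules, ("evening", "dark", "occupied"), {"front_lights": 0.4})
print(rules)
```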

F. Rule-base optimization

The explosion of the rule-base size is a major problem in rule-based systems that arises from redundancy in the rules. In this phase of operation, the system optimizes the rule-base size by tackling both attribute redundancy and rule redundancy. In most of the optimization experiments, the rule base to be optimized had nine input variables: time of day, inside light, outside light, inside temperature, outside temperature, humidity, wind speed, wind direction and occupancy.


Figure 11 plots the number of attributes eliminated due to insignificance versus the number of rules in the rule base. At a rule-base size of 1350, the percentage of attributes eliminated was 44.4%, which is nearly half of the antecedent attributes of the rule base. The optimization phase thus not only helps to reduce the size of the rule base and enhance the overall performance; it also helps to extract the most significant attributes of the users' behavior. The elimination of irrelevant attributes leads to a substantial reduction in the size of the rule base: after discarding the irrelevant attributes (i.e. decreasing the FLC input dimensionality), duplicate rules in the rule base are eliminated and the size of the rule base shrinks significantly.

    Figure 11: The number of rules vs the number of eliminated attributes

To appreciate the reduction in rule-base size due to irrelevant attribute elimination, it suffices to say that eliminating 4 attributes from the input set of our system resulted in a 99.65% reduction in size.
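The sketch below illustrates why dropping attributes shrinks the rule base so dramatically: once an insignificant antecedent attribute is removed from every rule, rules that differed only in that attribute become duplicates and collapse into one. The rule representation and the choice of which attributes to drop are illustrative assumptions; the criterion used by the actual system to rank attribute significance is not reproduced here.

```python
# Illustrative rule-base reduction: drop insignificant antecedent attributes,
# then merge rules that have become identical.
from typing import Dict, List, Tuple

Rule = Tuple[Dict[str, str], Dict[str, str]]   # (antecedents, consequents)

def optimize(rules: List[Rule], drop: List[str]) -> List[Rule]:
    """Remove the given antecedent attributes and eliminate duplicate rules."""
    seen = {}
    for antecedents, consequents in rules:
        reduced = {k: v for k, v in antecedents.items() if k not in drop}
        key = tuple(sorted(reduced.items()))
        seen.setdefault(key, (reduced, consequents))   # keep first occurrence
    return list(seen.values())

rules = [
    ({"time": "evening", "inside_light": "dark", "wind": "low"},
     {"front_lights": "high"}),
    ({"time": "evening", "inside_light": "dark", "wind": "high"},
     {"front_lights": "high"}),
]
print(len(optimize(rules, drop=["wind"])))   # the 2 rules collapse into 1
```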

G. Intelligent Agent Evaluation

To evaluate our system's performance, we ran the system controlling the environment for 48 hours with 4 users and recorded the number of rule-base updates, which measures the users' satisfaction with the system. The system operated on 6 input variables: time of day, inside light, outside light, inside temperature, outside temperature and occupancy. It controlled 4 output type-2 fuzzy variables: front window blinds, rear window blinds, front dimmable lights and rear dimmable lights. Figure 12 shows the cumulative number of rule-base updates due to user dissatisfaction with the system's behavior or due to encountering new points on the control surface that had not been covered during the monitoring phase. Rule-base updates were recorded every three hours. Figure 12 clearly suggests growing user satisfaction with the system, which settles acceptably to a stable level where only a few rule updates are required now and then due to uncovered points on the control surface or an occasional change in the users' behavior.

Figure 12: The cumulative number of rule-base updates (adds or modifications) vs the operation time.

VI. CONCLUSION

In this paper, we introduced the architecture of our intelligent classroom (iClass) in terms of hardware and software. In addition, we explained three main components of the iClass: the RFID, speech interaction and user behavior components. Fuzzy logic is utilized in these main components, where a novel type-2 fuzzy approach is proposed and implemented to capture the iClass users' behaviors. The type-2 fuzzy approach is also used to control the different iClass actuators according to the iClass occupants. Through a set of experiments, the results demonstrated the efficiency of our design as well as of the techniques and algorithms used.

REFERENCES

[1] AutoAuditorium, http://www.autoauditorium.com/
[2] F. Doctor, H. Hagras and V. Callaghan, "A fuzzy embedded agent-based approach for realizing ambient intelligence in intelligent inhabited environments," IEEE Transactions on Systems, Man, and Cybernetics, Part A, vol. 35, no. 1, pp. 55-65, 2005.
[3] G. Cruz and R. Hill, "Capturing and Playing Multimedia Events with STREAMS," in Proceedings of ACM Multimedia '94 (October 15-20, San Francisco, CA), ACM/SIGMM, pp. 193-200, 1994.
[4] S. Garfinkel and B. Rosenberg, RFID: Applications, Security, and Privacy, ISBN: 0321290968, 2005.
[5] H. Hagras, "Type-2 FLCs: A new generation of fuzzy controllers," IEEE Computational Intelligence Magazine, vol. 2, no. 1, pp. 30-43, 2007.
[6] H. Hagras, V. Callaghan, M. Colley, G. Clarke, A. Pounds-Cornish and H. Duman, "Creating an Intelligent Environment using embedded agents," IEEE Intelligent Systems, pp. 12-19, 2004.
[7] Berkeley Multimedia Research Center Lecture Browser, http://bmrc.berkeley.edu/frame/projects/lb/
[8] i.LON SmartServer, Echelon Corporation, http://www.echelon.com/Products/cis/smartserver/default.htm, 2009.
[9] J. Cooperstock, S. Fels, W. Buxton and K. Smith, "Reactive Environments: Throwing Away Your Keyboard and Mouse," Communications of the ACM, vol. 40, no. 9, September 1997.
[10] M. Stern, J. Steinberg, H. Imm, J. Padhye and J. Kurose, "MANIC: Multimedia Asynchronous Networked Individualized Courseware," in Proceedings of Educational Multimedia and Hypermedia, 1997.
[11] S. D'Mello, S. Craig, B. Gholson, S. Franklin, R. Picard and A. Graesser, "Integrating Affect Sensors in an Intelligent Tutoring System," in Affective Interactions: The Computer in the Affective Loop Workshop at the International Conference on Intelligent User Interfaces, pp. 7-13, New York: ACM Press, 2005.
[12] T. Zhang, M. Hasegawa-Johnson and S. E. Levinson, "Children's Emotion Recognition in an Intelligent Tutoring Scenario," Interspeech, 2004.
[13] W. Walker, P. Lamere, P. Kwok, B. Raj, R. Singh, E. Gouvea, P. Wolf and J. Woelfel, "Sphinx-4: A Flexible Open Source Framework for Speech Recognition," Sun Microsystems Technical Report TR-2004-139, 2004.
[14] X. Huang, A. Acero and H.-W. Hon, Spoken Language Processing: A Guide to Theory, Algorithm and System Development, Prentice Hall, ISBN-13: 978-0130226167, 2001.