
MIDDLEWARE FOR UBIQUITOUS ACCESS TO MATHEMATICAL EXPRESSIONS FOR VISUALLY-IMPAIRED USERS

Ali Awde 1, Manolo Dulva Hina 1,2, Chakib Tadj 1
1 Université du Québec, École de technologie supérieure, Montréal, Québec, Canada
2 PRISM Laboratory, Université de Versailles-Saint-Quentin-en-Yvelines, France
[email protected], [email protected], [email protected]

Bellik
LIMSI-CNRS, Université de Paris-Sud, France
[email protected]

Amar Ramdane-Cherif 2,3
3 LISV Laboratory, Université de Versailles-Saint-Quentin-en-Yvelines, France
[email protected]

ABSTRACT
Providing visually-impaired users with ubiquitous access to mathematical expressions is a challenge. Our investigation indicates that most, if not all, of the current state-of-the-art systems and solutions for presenting mathematical expressions to visually-impaired users are not pervasive, that they do not take the user's interaction context (i.e. the combined contexts of the user, his environment and his computing system) into account in their configuration, and that they present mathematical expressions in only one format. We address these weaknesses with a middleware that provides ubiquitous access to mathematical expressions. Our middleware gathers various suppliers, of which one is selected based on its suitability to the given instance of interaction context and on the user's preferences. The configuration of the chosen supplier, including its quality of service (QoS) dimensions, is also adapted to the user's preferences. This paper discusses the challenges in designing this middleware and proposes a solution to each of them; it presents the concepts and principles applied in the middleware design as well as their validation through case studies and formal specification. This work is intended to contribute to the ongoing research on making informatics accessible to handicapped users, specifically by providing them ubiquitous access to mathematical expressions.

Keywords: pervasive computing, visually-impaired users, mathematical expression presentation, multimodal computing, adaptive system.

    1 INTRODUCTION

In pervasive computing [1], the user can continue working on his computing task whenever and wherever he wishes. The task is usually realized through the use of one or more software applications. If the task is ubiquitous access to mathematical expressions for visually-impaired users, then the applications are those that correctly present mathematical expressions to these users with special needs. Applications that can do this task already exist. Hence, instead of inventing another one, it is more practical to collect these applications and put them into one middleware. The middleware then selects an application, henceforth called a supplier, based on its suitability to the given context. The quality-of-service (QoS) dimensions of the chosen supplier are also configured based on the user's preferences.

To realize the ubiquity of documents containing mathematical expressions, the document is stored on a server and a copy is replicated on every member of the server group, making it accessible to the user whenever and wherever he wishes. The mathematical document serves as the input file to the middleware.

This research work results from an investigation of the current state-of-the-art systems and solutions, in which we found that the available systems for presenting mathematical expressions to visually-impaired users are not pervasive, that most of them do not take the current interaction context into account in the system's configuration, and that the prevailing systems provide only one presentation format for mathematical expressions. As such, they are not multimodal. Our work addresses these weaknesses.

This paper presents the concepts and principles used in the design of this middleware. Validation is done through case studies and formal specifications. The proposed middleware is functional under the following assumptions: (i) wired and wireless communication facilities are available to support our system, (ii) the mathematical expression given as input to the middleware is stored in a MathML [2] or LaTeX [3] input file, and (iii) application suppliers and media devices are available to support the optimal modalities selected by the middleware.

Apart from this introductory section, the rest of this paper is structured as follows. Section 2 provides a brief review of the state of the art related to our work; Section 3 lists the challenges, proposed solutions and contributions of our work. Section 4 presents the infrastructure and system architecture of our middleware. In Section 5, we explain the machine learning concepts that allow the system design to adapt seamlessly to the given interaction context. Section 6 presents our methodology for configuring an application supplier and optimizing its settings, taking the user's satisfaction into account. Section 7 discusses the interaction contexts, modalities and media devices that characterize a typical scenario of a blind user accessing mathematical expressions. Through scenario simulations in Section 8, we show the design specification of our infrastructure. Future work and the conclusion are presented in Section 9.

    2 REVIEW OF THE STATE OF THE ART

To a visually-impaired user, the mere understanding of a mathematical expression is already a challenge. It requires repeated passes over an expression, where the user often skips secondary information only to revert back to it again and again until he fully grasps the meaning of the expression. Such a complicated task is explained in [4]. Fortunately, some applications were developed to lessen the complexity of performing this task, among them MathTalk [5], Maths [6] and AudioMath [7, 8]. MathTalk and Maths convert a standard algebraic expression into audio information. In Maths, the user can read, write and manipulate mathematics using a multimedia interface combining speech, Braille and audio. VICKIE (Visually Impaired Children Kit for Inclusive Education) [9] and BraMaNet (Braille Mathématique sur Internet) [10] are transcription tools that convert a mathematical document (in LaTeX, MathML, HTML, etc.) into its French Braille representation. Labradoor (LaTeX-to-Braille-Door) [11] converts an expression in LaTeX into its Braille equivalent using the Marburg code. MAVIS (Mathematics Accessible To Visually Impaired Students) [12] supports LaTeX-to-Braille translation via the Nemeth code.

As background, special Braille notations have been developed for mathematics, with different countries adopting different Braille code notations. Among these are the Nemeth Math code [13], used in the USA, Canada, New Zealand, Greece and India; the Marburg code [14], used in Germany and Austria; and the French Math code [15].

Lambda [16] translates an expression in MathML format into the multiple Braille Math codes used in the European Union. ASTER (Audio System for Technical Readings) [17] reformats electronic documents written in LaTeX into their corresponding audio equivalent. AudioMath provides vocal presentation of mathematical content encoded in MathML format.

Of all the tools cited above, none is pervasive and none takes the user's interaction context into account in its configuration. Our approach, therefore, is to take the strength of each tool and integrate them into our work in order to build a middleware that (1) broadens the limits of utilization, (2) provides the user with the opportunity to access mathematical expressions written in either MathML or LaTeX format, and (3) selects the appropriate application supplier and its configuration depending on the given instance of interaction context.

In [18], the use of multimodal interaction for a non-visual application was demonstrated. The multimodal system selects one modality over another after determining its suitability to a given situation. Multimodality is important to visually-impaired users because it provides them equal opportunities to use informatics like everybody else.

Recently, agents and multi-agent systems (MAS) [19, 20] have been widely used in many applications, such as large, open, complex, mission-critical systems like air traffic control [21]. Generally, agency is preferred over traditional techniques (i.e. functional or object-oriented programming) because the latter are inadequate for developing tools that react to environment events. Some works on MAS for visually-impaired users include [18, 22]. In contrast to those works, ours focuses on ubiquitous access to mathematical expressions.

Incremental machine learning (IML) is the progressive acquisition of knowledge. Various IML algorithms exist in the literature, but in this work supervised learning is adopted because a limited number of scenarios is considered. Supervised learning is a ML method in which the learning process produces a function that maps inputs to certain desired outputs. More details of the ML design are given in Section 5.

Our focus has always been pervasive multimodality for the blind. This work was initially inspired by [23]. As our work evolved, however, the domain of application and its corresponding optimization model became completely different, as this paper reflects our intended user. The methodology is also different; this work adopts machine learning (ML) to acquire knowledge. Such knowledge is stored in the knowledge database (KD) so that it can be made omnipresent, accessible anytime and anywhere via wired or wireless networks.

A major challenge in designing systems for blind users is how to give them autonomy. To this end, several tools and gadgets have emerged, among them GPS (global positioning system) devices, a walking stick that detects the user's context [24] and a talking Braille [25]. Our work aims at the same goal. Ours is adaptive to the user's condition and environment. Through pervasive computing networks, the ML-acquired knowledge and the user's task and profile all become omnipresent, and the system's knowledge of the feasible configuration for the user's task is generated without any human intervention.

3 CHALLENGES, PROPOSED SOLUTIONS AND CONTRIBUTIONS

In this section, we define the requirements for designing a pervasive system that presents mathematical expressions to visually-impaired users by posing specific technical challenges to solve and then describing our approach to addressing them.

Contribution 1: Design of a scheme in which a mathematical document can be made accessible anytime and anywhere to visually-impaired users.

Related Questions: Given that the computing task is to provide ubiquitous access to a mathematical document, how can this document be made accessible whenever and wherever the user wishes, given that the user can be either stationary or mobile? Also, given that any computing machine or server may fail, what configuration must be established in order to ensure the ubiquity of such a document?

Proposed Solutions: A mathematical expression becomes accessible if it is stored as a MathML or LaTeX document. This document, along with the user's data and profile, becomes available for ubiquitous computing if it is stored on a server. The content of this server becomes omnipresent if it is replicated to the other members of the server group. Hence, the server group renders the user's document ubiquitous, available anytime and anywhere. The user connects to a member of the server group upon login. This server, the one closest to the user's location, becomes the point of access for the user's profile, data and mathematical document.

Contribution 2: Conceptualization of an adaptive system that selects an optimal application supplier, one that provides visually-impaired users access to mathematical expressions, based on the given instance of interaction context.

Related Questions: How do we associate a mathematical document with an optimal application supplier? What would be the basis of such an association? If this optimal supplier happens to be unavailable, how will the system find a relevant replacement?

Proposed Solutions: We incorporate a machine learning mechanism wherein the system remembers scenarios, i.e. the input conditions and the resulting output conditions of each scenario. This learning mechanism assists the system in selecting the optimal supplier for the given scenario. Different suppliers are apt for different scenarios. Part of this learning mechanism is the ranking of application suppliers; the ranking takes the user's preferences into account. In general, it is possible that in a given scenario two or more suppliers are found suitable for invocation, with the top-ranked supplier being activated by default. When the chosen supplier is not available, the next-ranked supplier is taken as its replacement, as sketched below.
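A minimal sketch of this ranked fallback, assuming hypothetical availability data and the supplier names used later in this paper:

```python
# Sketch of the ranked-supplier fallback described above.
# Supplier names and the availability set are illustrative, not the system's actual data.
from typing import Optional

def select_supplier(ranked_suppliers: list[str],
                    available: set[str]) -> Optional[str]:
    """Return the highest-ranked supplier that is currently available."""
    for supplier in ranked_suppliers:   # ranked_suppliers is ordered by user preference
        if supplier in available:
            return supplier
    return None                         # no suitable supplier can be invoked

# Example: the top-ranked supplier is unavailable, so the next-ranked one is chosen.
ranking = ["MAVIS", "LaBraDoor", "VICKIE"]
print(select_supplier(ranking, available={"LaBraDoor", "VICKIE"}))  # -> LaBraDoor
```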

Contribution 3: Design of a system that is tolerant to faults due to failures of servers and media devices.

Related Questions: In case of server failure, what must be done to guarantee that the current user's data, profile and machine knowledge are not lost? If selected media devices are malfunctioning, what must be done to keep the system operational and persistent?

Proposed Solutions: Like any pervasive system, ours assumes that there are many members of the server group to which the user can connect. While the server to which the user is connected is down, the user may continue working, and his data, profile and machine knowledge are all stored in the local cache. As soon as the server is up, it starts communicating with the user's terminal, and the server's copy of the ubiquitous information is updated. Afterwards, the server's ubiquitous information is sent to all members of the server group. Our system also adopts a ranking scheme for the media devices' suitability to a given modality. When a highly-ranked media device is not available or has failed, a lower-ranked media device is taken as its replacement. If no replacement is found, then the system re-evaluates the current instance of interaction context and accordingly chooses the optimal supplier and modalities. Afterwards, the system activates the available media device(s) that support the chosen modalities.

Contribution 4: Design of a system that provides a configuration suitable to the user's satisfaction.

Related Questions: How do we represent and quantify the user's satisfaction? What parameters are used to measure the user's satisfaction with regard to system configuration?

Proposed Solutions: We use a utility measure in [0, 1] to denote user satisfaction, in which 0 means the condition is inappropriate while 1 means the user is happy with the condition. In general, the closer the configuration setting is to the user's pre-defined preferences, the higher the user satisfaction. In this work, we consider the application supplier and its QoS dimensions as the parameters for system configuration.


4 INFRASTRUCTURE AND SYSTEM ARCHITECTURE

Here, we present the architectural framework of our pervasive multimodal multimedia computing system that provides ubiquitous access to mathematical expressions for blind users.

4.1 System Architecture

Figure 1 shows the layered view of our pervasive multimodal multimedia computing system. It is a multi-agent system organized into layers. Layering is a method by which communication takes place only between adjacent layers. It limits the undesirable ripple-effect propagation of errors to within the boundary of the layer involved. The various system layers and their functionalities are:

Figure 1: The architectural abstraction of a generic MM computing system for visually-impaired users.

(1) The Presentation Layer: Here, the mathematical expression is presented to the user via the optimal presentation format. This layer is responsible for converting the mathematical expression's encoded format (from the Data Analysis Layer) into its final presentation format.
(2) The Data Access Layer: Here, users may edit or search for a term in a mathematical expression.
(3) The Data Analysis Layer: Here, the mathematical expression's presentation format is selected based on the available media devices and supplier, and on the interaction context. An agent takes in a MathML or LaTeX expression and converts it to its encoded format. Furthermore, its machine learning component selects the optimal presentation format of the expression based on the parameters just cited.
(4) The Control and Monitoring Layer: This layer controls the entire system, coordinating the detection of the user's interaction context, and the manipulation and presentation of the mathematical expression. In this layer, an agent retrieves the MathML or LaTeX document and the user's profile.
(5) The Context Gathering Layer: Here, the current interaction context is detected, as are the user's media device and application supplier preferences. The layer's agents detect the available media devices and the contexts of the user's environment and of the computing system.
(6) The Physical Layer: It contains all the physical entities of the system, including media devices and sensors. The raw data from this layer are sampled and interpreted and form the current instance of interaction context.

4.2 The Ubiquitous Mathematical Document

We define a task as the user's work to do, whose accomplishment requires the use of a computing system. Hence, in this paper, the user's task is to access (i.e. read, write, edit) a mathematical expression in a document in a ubiquitous fashion.

4.2.1 Creating a mathematical document

The process of creating a ubiquitous mathematical document is as follows: (i) using Braille or speech as input media, the user writes a mathematical equation using the syntax of MathML or LaTeX; (ii) a supplier (selected by the middleware as the most appropriate for the current interaction context) takes in this input equation and yields the corresponding output via speaker or Braille (note that on a Braille terminal, there is a separate section where the user can touch/sense the output); (iii) the supplier saves the document in the local cache; and (iv) our middleware saves the document onto the server, from which it is propagated to the other members of the server group. Figure 2 shows a specimen fraction (in bi-dimensional form) and its equivalent representations in MathML, LaTeX and Braille.
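As an illustration of these encodings, the snippet below shows standard LaTeX and MathML markup for a simple fraction; the particular expression, 1/(x+1), is chosen for illustration and is not necessarily the one shown in Figure 2.

```python
# Illustrative encodings of a simple fraction; the expression 1/(x+1) is an assumption.
latex_fraction = r"\frac{1}{x+1}"

mathml_fraction = """
<math xmlns="http://www.w3.org/1998/Math/MathML">
  <mfrac>
    <mn>1</mn>
    <mrow><mi>x</mi><mo>+</mo><mn>1</mn></mrow>
  </mfrac>
</math>
"""

print(latex_fraction)
print(mathml_fraction)
```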

Figure 2: A fraction in bi-dimensional form and its corresponding equivalents in LaTeX, MathML and Braille.

4.2.2 Reading and presenting the mathematical expression in a document

A ubiquitous mathematical document becomes readily available to the user once a connection is made between the user's terminal and a member of the server group. In general, the process for reading or presenting a mathematical document to the user is as follows: (i) our middleware retrieves the mathematical document from the server and downloads it to the user's cache/local terminal; (ii) the middleware selects a supplier based on the given interaction context; (iii) the middleware reads or presents the mathematical equation to the visually-impaired user via speech or Braille.

4.2.3 Modifying a mathematical document

At any time, the user may delete or modify a mathematical equation within a document. Once a mathematical document is identified, our middleware opens the file and our Data Access Layer (see Section 4.1) allows the user to search for a term within the equation and edit it. Once editing is done, the document is saved accordingly.

4.3 Anytime, Anywhere Mathematical Computing

As shown in Figure 3, computing with mathematical equations is possible for visually-impaired users anytime and anywhere the user wishes. In the diagram, the user is assumed to be working at home using our middleware.

Figure 3: Anytime, anywhere adaptive computing with a mathematical document for visually-impaired users.

The system detects the user's interaction context (i.e. obtains the values of the parameters that make up the contexts of the user, his environment and his computing system; more details in Section 7), detects the currently available media devices, selects the most suitable application supplier, and eventually accesses the user's mathematical document. The user then reads or modifies the mathematical equations in this document or writes a new one. At the end of the user's session, his mathematical document is saved onto the server and on all other members of the server group.

The user may continue working on the same mathematical document in, say, a park. In this new setting, the middleware detects the user's interaction context and the available media devices. The system then provides an application supplier that is likely different from the one the user used at home. Although the setting is different, the point is that the user is still able to continue working on an interrupted task.

5 DESIGNING AN INTERACTION CONTEXT ADAPTIVE SYSTEM

Here, we elaborate on the system's knowledge acquisition, which makes it adaptive to the given interaction context.

5.1 Theoretical Machine Learning

In machine learning (ML), a program is said to have learned from experience E with respect to a class of tasks T and a performance measure P if its performance P at task T improves with experience E [26]. In this work, the learning problem is defined as follows: (i) task T: selecting the modalities (and later the media devices) that are appropriate to the given instance of interaction context; (ii) performance P: a measure of the selected modalities' suitability to the given interaction context, as given by their suitability score; (iii) training experience E: various combinations of possible modalities for visually-impaired users.

Figure 4 shows the functionality of a generic ML-based system. In general, a scenario is an event that needs to be acted upon appropriately. The input to the ML component is the pre-condition of a scenario. Here, the pre-condition of a scenario is a specific instance of an interaction context. The ML component analyzes the input, performs calculations and decisions, and yields an output called the post-condition of the scenario. For knowledge acquisition purposes, the pre- and post-conditions of every scenario are stored in a databank called the scenario repository (SR). Each entry in the SR forms a distinct scenario.

Figure 4: A generic ML system: an instance of interaction context serves as input to a ML component, yielding a corresponding post-condition result.

When the ML component is given a situation (i.e. the pre-condition of a scenario), it consults the SR for a similar entry. If a match is found, then it simply retrieves the post-condition of that scenario and implements it. If no match is found (i.e. the scenario is new), then the ML component, using its acquired knowledge, performs calculations that produce the post-condition of the scenario. If this is accepted by the expert (i.e. the user), then the complete scenario is stored in the SR as a new entry. This process is performed whenever a new scenario is encountered. Over time, the ML component accumulates enough knowledge that it knows the corresponding reaction (i.e. post-condition) for a given situation (i.e. pre-condition). A sketch of this lookup is given below.
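A minimal sketch of this lookup, with hypothetical names (handle_scenario, ml_compute, expert_accepts) and an illustrative repository keyed by pre-condition tuples:

```python
# Sketch of the scenario-repository lookup described above.
# A pre-condition is a tuple of interaction-context attributes; the post-condition
# is whatever the ML component decides (e.g. a chosen modality set).
def handle_scenario(pre_condition: tuple, sr: dict, ml_compute, expert_accepts):
    """Reuse a stored post-condition if the scenario is known; otherwise compute one,
    ask the expert (the user) to validate it, and store it as new knowledge."""
    if pre_condition in sr:                       # known scenario: reuse stored reaction
        return sr[pre_condition]
    post_condition = ml_compute(pre_condition)    # new scenario: derive a post-condition
    if expert_accepts(pre_condition, post_condition):
        sr[pre_condition] = post_condition        # incremental knowledge acquisition
    return post_condition

# Illustrative use with stand-in callables.
sr = {}
result = handle_scenario(("noisy", "Laptop"), sr,
                         ml_compute=lambda pre: {"modality": "M6"},
                         expert_accepts=lambda pre, post: True)
print(result, sr)
```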

Incremental machine learning (IML) is the progressive acquisition of knowledge. Various IML algorithms exist in the literature, such as candidate elimination [26] and ID5 (the iterative dichotomizer version 5, an incremental implementation of ID3, which is itself a decision-tree induction algorithm developed by Quinlan) [28]. In this work, supervised learning is adopted because a limited number of scenarios is considered. Supervised learning is a ML method in which the learning process produces a function that maps inputs to certain desired outputs. Let there be a set of inputs X with n components and a set of outputs Y with m components. Let f be the function to be learned, which maps some elements of X to elements of Y, and let h be the hypothesis about this function. Furthermore, let X be the set of interaction context instances, i.e. the set of pre-condition scenarios. Hence, Y is the set of post-condition scenarios, and the mapping between the pre- and post-condition scenarios is denoted by f: X → Y. See Figure 5.

Figure 5: The relation between the pre-condition scenarios and the post-condition scenarios using supervised learning.

As shown in the diagram, Xi is an element of X, with i ∈ 1…n, and Yj is an element of Y, with j ∈ 1…m. When a new pre-condition Xnew occurs, the learning system finds the best possible output value Ybest using the hypothesis. In general, the learner compares the results of the hypothesis functions h(f(Xnew) = Y1), h(f(Xnew) = Y2), …, h(f(Xnew) = Ym) and selects the one that yields the best score. The new case constitutes newly-acquired knowledge, which is later added to the knowledge repository. This learning process is called incremental machine learning.

5.2 Basic Principles of Interaction Context

In this paper, the following logic symbols are used: × denotes the Cartesian product, yielding a set composed of tuples; ℕ1 denotes the set of positive integers excluding zero; ∀ is the universal quantifier and ∃ the existential quantifier; ∧ (AND) and ∨ (OR) are the basic logical connectives; the interval [x, y] denotes that a valid datum is in the range of x to y, and (a, b] denotes that a valid datum is higher than a and up to a maximum of b.

The interaction context, IC = {IC1, IC2, …, ICmax}, is the set of all possible interaction contexts. At any given time, a user has a specific interaction context i, denoted ICi, 1 ≤ i ≤ max. Formally, an interaction context is a tuple composed of a specific user context (UC), environment context (EC) and system context (SC). An instance of IC may be written as:

$IC_i = UC_k \times EC_l \times SC_m$    (1)

where 1 ≤ k ≤ maxk, 1 ≤ l ≤ maxl, and 1 ≤ m ≤ maxm, with maxk = the maximum number of possible user contexts, maxl = the maximum number of possible environment contexts, and maxm = the maximum number of possible system contexts. The Cartesian product means that at any given time, IC yields a specific combination of UC, EC and SC.

The user context UC is made up of parameters that describe the state of the user during the conduct of an activity. A specific user context k is given by:

$UC_k = \bigotimes_{x=1}^{x\max_k} ICParam_{kx}$    (2)

where $ICParam_{kx}$ is parameter x of $UC_k$ and $x\max_k$ is the number of $UC_k$ parameters. Similar in form to UC, any environment context $EC_l$ and system context $SC_m$ are given as follows:

$EC_l = \bigotimes_{y=1}^{y\max_l} ICParam_{ly}$    (3)

$SC_m = \bigotimes_{z=1}^{z\max_m} ICParam_{mz}$    (4)
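The tuple structure of Eqs. (1)-(4) can be pictured with a small sketch; the Python types and field names below are illustrative (they mirror the attributes A1 to A7 used later in Section 8.1) and are not part of the formal model.

```python
# Illustrative representation of an interaction context instance, IC_i = UC x EC x SC.
from dataclasses import dataclass

@dataclass
class UserContext:
    manually_disabled: bool
    mute: bool
    deaf: bool
    familiar_with_braille: bool

@dataclass
class EnvironmentContext:
    noise_level: str        # "quiet" or "noisy"
    restriction: str        # "silence required" or "silence optional"

@dataclass
class SystemContext:
    computing_device: str   # e.g. "PC", "Laptop", "PDA", "Cellphone"

@dataclass
class InteractionContext:   # one tuple (UC_k, EC_l, SC_m)
    uc: UserContext
    ec: EnvironmentContext
    sc: SystemContext

# Sample instance matching the Section 8.1 scenario (blind user, classroom, laptop).
ic = InteractionContext(UserContext(False, False, False, True),
                        EnvironmentContext("noisy", "silence required"),
                        SystemContext("Laptop"))
print(ic)
```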

The first knowledge the ML component must learn is how to relate the interaction context to an appropriate modality. In general, modality is possible if there exists at least one modality for data input and at least one modality for data output. Given a modality set M = {Vin, Tin, Vout, Tout}, wherein Vin = vocal input, Vout = vocal output, Tin = tactile input and Tout = tactile output, modality is possible under the following condition:

$\text{Modality} = \text{Possible} \Leftrightarrow (V_{in} \lor T_{in}) \land (V_{out} \lor T_{out})$    (5)


The failure of modality, therefore, can be specified by the relationship:

$\text{Modality} = \text{Failure} \Leftrightarrow ((V_{in} = \text{Failed}) \land (T_{in} = \text{Failed})) \lor ((V_{out} = \text{Failed}) \land (T_{out} = \text{Failed}))$    (6)

Let $M_j$ be an element of the power set of M, that is, $M_j \in \mathcal{P}(M)$, where $1 \le j \le mod\_max$ (the maximum number of modalities). Also, let $M^*$ be the most suitable $M_j$ for a given interaction context $IC_i$. As stated, X is the set of pre-condition scenarios. Hence, the relationship between X and IC may be written as $X_i = IC_i$. For simplicity, we let the pre-condition set X be represented by the interaction context IC. Each interaction context i, denoted $IC_i$, is composed of various attributes with n components, that is, $IC_i = (A_1, A_2, \ldots, A_n)$, where an attribute may be a parameter belonging to UC, EC or SC. We also let the set of post-conditions Y be represented by the set of modalities M. The function f maps the set of IC to the set of M, and h calculates the suitability score of such a mapping.


Figure 6: Media selections to support a modality.

Throughout this paper, the following acronyms are used to denote media devices: MIC = microphone, KB = keyboard, OKB = overlay keyboard, SPK = speaker, HST = headset, BRT = Braille terminal, TPR = tactile printer. The availability of supporting media devices is important if a modality is to be implemented. The following is a sample set of elements of f1 for visually-impaired users:

f1 = {(Vin, (MIC, 1)), (Vin, (Speech Recognition, 1)), (Vout, (SPK, 1)), (Vout, (HST, 2)), (Vout, (Text-to-Speech, 1)), (Tin, (KB, 1)), (Tin, (OKB, 2)), (Tout, (BRT, 1)), (Tout, (TPR, 2))}

Note that although media technically refers to hardware components, a few software elements are included in the list, as the vocal input modality would not be possible without speech recognition software and the vocal output modality cannot be realized without text-to-speech translation software. From f1, we can obtain the relationships needed to implement multimodality:

f1(Vin) = (MIC, 1) ∧ (Speech Recognition, 1)
f1(Vout) = ((SPK, 1) ∨ (HST, 2)) ∧ (Text-to-Speech, 1)
f1(Tin) = (KB, 1) ∨ (OKB, 2)
f1(Tout) = (BRT, 1) ∨ (TPR, 2)

Therefore, the assertion of modality, as expressed

in Eq. (5), with respect to the presence of media devices, becomes:

$\text{Modality} = \text{Possible} \Leftrightarrow ((\text{MIC} \land \text{Speech Recognition}) \lor (\text{KB} \lor \text{OKB})) \land (((\text{SPK} \lor \text{HST}) \land \text{Text-to-Speech}) \lor (\text{BRT} \lor \text{TPR}))$    (15)
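A minimal sketch of this assertion, using the device acronyms above and an illustrative set of available media:

```python
# Sketch of Eq. (15): a modality assignment is workable only if at least one input
# and one output modality are backed by available media (or supporting software).
def vocal_input_ok(avail):    return "MIC" in avail and "Speech Recognition" in avail
def tactile_input_ok(avail):  return "KB" in avail or "OKB" in avail
def vocal_output_ok(avail):   return ("SPK" in avail or "HST" in avail) and "Text-to-Speech" in avail
def tactile_output_ok(avail): return "BRT" in avail or "TPR" in avail

def modality_possible(avail: set) -> bool:
    input_ok = vocal_input_ok(avail) or tactile_input_ok(avail)
    output_ok = vocal_output_ok(avail) or tactile_output_ok(avail)
    return input_ok and output_ok

print(modality_possible({"KB", "BRT"}))                   # True: tactile in and out
print(modality_possible({"MIC", "Speech Recognition"}))   # False: no output medium
```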

5.5 Machine Learning Training for Selection of Application Supplier

In supervised learning, there exists a set of data, called the training set, on which the learned knowledge is based. The function to be learned maps the training set to a preferred output. The user provides the result of the mapping. A successful mapping, called a positive example, is saved, while an incorrect mapping, called a negative example, is rejected. The positive examples are collected and form part of the system's knowledge. The acquired knowledge is then saved into the knowledge database (KD). Figure 7 shows the ML knowledge acquisition.

A mathematical document of type LaTeX (with filename extension .tex) or MathML (with extension .xml) is supported by several suppliers. Let there be a function f2 that maps a given mathematical document (of type MathML or LaTeX) and the available media device(s) to the user's preferred supplier and its priority ranking, as given by f2: Document, {media devices} → Preferred supplier, Priority. For demonstration purposes, let us assume that the user chooses his 3 preferred suppliers and ranks them by priority. The learned function is saved into the knowledge repository and is called the user supplier preference.

Given that Media Devices = {MIC, KB, OKB, SPK, HST, BRT, TPR}, Supplier = {MAVIS, LaBraDoor, ASTER, AudioMath, BraMaNet, Lambda} and Document = {.tex, .xml}, then the following are some possible mappings within f2:

f2 = {((.tex, {BRT}), (MAVIS, 1)), ((.tex, {BRT}), (LaBraDoor, 2)), ((.tex, {BRT, SPK|HST}), (ASTER, 1)), ((.tex, {BRT, SPK|HST}), (MAVIS, 2)), ((.tex, {BRT, SPK|HST}), (LaBraDoor, 3)), ((.tex, {SPK|HST}), (ASTER, 1)), ((.xml, {BRT}), (Lambda, 1)), ((.xml, {BRT}), (BraMaNet, 2)), ((.xml, {BRT, SPK|HST}), (AudioMath, 1)), ((.xml, {BRT, SPK|HST}), (Lambda, 2)), ((.xml, {BRT, SPK|HST}), (BraMaNet, 3)), etc.}
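A minimal executable rendering of a few of these f2 entries, assuming simplified availability sets; the table and data structure are illustrative, not the middleware's actual representation.

```python
# Sketch of the f2 mapping: (document type, available media) -> ranked suppliers.
F2 = {
    (".tex", frozenset({"BRT"})):        [("MAVIS", 1), ("LaBraDoor", 2)],
    (".xml", frozenset({"BRT"})):        [("Lambda", 1), ("BraMaNet", 2)],
    (".xml", frozenset({"BRT", "SPK"})): [("AudioMath", 1), ("Lambda", 2), ("BraMaNet", 3)],
}

def preferred_supplier(doc_type: str, media: frozenset):
    """Return the supplier with the smallest priority number for this situation."""
    candidates = F2.get((doc_type, media), [])
    return min(candidates, key=lambda sp: sp[1])[0] if candidates else None

print(preferred_supplier(".xml", frozenset({"BRT", "SPK"})))  # -> AudioMath
```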

Figure 7: The training process for ML knowledge acquisition: (top) the mapping of data type to a supplier and (bottom) building a user's preferred QoS dimensions for a supplier.

Every supplier has a set of quality of service (QoS) dimensions that consume computing resources. Here, the only important QoS dimensions are those that are valuable to visually-impaired users. A function f3 creates a mapping between a supplier and the QoS dimensions that the user prefers (f3: Supplier → QoS dimension j, Priority), with 1 ≤ j ≤ qos_max (the maximum number of QoS dimensions). Priority is of type ℕ1. Since there are many possible values for each QoS dimension, the user arranges these values by their priority ranking. A sample f3 is given below:

f3 = {(Lambda, (40 characters per line, 1)), (Lambda, (60 characters per line, 2)), (Lambda, (80 characters per line, 3)), (Lambda, (French Math code, 1)), (Lambda, (Nemeth code, 2)), (AudioMath, (medium volume, 1)), (AudioMath, (high volume, 2)), (AudioMath, (low volume, 3)), etc.}

6 CONFIGURATION AND OPTIMIZATION OF APPLICATION SUPPLIER

Here, we discuss the method by which the system self-configures, taking the user's satisfaction into account.

6.1 Alternative Configuration Spaces

Given a user task, application suppliers are instantiated to realize the task. For each supplier, however, there are various QoS dimensions that can be invoked in its instantiation. Respecting the user's preferences is the default way to instantiate a supplier, but if this is not possible, the dynamic reconfiguration mechanism should look over the various configuration spaces and determine the one that is suitable and optimal for the user's needs.

A QoS dimension is an application parameter that consumes computing resources (battery, CPU, memory, bandwidth). As an application's QoS dimension improves, the application's quality of presentation (e.g. clarity of sound) also improves, but at the expense of larger resource consumption. Given a task and its various applications, the task's QoS dimension space is given by:

$\text{QoS Dimension space} = \bigotimes_{i=1}^{qos\_max} D_i$    (16)

In this work, the QoS dimensions that matter are those that are valuable to blind users. For suppliers that convert LaTeX or MathML to Braille, the QoS dimensions of importance are the number of characters per line and the number of Math codes supported. For suppliers that convert .tex and .xml documents into their audio equivalent, the QoS dimensions are the volume, the age of the speaker, the gender of the speaker, the number of words uttered by the speaker per unit of time, and the speaker's language.

6.2 Optimizing the Configuration of the User's Task

An optimal configuration is a set-up that tries to satisfy the user's preferences given the current interaction context and the available media devices. When the configuration is optimal, the user's satisfaction is said to be achieved. Let the user's satisfaction with an outcome lie within the Satisfaction space. It is in the interval [0, 1], in which 0 means the outcome is totally unacceptable while 1 corresponds to full user satisfaction. Whenever possible, the system tries to achieve a result that is closer to 1.

Given a supplier, the user's satisfaction improves if his preferences are enforced. The supplier preference in instantiating an application is given by:

$\text{Supplier preference}_s = h_s^{x_s} \cdot f_s \cdot c_s$    (17)

where s ∈ Supplier space is an application supplier and the term $c_s \in [0, 1]$ reflects how much the user cares about the supplier. Given a task with n suppliers arranged in order of the user's preference, then $c_{supplier_1} = 1$, $c_{supplier_2} = 1 - 1/n$, $c_{supplier_3} = 1 - 1/n - 1/n$, and so on. The last supplier therefore has a $c_s$ value close to zero, which means that the user would rather not have it if given a choice. In general, in a task, the $c_s$ assigned to supplier i, $1 \le i \le n$, is given by:

$c_{supplier_i} = 1 - \sum_{k=1}^{i-1} \frac{1}{n}$    (18)

The term $f_s: dom(s) \to [0, 1]$ is a function that denotes the expected features present in supplier s. The supplier features are those that are important to the user other than the QoS dimensions. For example, for a supplier that produces audio output, the user might prefer one that provides wave intonation, or one capable of informing the user of the nature of the next mathematical expression, etc. If the user listed n = 3 preferred features for an application and the selected supplier supports them all, then $f_s = 1$. If, however, one of these features is missing (either because the feature is not installed or the supplier does not have it), then the number of missing features is m = 1 and $f_s = 1 - m/(n + 1) = 1 - 1/4 = 0.75$. In general, the user satisfaction with respect to supplier features is given by:

$f_{supplier} = 1 - \frac{m}{n + 1}$    (19)

The term $h_s^{x_s}$ expresses the user's satisfaction with respect to a change of supplier, and is specified as follows: $h_s \in (0, 1]$ is the user's tolerance for a change of supplier. If this value is close to 1, then the user is fine with the change, while a value close to 0 means the user is not happy with the change. The optimized value of $h_s$ is:

$h_s = \arg\max \frac{c_s + c_{rep}}{2\,c_s}$    (20)

where $c_{rep}$ is the value obtained from Eq. (18) for the replacement supplier. The term $x_s$ indicates whether a change penalty should be considered: $x_s = 1$ if the supplier exchange is due to a dynamic change of the environment, while $x_s = 0$ if the exchange is instigated by the user.

The algorithm for finding the optimized supplier configuration of a task is given in Figure 8. In the algorithm, the default configuration is compared with other possible configurations until the one that yields the maximum value of user satisfaction is found and returned as the result of the algorithm. A sketch of these computations follows.
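A minimal sketch of the satisfaction terms and the exhaustive comparison, assuming illustrative supplier names and rankings; the functions below follow Eqs. (17)-(20) but are not the middleware's actual code.

```python
# Satisfaction terms of Eqs. (17)-(20) and a brute-force search over replacements.
def c_supplier(rank: int, n: int) -> float:
    """Eq. (18): preference weight of the supplier ranked `rank` (1-based) among n."""
    return 1.0 - (rank - 1) / n

def f_supplier(missing_features: int, n_features: int) -> float:
    """Eq. (19): satisfaction with respect to supplier features."""
    return 1.0 - missing_features / (n_features + 1)

def h_supplier(c_current: float, c_replacement: float) -> float:
    """Eq. (20): tolerance term for exchanging the current supplier for a replacement."""
    return (c_current + c_replacement) / (2.0 * c_current)

def preference(c: float, f: float, h: float, x: int) -> float:
    """Eq. (17): overall supplier preference; x = 1 only for system-driven changes."""
    return c * f * (h ** x)

# Choose the replacement with the highest preference (illustrative ranking, n = 3,
# current supplier assumed to be the second-ranked one, features all present).
n = 3
candidates = {"MAVIS": 1, "LaBraDoor": 2, "VICKIE": 3}   # supplier -> user rank
c_current = c_supplier(2, n)
best = max(candidates, key=lambda s: preference(
    c_current, 1.0, h_supplier(c_current, c_supplier(candidates[s], n)), 1))
print(best)   # -> MAVIS
```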


Figure 8: Algorithm for the optimized supplier and its QoS configuration.

7 INTERACTION CONTEXT, MODALITY AND MEDIA DEVICES

The interaction context is formed by combining the contexts of the user, his environment, and his computing system. The user context, in this work, is a function of the user profile (including any handicap) and preferences. A sample user profile, in generic format, is shown in Figure 9. The user's special needs determine the other affected modalities (the user is already disqualified from using visual input/output modalities). For example, being mute prevents the user from using the vocal input modality.

Figure 9: A sample user profile.

As a function of modality, the user context UC can be represented by a single parameter, that of the user's special needs. This parameter is a 4-tuple representing additional handicaps, namely manual disability, muteness, deafness, and unfamiliarity with Braille. Each handicap affects the user's suitability to adopt certain modalities. The failure of modality, adapted from Eq. (6), with respect to the UC parameters is:

$\text{Modality} = \text{Failure} \Leftrightarrow ((\text{Manually-Disabled}) \land (\text{Mute})) \lor (((\text{Manually-Disabled}) \lor (\text{Unfamiliar with Braille})) \land (\text{Deaf}))$    (21)

The environment context EC is an assessment of the user's workplace condition. To a blind user, a parameter such as the light's brightness has no significance, while others, such as the noise level, are significant. In this work, the environment context is based on the following parameters: (1) the workplace's noise level, which identifies whether it is quiet/acceptable or noisy, and (2) the environment restriction, which identifies whether the workplace imposes mandatory silence or not. Based on these parameters, the environment context is formally given by the relationship:

$EC = (\text{Noise Level}) \times (\text{Environment Restriction})$    (22)

Table 1 shows the affected modalities based on the environment's context. It also shows the convention table we have adopted for EC.

Table 1: The tabulation of affected modalities by combined noise level and environment restriction

The unit of noise is the decibel (dB). In our work, 50 dB or less is considered acceptable or quiet, while 51 dB or more is considered noisy. In our system design, this range can be modified through the user interface by the end user based on his perception. In general, when the user's workplace is noisy, the effectiveness of the vocal input modality is doubtful; hence an alternative modality is necessary. In an environment where silence is required, sound-producing media (e.g. the speaker) need to be muted or deactivated. For the environment noise restriction, we have defined a database of pre-defined places (e.g. library, park) and their associated noise restrictions (e.g. library: silence required, park: silence optional). The user can modify some database records, and new ones can be added through the user interface.
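A minimal sketch of how the environment context could be derived under these conventions; the 50 dB threshold is taken from the text, while the place-to-restriction table below is illustrative and, as stated above, user-editable in the actual system.

```python
# Sketch of deriving the environment context of Eq. (22).
NOISE_THRESHOLD_DB = 50

PLACE_RESTRICTIONS = {            # pre-defined places and their noise restrictions
    "library": "silence required",
    "park": "silence optional",
    "classroom": "silence required",
}

def environment_context(noise_db: float, place: str) -> tuple:
    noise_level = "quiet" if noise_db <= NOISE_THRESHOLD_DB else "noisy"
    restriction = PLACE_RESTRICTIONS.get(place, "silence optional")
    return (noise_level, restriction)

print(environment_context(62, "park"))   # -> ('noisy', 'silence optional')
```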

In our work, the system context (SC) signifies the user's computing device and the available media devices. See Figure 10. The computing device (e.g. PC, laptop, PDA, cellular phone) also affects the modality selection. For example, using a PDA or a cell phone prevents the user from using the tactile input or output modality.

Let SC, for the purpose of modality selection, be represented by a single parameter, the user's computing device. Let Computing Device = {PC, MAC, Laptop, PDA, Cellular phone}. The computing device convention is shown in Table 2. For example, SC = 1 means that the user's computer is either a PC, a laptop or a MAC. Note that when SC = 2 (i.e. a PDA), Tin = Failed because the computing device has no tactile input device, and its Tout = Failed because, in a regular set-up, it is not possible to attach a tactile device (e.g. a Braille terminal) to it.

Figure 10: The system context as a function of the user's computing device and available media devices.

Table 2: The convention table of the user's computing device and its effect on modality selection

Using ML, the following rules on modality failures are derived, given the parameters of the interaction context:

$V_{in} = \text{Failure} \Leftrightarrow (\text{User} = \text{Mute}) \lor (\text{Noise Level} = \text{Noisy}) \lor (\text{Environment Restriction} = \text{Silence required})$    (23)

$V_{out} = \text{Failure} \Leftrightarrow (\text{User} = \text{Deaf})$    (24)

$T_{in} = \text{Failure} \Leftrightarrow (\text{User} = \text{Manually-Disabled}) \lor (\text{Computing Device} = \text{PDA})$    (25)

$T_{out} = \text{Failure} \Leftrightarrow (\text{User} = \text{Manually-Disabled}) \lor (\text{User} = \text{Unfamiliar with Braille}) \lor (\text{Computing Device} = \text{PDA}) \lor (\text{Computing Device} = \text{Cellphone})$    (26)
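The rules (23)-(26) translate directly into predicates; the sketch below reuses the illustrative field names of the interaction-context sketch in Section 5.2.

```python
# Direct transcription of the derived rules (23)-(26); field names are illustrative.
def vin_failed(uc, ec) -> bool:    # Eq. (23)
    return uc.mute or ec.noise_level == "noisy" or ec.restriction == "silence required"

def vout_failed(uc) -> bool:       # Eq. (24)
    return uc.deaf

def tin_failed(uc, sc) -> bool:    # Eq. (25)
    return uc.manually_disabled or sc.computing_device == "PDA"

def tout_failed(uc, sc) -> bool:   # Eq. (26)
    return (uc.manually_disabled or not uc.familiar_with_braille
            or sc.computing_device in ("PDA", "Cellphone"))
```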

8 DESIGN SPECIFICATION AND SCENARIO SIMULATIONS

Having formulated the various pieces of ML knowledge needed to optimize the configuration setting of the user's task, we now put this knowledge to the test via sample scenarios. The design specification is presented as these scenarios are explained.

8.1 Selection of Modalities

Consider, for example, an interaction context composed of the following parameters: IC = (A1, A2, A3, A4, A5, A6, A7), wherein:
A1 = {true | false} = whether the user is manually disabled,
A2 = {true | false} = whether the user is mute,
A3 = {true | false} = whether the user is deaf,
A4 = {true | false} = the user's familiarity with Braille,
A5 = {quiet | noisy} = the environment's noise level,
A6 = {silence required | silence optional} = the environment's noise level restriction, and
A7 = {PC or Laptop or MAC | PDA | Cellphone} = the user's computing device.

The set of possible modalities (refer to Eq. (5)) is given by M = {M1, M2, M3, M4, M5, M6, M7, M8, M9}, wherein M1 = {Tin, Tout}; M2 = {Tin, Vout}; M3 = {Vin, Vout}; M4 = {Vin, Tout}; M5 = {Vin, Tout, Vout}; M6 = {Tin, Tout, Vout}; M7 = {Tin, Vin, Vout}; M8 = {Tin, Vin, Tout}; M9 = {Tin, Vin, Tout, Vout} (see its derivation in Section 5.3). In this example, let us assume the following interaction context: (i) user context: blind with no further handicaps, familiar with Braille; hence A1 = False, A2 = False, A3 = False, A4 = True; (ii) environment context: the user is in a classroom, so A5 = noisy, A6 = silence required; (iii) system context: the user works on a laptop, so A7 = Laptop. The system now finds the modality that suits the given interaction context, using the principles discussed in Section 5. Let us assume that the computing system's SR contains the recorded scenarios shown in Figure 11. The figure was generated using WEKA (Waikato Environment for Knowledge Analysis) [30], a collection of machine learning algorithms for data mining tasks. It is used for testing a machine learning algorithm as it contains tools for data pre-processing, classification, regression, clustering, association rules, and visualization. In this work, a sample SR is shown using the arff viewer (a WEKA tool).

As shown in the diagram, there are already 19 scenarios representing the system's acquired knowledge. The 20th scenario represents a new case. Using Equation 14, and with reference to the given interaction context and the SR, the suitability score of Mj (where j = 1 to 9) can be calculated. Consider, for instance, the calculations involved with modality M1:

Suitability_Score(M1) = P(A1 = False | M1) × P(A2 = False | M1) × … × P(A7 = Laptop | M1) × P(M1) = 1 × 0.67 × 0 × … × 3/19 = 0

wherein P(A1 = False | M1) = 3/3, P(A2 = False | M1) = 2/3, P(A3 = False | M1) = 0/3, P(A4 = True | M1) = 3/3, and P(M1) = 3/19; the remaining conditional probabilities are obtained from the SR in the same way.

Similarly, we calculate the suitability scores of all the remaining modalities. Using the same procedure, the calculation for the modality that yields the highest suitability score, M6, is shown below:

Suitability_Score(M6) = P(A1 = False | M6) × P(A2 = False | M6) × … × P(A7 = Laptop | M6) × P(M6) = 1 × 1 × … × 3/19 = 0.015671

A sketch of this scoring procedure is given below.
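A minimal sketch of the scoring procedure, assuming a tiny illustrative scenario repository rather than the 19-entry SR of Figure 11:

```python
# Naive-Bayes style suitability score: the product of P(attribute | modality) terms
# times the prior P(Mj), all estimated from the scenario repository.
def suitability_score(query: dict, modality: str, scenarios: list) -> float:
    """scenarios: list of (attributes_dict, chosen_modality) pairs."""
    with_m = [attrs for attrs, m in scenarios if m == modality]
    if not with_m:
        return 0.0
    score = len(with_m) / len(scenarios)              # prior P(Mj)
    for attr, value in query.items():                 # product of P(Ai = value | Mj)
        matches = sum(1 for attrs in with_m if attrs.get(attr) == value)
        score *= matches / len(with_m)
    return score

scenarios = [({"A5": "noisy", "A7": "Laptop"}, "M6"),
             ({"A5": "quiet", "A7": "PDA"}, "M3"),
             ({"A5": "noisy", "A7": "Laptop"}, "M6")]
print(suitability_score({"A5": "noisy", "A7": "Laptop"}, "M6", scenarios))  # prior 2/3 x 1 x 1
```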


Figure 11: A sample scenario repository (SR).

As explained in Section 5.3, M6 = {Tin, Tout, Vout} respects the conditions imposed in Eq. (5); hence, it is chosen as the optimal modality for the given IC. This new scenario is added to the SR as newly-acquired knowledge (i.e. scenario #20 in the SR).

8.2 Selection of Media Devices

As stated in Section 5.4, function f1 associates the selected modalities with the available media devices. For simulation purposes, let us assume that the currently available media devices are MIC, Speech Recognition, SPK, HST, Text-to-Speech, KB, and BRT. Given this scenario, Vin is not part of the optimal modalities, so the media devices MIC and Speech Recognition, which support Vin, are automatically excluded from the set of supporting media devices:

f1 = {(Vin, (MIC, 1)), (Vin, (Speech Recognition, 1)), (Vout, (SPK, 1)), (Vout, (HST, 2)), (Vout, (Text-to-Speech, 1)), (Tin, (KB, 1)), (Tin, (BRT, 2)), (Tout, (BRT, 1))}

Since the selection of media devices is based on the user's preferences, the system automatically activates, for each modality, the media device having priority 1.

8.3 Selection of Application Supplier

Consider, for example, a user on the go (i.e. in a park) working on a LaTeX document (work1.tex) and a MathML document (work2.xml) as part of his homework. To do so, our user needs these files instantiated using appropriate suppliers. Finding the relevant suppliers can be done using function f2, which yields the following values:

f2 = {((.tex, {KB, BRT}), (MAVIS, 1)), ((.tex, {KB, BRT}), (LaBraDoor, 2)), ((.tex, {KB, BRT}), (VICKIE, 3)), ((.tex, {KB, SPK|HST}), (ASTER, 1)), ((.xml, {KB, BRT}), (Lambda, 1)), ((.xml, {KB, BRT}), (BraMaNet, 2)), ((.xml, {KB, SPK|HST}), (AudioMath, 1)), ((.xml, {KB, SPK|HST}), (VoiceXML, 2))}

Formally, ∀ x: data format, d: media devices, ∃ y: Supplier and p: Priority of type ℕ1 | (x, d) → (y, p) ∈ f2.

Given that supplier priority is involved in f2, the most-preferred supplier is sought. With reference to Eq. (18), the numerical values associated with the user's preferred suppliers are as follows: (i) priority = 1 (high), user satisfaction = 1; (ii) priority = 2 (medium), user satisfaction = 2/3; and (iii) priority = 3 (low), user satisfaction = 1/3. Now, consider further that the user's preferred supplier, MAVIS, is absent, as it is not installed on the user's laptop. The method by which the system finds the optimal supplier configuration is shown below:

Case 1: (Lambda, MAVIS) is not possible
Case 2: (Lambda, LaBraDoor) is alternative 1
Case 3: (Lambda, VICKIE) is alternative 2

The replacement selection is then based on the user satisfaction score:

User Satisfaction (Case 2) = (1 + 1 + 2/3)/3 = 8/9 = 0.89
User Satisfaction (Case 3) = (1 + 1 + 1/3)/3 = 7/9 = 0.78

Hence, Case 2 is the preferred alternative. Formally, if f2: Document Format, {Media devices} → (Supplier, Priority), where Priority: ℕ1, then the chosen supplier is given by: ∀ (doc, m): (Document, Media devices), ∃ y: Supplier, p1: Priority, p2: Priority | (doc, m) → y ∧ (y, p1) ∈ f2 ∧ (p1 < p2).

8.4 Optimizing the User's Task Configuration

Consider a scenario where all suppliers that convert a LaTeX document to its Braille equivalent are available. For example, given the suppliers MAVIS, LaBraDoor and VICKIE, the corresponding user satisfaction with respect to these suppliers is as follows:

cMAVIS = 1.0, cLaBraDoor = 2/3, cVICKIE = 1/3.

This indicates that the user is most happy with the top-ranked supplier (MAVIS) and least happy with the bottom-ranked supplier (VICKIE). Consider further that these suppliers have n = 3 preferred features (i.e. the Math Braille code, the capability of informing the user of the nature of the next mathematical expression, and navigation within the expression). If, in the MAVIS set-up, the Nemeth Math code is not installed, then the number of missing features is m = 1 and the user satisfaction becomes fs = 1 - m/(n + 1) = 1 - 1/4 = 0.75. This also reduces the user's satisfaction, as given by the relationship cMAVIS × fMAVIS = (1.0)(0.75) = 0.75.

Now, consider a case of dynamic reconfiguration wherein the default supplier is to be replaced by another. Not yet taking fs into account (assumption: fs = 1), if the current supplier is BraMaNet, then the user's satisfaction is cBraMaNet = 2/3 = 0.67. What happens if it is replaced by another supplier through dynamic reconfiguration (xsupplier = 1)? Using the relationship hsupplier = (csupplier + creplacement) / (2 × csupplier), the results of the possible alternative configurations are as follows:


Replacing BraMaNet (supplier 2):
Case 1: Replacement by MAVIS (supplier 1): (0.67)(1) × [(0.67 + 1)/(2 × 0.67)]^1 = 0.835
Case 2: Replacement by itself (supplier 2): (0.67)(1) × [(0.67 + 0.67)/(2 × 0.67)]^1 = 0.67
Case 3: Replacement by VICKIE (supplier 3): (0.67)(1) × [(0.67 + 0.33)/(2 × 0.67)]^1 = 0.50

Hence, if the reconfiguration aims at satisfying the user, then the second-ranked supplier should be replaced by the top-ranked supplier. These numbers can be checked with the short sketch below.
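A short check of these figures, using the hsupplier relationship quoted above (values as in the text):

```python
# Verification of the Section 8.4 cases: c_current * (c_current + c_replacement) / (2 * c_current).
def reconfigured_satisfaction(c_current: float, c_replacement: float) -> float:
    return c_current * (c_current + c_replacement) / (2 * c_current)

print(round(reconfigured_satisfaction(0.67, 1.00), 3))  # Case 1, MAVIS:  about 0.835
print(round(reconfigured_satisfaction(0.67, 0.67), 3))  # Case 2, itself: about 0.67
print(round(reconfigured_satisfaction(0.67, 0.33), 3))  # Case 3, VICKIE: about 0.50
```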

8.5 Specification for Detecting the Suitability of a Modality

A Petri Net 1 is a formal, graphical, executable technique for the specification and analysis of concurrent, discrete-event dynamic systems. Petri nets are used in deterministic and probabilistic variants; they are a good means of modeling concurrent or collaborating systems. In the specifications in this paper, only a snapshot of one of the many outcomes is presented, due to space constraints. We use HPSim 2 to simulate the Petri Net. In Figure 12, a Petri Net specification is shown with the modalities and the interaction context. As shown, the combination of the interaction context's parameters yields the implementation of some modalities (M1 up to M9). The Net illustrates a snapshot simulation of the case cited in Section 7. The simulation begins with a token in the Modality place and the Interaction Context place. The firing of the token in Interaction Context yields specific values for the User Context, Environment Context and System Context (based on Computing Device) places, exactly matching the values of A1, …, A7 in the previous section. The traversal of the tokens through the different places is indicated by the green-colored places. As shown, the result yields modality M6 being selected as the optimal modality. The Petri Net simulation confirms the result obtained in the previous section. The same case also yields a Vin failure (due to the noisy environment).

8.6 Simulation Results

Using the user's preferences, we have simulated the variations in user satisfaction as these preferences are modified through dynamic configuration. The results are presented in the graphs of Figure 13. The first three graphs deal with the application supplier and the variation of user satisfaction as additional parameters (supplier features and alternative replacements) are taken into account. The last two graphs deal with the QoS dimensions and their variations. In general, the user is satisfied if the supplier and its desired features and QoS dimensions are provided. Whenever possible, in a dynamic configuration, the preferred setting is the one whose parameters are the user's top preferences.

1 http://www.winpesim.de/petrinet
2 HPSim, http://www.winpesim.de

Figure 12: A snapshot of the simulated selection of the optimal modality based on the interaction context.

Figure 13: Graphs showing the variation of user satisfaction with respect to the preferred supplier and QoS dimensions and their replacements (user satisfaction plotted against suppliers and QoS settings ordered by priority; the supplier graphs distinguish the cases of no missing feature, one missing feature and two missing features).

    9 CONCLUSION

Our investigation of the state-of-the-art systems and solutions for providing visually-impaired users with access to mathematical expressions indicates that none of these systems is pervasive and that not a single one adapts its configuration based on the given interaction context. To address these weaknesses, we have designed a multimodal multimedia computing system that provides mathematical computing to blind users whenever and wherever they wish. This paper presented the infrastructure design of a middleware that realizes the successful migration and execution of the user's task in a pervasive multimodal multimedia computing system, the task being ubiquitous access to mathematical expressions for visually-impaired users. Through ML training, we illustrated the acquisition of positive examples to form the user's preferred suppliers and QoS dimensions for selected applications. In a rich computing environment, alternative configuration spaces are possible, which give the user some choice in configuring the set-up of his application. We have illustrated that configuration can be dynamic or user-invoked, and we have shown the consequences of these possible configurations with respect to the user's satisfaction. Optimization is achieved if the system is able to configure the set-up based on the user's preferences.

In this work, we have listed the modalities suitable for blind users. Given sets of suppliers, modalities and computing devices, and the possible variations of interaction context, we stated the conditions under which a modality will succeed or fail. Similarly, we showed a scenario wherein, even if a specific modality is already deemed possible, it is still conceivable that it would fail if there are not sufficient media devices to support it or if the environment restriction forbids the use of the needed media devices. We validated all these affirmations through scenario simulations and formal specifications.

    10 REFERENCES

1. Satyanarayanan, M.: Pervasive Computing: Vision and Challenges. IEEE Personal Communications, August 2001, pp. 10-17.
2. MathML, http://www.w3.org/Math
3. Lamport, L.: LaTeX: The Macro Package, http://web.mit.edu/texsrc/source/info/latex2e.pdf, 1994.
4. Stöger, B., et al.: Mathematical Working Environment for the Blind: Motivation and Basic Ideas, in ICCHP 2004, Springer, pp. 656-663.
5. Edwards, A.D.N., et al.: A Multimodal Interface for Blind Mathematics Students, in INSERM 1994, Paris, France.
6. Cahill, H., et al.: Ensuring Usability in MATHS, in The European Context for Assistive Technology, IOS Press, Amsterdam, 1995, pp. 66-69.
7. Ferreira, H., et al.: Enhancing the Accessibility of Mathematics for Blind People: The AudioMath Project, in ICCHP, 9th International Conference on Computers Helping People with Special Needs, 2004, Paris, France.
8. Ferreira, H., et al.: AudioMath: Towards Automatic Readings of Mathematical Expressions, in Human-Computer Interaction International (HCII) 2005, Las Vegas, USA.
9. Moço, V., et al.: VICKIE: A Transcription Tool for Mathematical Braille, in AAATE, Association for the Advancement of Assistive Technology in Europe, 2003, Dublin, Ireland.
10. Schwebel, F., et al.: BraMaNet: Quelques règles simples à connaître pour qu'un aveugle puisse lire vos documents mathématiques et vos pages web, in Journées nationales, Caen, France, 2005.
11. Batusic, M., et al.: A Contribution to Making Mathematics Accessible for the Blind, in International Conference on Computers Helping People with Special Needs, 1998, Oldenbourg, Wien, München.
12. Karshmer, A.I., et al.: Reading and Writing Mathematics: the MAVIS Project, in Proceedings of the Third International ACM Conference on Assistive Technologies, ACM Press, 1998, pp. 136-143.
13. Nemeth, A.: The Nemeth Braille Code for Mathematics and Science Notation. American Printing House for the Blind, 1972.
14. Epheser, H., et al.: Internationale Mathematikschrift für Blinde, Deutsche Blindenstudienanstalt, Marburg (Lahn), 1992.
15. Commission Evolution du Braille Français: Notation Mathématique Braille, mise à jour de la notation mathématique de 1971, 2001.
16. Edwards, A.D.N., et al.: Lambda: A Multimodal Approach to Making Mathematics Accessible to Blind Students, in Proceedings of the 8th International ACM SIGACCESS Conference on Computers and Accessibility, 2006, Portland, Oregon, USA.
17. Raman, T.V.: Audio System for Technical Readings, Cornell University, 1994.
18. Awde, A., et al.: Information Access in a Multimodal Multimedia Computing System for Mobile Visually-Impaired Users, in ISIE 2006, IEEE International Symposium on Industrial Electronics, Montreal, Canada.
19. Weiss, G.: Multiagent Systems. MIT Press, 1999.
20. Ferber, J.: Les systèmes multi-agents. InterEditions, Paris, 1995.
21. Jennings, N.R., et al.: Applications of Intelligent Agents, in Agent Technology: Foundations, Applications, and Markets, Springer-Verlag, Heidelberg, Germany, 1998, pp. 3-28.
22. Awde, A., et al.: A Paradigm of a Pervasive Multimodal Multimedia Computing System for the Visually-Impaired Users, in The First International Conference on Grid and Pervasive Computing, 2006, Tunghai University, Taichung, Taiwan.
23. Sousa, J.P., et al.: Task-Based Adaptation for Ubiquitous Computing. IEEE Transactions on Systems, Man and Cybernetics, Part C: Applications and Reviews, 36(3), 2006, pp. 328-340.
24. Jacquet, C., et al.: A Context-Aware Locomotion Assistance Device for the Blind, in ICCHP 2004, 9th International Conference on Computers Helping People with Special Needs, Université Pierre et Marie Curie, Paris, France.
25. Ross, D.A., et al.: Talking Braille: A Wireless Ubiquitous Computing Network for Orientation and Wayfinding, in Proceedings of the 7th International ACM SIGACCESS Conference on Computers and Accessibility, 2005, Baltimore, MD, USA, ACM Press.
26. Mitchell, T.: Machine Learning. McGraw-Hill, 1997.
28. Utgoff, P.: ID5: An Incremental ID3, in 5th International Workshop on Machine Learning, 1988, pp. 107-120.
29. Kallenberg, O.: Foundations of Modern Probability, Springer Series in Statistics, Springer, 2002.
30. Witten, I.H. and Frank, E.: Data Mining: Practical Machine Learning Tools and Techniques, 2nd ed., Morgan Kaufmann, San Francisco, 2005.