
A Tool-supported Development Process for Bringing Touch Interactions into Interactive Cockpits for Controlling Embedded Critical Systems

Arnaud Hamon 1,2, Philippe Palanque 1, Yannick Deleris 2, David Navarre 1, Eric Barboni 1

1 ICS-IRIT, University Toulouse 3, 118 route de Narbonne, 31062 Toulouse Cedex 9, France
{lastname}@irit.fr

2 AIRBUS Operations, 316 route de Bayonne, 31060 Toulouse Cedex 9, France
{Firstname.Lastname}@airbus.com

ABSTRACT

Since the early days of aviation, aircraft cockpits have been incorporating more and more automation, providing invaluable support for improving navigation accuracy, handling unusual situations, performing pre-planned sets of tasks and gathering the ever-increasing volume of information produced by more and more sophisticated aircraft systems. Despite that long-lasting, unavoidable tide of automation, the concept of human in the loop remains of prime importance, playing a complementary role with respect to automation. However, the number of commands to be triggered and the quantity of information (to be perceived and aggregated by the crew) produced by the many aircraft systems require integration of displays and efficient interaction techniques. Interactive cockpits as promoted by the ARINC 661 specification [2] can be considered a first significant step in that direction. However, a lot of work remains to be done in order to reach the interaction efficiency currently available in many mass-market systems. As interactive cockpits belong to the class of safety-critical systems, development processes and methods used in the mass-market industry are not suitable, as they usually focus on usability and user-experience factors, upstaging dependability. This paper presents a tool-supported model-based approach suitable for the development of new interaction techniques, dealing on an equal basis with usability and dependability. We demonstrate the possibility of describing touch interaction techniques (as found on the Apple iPad for instance) and show their integration in a Navigation Display application. Finally, the paper describes how such an approach integrates in the Airbus interactive cockpit development process.

Keywords

Tactile interactions, development process, model-based approaches, interactive cockpits

INTRODUCTION

With the advent of new technologies in everyday life, users are frequently confronted with new ways of interacting with computing systems. Beyond the well-known fun effect contributing to the user-experience grail [16], these new interaction technologies aim at increasing the bandwidth between the users and the interactive system. Such an increase of bandwidth can be obtained by improving the communication from the computer to the operator, for instance by offering sophisticated and integrated visualization techniques making it possible to aggregate large sets of data in meaningful presentations, as proposed in [35]. This paper focuses on the other communication channel, i.e. from the operator towards the computer, by exploiting new interaction techniques in order to improve performance. More precisely, the target interaction techniques belong to the recent trend of touch interactions [37]. Beyond the bandwagon effect of touch interfaces, the paper presents why such interfaces are relevant for command and control interactions in cockpits.

However, recent research contributions in the area of Human-Computer Interaction demonstrate that touch interactions decrease usability due both to the device itself (the hand of the user is always in the way between the device and the eyes) and to the lack of consistency between different environments [25]. These usability issues (derived from basic principles in HCI) have been confirmed by empirical studies of touch interactions on the Apple iPad [24], leading to the strong statement from Don Norman “Gestural Interfaces: A Step Backwards In Usability” [25].

Beyond these usability issues, dependability ones also have to be considered. Indeed, classical WIMP [36] interfaces have been standardized for more than 20 years [13], and many development platforms, specification techniques [3] and commercial off-the-shelf (COTS) components are widely available. These components have thus been thoroughly tested and validated over the years. This is not the case for touch-based interfaces, for which no standards are available (beyond the ever-evolving and of course conflicting ones provided by major players such as Microsoft [22] or Apple [1]), no dedicated programming environments and no long-term experience to build upon. This ends up with even less dependable interfaces where faults are distributed over the hardware, the operating system, the interaction drivers and finally the application itself.


As we are envisioning the introduction of such interfaces in the domain of cockpits for large civil aircraft, these two aspects of usability and dependability have to be addressed in a systematic way.

This paper proposes a contribution for engineering such interaction techniques by proposing a model-based approach (presented in section 3) supporting both reliability and usability, making such interactions amenable to the constraints imposed by certification authorities. We detail how this approach can be used for the complete and unambiguous description of touch-based interaction techniques and how such interactions can be tuned to support better usability (section 4). This approach combines different techniques including formal analysis of models, simulation and, in particular, analysis of log data in a model-based environment. The applicability of the approach is demonstrated on a real-size case study (section 5) providing interactions on the Navigation Display (ND). Finally, the paper presents how the approach can be integrated in the Airbus development process, which involves suppliers’ activities, before concluding the paper and highlighting future work (section 6).

THE CONTEXT OF INTERACTIVE COCKPITS AND WHY TOUCH INTERACTIONS CAN HELP

Since the early days of aviation, each piece of equipment in the cockpit offered pilots an integrated management of one aircraft system. Indeed, each aircraft system (if needed) provided an information display (such as a lamp or a dial) and an information input mechanism (typically physical buttons or switches). Such integrated cockpit equipment offered several advantages, such as high coherence (the equipment offers in the same location all the command and display mechanisms related to a given aircraft system) and loose coupling (the failure of one piece of equipment had no effect on the other ones). However, in order to perform the tasks required to fly the aircraft, the crew usually needs to integrate information from several displays and to trigger commands in several aircraft systems. With the increase of equipment in the cockpit and the stronger constraints in terms of safety imposed by regulatory authorities, such information integration has become more and more demanding for the crew.

In the late 70’s the aviation industry developed a new kind of display system integrating multiple displays and known as the “glass cockpit”. Using integrated displays it was possible to gather within a single screen a lot of information previously distributed in multiple locations. This generation of glass cockpit uses several displays based on CRT technology. Such CRT screens receive information from aircraft system applications, then process and display this information to crew members. In order to send controls to the aircraft systems, the crew members have to use physical buttons usually made available next to the displays. Controls and displays are processed independently (different hardware and software) and without integration. In summary, integration of displays was a first step, but commands remained mainly un-centralized, i.e. distributed throughout the cockpit.

This is something that has changed radically with interactive cockpit applications, which have started to replace the glass cockpit. The main reason is that they make it possible to integrate information from several aircraft systems in a single user interface in the cockpit. This integration is nowadays required in order to allow the flight crew to handle the ever-increasing number of instruments, which manage more and more complex information. Such integration takes place through interactive applications featuring graphical input and output devices and interaction techniques as in any other interactive context (web applications, games, home entertainment, mobile devices …).

Figure 1. The interactive cockpit of the Airbus A350

However, even though ARINC 661-compliant interfaces have brought to aircraft cockpits the WIMP interaction style that has prevailed on PCs for the last 20 years, as mentioned above, post-WIMP interactions can provide additional benefits to aeronautics as they did in the area of mobile interfaces or gaming.

The next section presents a study that has been carried out with Airbus Operations in order to assess the potential benefits of touch interactions in the cockpit context and, in case of positive results, to define processes, notations and tools to go from the requirements to certification.

TOUCH INTERFACES IN A COCKPIT CONTEXT

Despite previous research showing that multi-touch interfaces do not always provide a high level of usability [25], the aeronautic context is so specific (to mention only the thorough training of the crew and the severe selection process) that this type of modality may still be considered.

Several taxonomies of input modalities have been presented in the past, such as the ones of Buxton [6], Foley [9], Mackinlay [19] and Lipscomb [18]. Among these existing input modalities, only tactile HMI (on touch screens or tablets) fits the criteria required for integration in a cockpit. Indeed, voice recognition rates (~80-95%) are far too low, even compared to the maximum error rate of 10⁻⁵ required for non-critical systems in cockpits. 3D gesture technologies such as Microsoft Kinect are not mature enough to be implemented either. Parker et al. [31] demonstrated that the mouse is more adapted to touch screens than fingers, which introduce unwanted cluttering and ergonomic issues among others. On the other hand, [12] developed a detailed comparison of touch versus pen and suggested combining the two modalities. Pen devices, such as ImpAct [38] or MTPen [33], allow a large variety of innovative interactions: a precise selection technique [5] and page switching [33] for example. However, such devices must be physically linked to the cockpit for safety reasons. On one hand, such a link shall be long enough to allow the pilot to interact freely with the display; on the other hand, such a length would allow the link to wind around critical cockpit hardware such as side sticks and throttles, which could cause catastrophic events. Therefore, the use of styli is not appropriate inside cockpits. Finally, [6] does not take into account tangible interfaces such as [32]; due to similar problems these modalities are not envisioned in the frame of new cockpit HMI.

Similarly to [10], who created an adapted set of gestures for diagram editing, we intend to design a set of multi-touch interactions for interactive cockpits. This set of interactions intends to balance ([17]) the lack of precision of tactile clicking with the use of gestures such as [5] for example. However, compared to [10], the target software environment is regulated by strict certification requirements detailed later on in the paper. Finally, translating physical controls into software components will increase their modifiability and the evolution capabilities of the entire cockpit system.

As current interactive cockpit HMI are based on ARINC 661, whose design is adapted to WIMP interaction paradigms (Windows, Icons, Menus, Pointing device), these HMI rely on such WIMP interaction paradigms. The envisioned change of modality is part of a whole HMI redesign: indeed, changing only the interaction modality without adapting the HMI content and structure would not be satisfying in terms of usability and user experience. Regarding system functionalities, multi-touch in the cockpit allows navigating in 2D or 3D content and implementing sets of gestures similar to [15], or multi-touch curve editing as in [34] for example. A recent survey of tactile HMI is available in [7]; however, the tactile interactions considered in this paper do not include gestures such as those of [20].

MODELING TOOL AND IMPLEMENTATION SUITE

The ICO formalism is a formal description technique dedicated to the specification of interactive systems [23]. It uses concepts borrowed from the object-oriented approach (dynamic instantiation, classification, encapsulation, inheritance, client/server relationship) to describe the structural or static aspects of systems, and uses high-level Petri nets to describe their dynamic or behavioral aspects. ICOs are dedicated to the modeling and the implementation of event-driven interfaces, using several communicating objects to model the system, where both the behavior of objects and the communication protocol between objects are described by the Petri net dialect called Cooperative Objects (CO). In the ICO formalism, an object is an entity featuring four components: a cooperative object which describes the behavior of the object, a presentation part (i.e. the graphical interface), and two functions (the activation function and the rendering function) which make the link between the cooperative object and the presentation part.
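To make this four-part structure concrete, the sketch below is our own illustration (in Python, not the PetShop API; all class and attribute names are hypothetical) of how a cooperative object, a presentation part and the two linking functions could be wired together:

# Illustrative sketch only: a hypothetical rendering of the four ICO components
# (cooperative object, presentation part, activation function, rendering
# function). Names are ours and do not correspond to the PetShop API.
from dataclasses import dataclass, field
from typing import Callable, Dict


@dataclass
class CooperativeObject:
    """Behavioral part: a Petri-net-like marking plus event-triggered transitions."""
    marking: Dict[str, int] = field(default_factory=lambda: {"Idle": 1})
    transitions: Dict[str, Callable[[Dict[str, int]], None]] = field(default_factory=dict)

    def receive(self, event: str) -> None:
        fire = self.transitions.get(event)
        if fire is not None:
            fire(self.marking)  # fire the transition bound to this event


@dataclass
class ICOObject:
    behaviour: CooperativeObject                    # 1. cooperative object (behavior)
    presentation: object                            # 2. presentation part (widgets)
    activation: Dict[str, str]                      # 3. activation: user action -> event
    rendering: Dict[str, Callable[[object], None]]  # 4. rendering: marked place -> feedback

    def user_action(self, action: str) -> None:
        """Activation function: route a user action to the behavioral model."""
        event = self.activation.get(action)
        if event is not None:
            self.behaviour.receive(event)
            self.render()

    def render(self) -> None:
        """Rendering function: reflect token moves onto the presentation part."""
        for place, update in self.rendering.items():
            if self.behaviour.marking.get(place, 0) > 0:
                update(self.presentation)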

An ICO specification fully describes the potential interactions that users may have with the application. The specification encompasses both the "input" aspects of the interaction (i.e. how user actions impact the inner state of the application, and which actions are enabled at any given time) and its "output" aspects (i.e. when and how the application displays information relevant to the user).

This formal specification technique has already been applied in the field of Air Traffic Control interactive applications [23], space command and control ground systems [28], and interactive military [4] or civil cockpits [3].

The ICO notation is fully supported by a CASE tool called PetShop [30]. All the models presented in the next section have been edited and simulated using PetShop.

DESIGN, SPECIFICATION AND PROTOTYPING OF TOUCH INTERACTION TECHNIQUES

The left-hand side of Figure 2 presents a development process dedicated to the design of interfaces involving dedicated interaction techniques. This process is a refinement of the one presented in [26], exhibiting the issue of deploying applications in a critical context.

Due to space constraints, we do not present the entire process (which can be found in [26]) but focus on the interaction technique design phase (box on the top right-hand side of Figure 2). The goal of this phase is to specify and design gesture models from the description formulated earlier during the development cycle. This phase is the result of iterations over three steps: interaction definition, interaction modeling and analysis of non-nominal cases, detailed on the right-hand side of Figure 2. The following paragraphs explain these parts of the process more precisely.

The Process Applied for the Design and Evaluation of Interaction Techniques

Interaction Definition

This design phase of the process consists in analyzing high-level requirements to define the set of gestures and their properties (number of fingers, temporal constraints …). As a lot of work has been devoted in the past to touch-based and gesture interactions, the literature provides information useful for determining usable gestures. However, as explained above, the context of aircraft cockpits is very different from classical environments and requires adapting/modifying them to fit that specific context.

Here is an example of an informal description of a touch-based interaction technique (as could be seen in any manual describing such interaction). Single-finger tap-and-hold is defined as follows: “The pilot touches the screen without moving. A graphical feedback displays the remaining time to press before the tap-and-hold is detected. Once the tap-and-hold is detected, the corresponding event is triggered when the pilot lifts his/her finger off the screen.”

For such high-level and informal descriptions, we were not able to use specific tools: frameworks for high-level gesture formalization, such as [14], fail to describe, for example, the quantitative temporal aspects of interactions as well as how to ensure their robustness and, for instance, how to assess that all the possible gestures have been accounted for.

Figure 2 - Development process for interaction specification and implementation

Indeed, a constraint such as “without moving” in the description above is very strong and not manageable by the user in case of turbulence or other unstable situations. To improve usability, such constraints have to be relaxed, usually by adding thresholds (time, distance …). This is the case for standard mouse interactions (such as double-click) on a PC. The particularity of this phase is also to involve system designers early on, to provide interaction designers with operational and aircraft-system specificities that might require specific gestures or the adaptation of existing ones.

Technical aspects such as the number of fingers involved for input and touch properties (pressure, fingers’ angle …) are also defined at that stage. This allows gesture designers to define the most adapted gestures, taking the entire cockpit environment into account as well as operational aspects. For instance, gestures might require adaptation according to the various flight phases. This joint work also increases the efficiency of the overall process by setting development actors around the same table, enhancing their comprehension of the overall process of gesture definition and helping forecast the evolution of interaction techniques in the cockpit.

However, remaining at this high level of abstraction does not provide enough details to the developer, which in the end leaves a lot of choices outside of the design process. The developer will have to make decisions without prior discussion with the designers, leading to inconsistent and possibly ill-adapted interactions.

In order to solve this problem, the interaction modeling phase aims at formalizing these features in the early phases of the design, as well as at ensuring the continuity of the process and the requirement traceability needed for certification.

Interaction modeling

This second step consists in refining the high-level gesture definition into a behavioral model. We present here two of the various iterations that have to be performed in order to reach a complete description of the interaction technique. Beyond the standard interactions, i.e. those in which the operator makes no mistake, the final description also has to cover unintended interaction errors that can be triggered by the user (slips) or due to unfriendly environmental issues (vibrations, low visibility, …) which are common in aeronautics.

Figure 3 represents the model describing the behavior of the tap-and-hold interaction technique as previously defined. This model, built using the ICO formal description technique, does not cover every possible manipulation done by the user and can be considered a nominal tap-and-hold interaction.

Figure 3 - Initial tap-and-hold model in ICO

From the initial state (one token in place Idle) only the transition touchEvent_down is available. This means that the user can only put the finger on the screen, triggering that event from the device. When that event occurs, the transition is fired and the token is removed from place Idle and set in place Long_Press. If the token remains in that place for more than 50 milliseconds (condition [50] in the transition called validated), that transition automatically fires, setting a token in place Validated. In that state, when the user removes the finger from the touch screen, the event toucheventf_up is received, the transition called toucheventf_up_1 is fired, triggering the event long_press that can then be interpreted as a command by the application. Right after touching the device and before the 50 milliseconds have elapsed, the user can either move the finger on the device or remove it. In both cases the interaction comes back to the initial state and no long_press event is triggered (this behavior is modeled by transitions toucheventf_up_2 and toucheventf_move). These are the two means offered to the user in order to cancel the command.
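For readers less familiar with Petri nets, the nominal behavior of Figure 3 can be approximated by the minimal state-machine sketch below (our own approximation, not the ICO model itself). State names mirror the places Idle, Long_Press and Validated, the 50 ms value comes from the [50] condition above, and the timed transition is checked lazily on the next incoming event rather than by an autonomous timer:

import time

# Minimal, illustrative approximation of the nominal tap-and-hold of Figure 3.
HOLD_THRESHOLD_S = 0.050  # 50 milliseconds, as in the [50] condition


class NominalTapAndHold:
    def __init__(self):
        self.state = "Idle"
        self.t_down = None

    def touch_down(self):
        if self.state == "Idle":
            self.state = "Long_Press"
            self.t_down = time.monotonic()

    def _check_timer(self):
        # Equivalent of the timed "validated" transition firing after 50 ms.
        if self.state == "Long_Press" and time.monotonic() - self.t_down >= HOLD_THRESHOLD_S:
            self.state = "Validated"

    def touch_move(self):
        self._check_timer()
        if self.state == "Long_Press":   # any move before 50 ms cancels the gesture
            self.state = "Idle"

    def touch_up(self):
        self._check_timer()
        if self.state == "Validated":    # lift after 50 ms: long_press is triggered
            self.state = "Idle"
            return "long_press"
        self.state = "Idle"              # lift before 50 ms: the command is cancelled
        return None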

Non-nominal cases analysis and modelling

A non-nominal case is a flow of user events that does not lead to the completion of the intended interaction. This third step consists in analyzing possible user input to determine non-nominal cases during the interaction, which can occur due to interaction mistakes (as presented above) or due to conflicts within the detection process when several interactions are competing (a possible interference with the simple tap is presented in Figure 4).

Figure 4. Behavioral model of the simple tap

As a user interface is likely to offer several gestures, it is important to carefully define their behavior so that they do not interfere. For instance, the interface should allow the coexistence of both the tap and the tap-and-hold, but if the tap-and-hold is detected too early, a simple tap will become impossible to perform.

Figure 6 - Graphical description of a long press

Having two independent models such as the ones in Figure 3 and Figure 4 raises additional problems related to the graphical feedback that the interaction technique has to provide in order to inform the user about the progression of the interaction. Two models would solve the problem related to the difficulty of triggering the tap event, but this would produce interfering feedback (as both models would have to provide feedback at the same time even though only either tap or tap-and-hold will be triggered). To avoid unwanted fugitive feedback in this case, our design proposes to trigger the tap-and-hold feedback only after a certain delay. The corresponding specification is presented in Figure 6.

The lower part of the figure represents the feedback over time. The figure shows that this graphical feedback only starts after a duration t1 (after the user’s finger has come into contact with the touch device). If all non-nominal cases, timing and graphical constraints have not been covered during the analysis phase, the gesture behavioral model is adapted in order to refine it and make the description comprehensive. The resulting model, corresponding to our last iteration, is presented in Figure 8.

Figure 5 and Figure 7 present two different scenarios covered by our model. In the first scenario, the pilot presses on the device and after a few milliseconds the standard feedback appears. As long as the pilot holds his finger on the touchscreen, the inner radius of the ring decreases until the ring becomes a disk. When the finger moves a little, the ring becomes orange. When the tap_and_hold is completed, a green disk appears under the pilot’s finger. When the pilot removes his finger from the device, the tap_and_hold event is generated and forwarded to the upper applicative levels.

In the second scenario, after the graphical feedback has appeared, the pilot moves the finger far away from the position where the initial press has occurred and the interaction is missed.

Allowing the use of high-level events such as the one triggered by the tap-and-hold interaction technique requires the use of a transducer to translate low-level touchscreen events into higher-level events. There are three main low-level touchscreen events (down, up and move) produced when the finger starts or stops touching, or moves on, the screen. These low-level events must be converted into a set of five higher-level events corresponding to the different steps while using the tap-and-hold interaction technique (succeed when the gesture detection is completed, sliding while the finger moves within the correct range, too early when the finger stops touching the screen before the correct delay, outside when the finger is too far from the initial contact point, and missed when the finger stops touching too far from the initial point).
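The sketch below illustrates, in a deliberately simplified form (our own code, hypothetical names, a single waiting phase whereas the model of Figure 8 chains two, and placeholder threshold values standing in for the tunable parameters of Table 1), how such a transducer could map the three low-level events onto the five higher-level ones:

import math
import time

# Illustrative transducer sketch: low-level events (down, up, move) are turned
# into the five high-level tap-and-hold events (succeed, sliding, too_early,
# outside, missed). Simplified to a single waiting phase; thresholds are
# placeholders for the tunable parameters of Table 1.
HOLD_DELAY_S = 0.5     # placeholder for the overall hold delay
MOVE_RANGE_PX = 20.0   # placeholder for R_max


class TapAndHoldTransducer:
    def __init__(self, emit):
        self.emit = emit          # callback receiving the high-level events
        self.state = "Idle"       # Idle, Waiting, Out, Validated
        self.t0 = None
        self.origin = None

    def _elapsed(self):
        return time.monotonic() - self.t0

    def down(self, x, y):
        if self.state == "Idle":
            self.state, self.t0, self.origin = "Waiting", time.monotonic(), (x, y)

    def move(self, x, y):
        if self.state not in ("Waiting", "Validated"):
            return
        if self.state == "Waiting" and self._elapsed() >= HOLD_DELAY_S:
            self.state = "Validated"
        if math.dist(self.origin, (x, y)) < MOVE_RANGE_PX:
            if self.state == "Waiting":
                self.emit("sliding")      # short move, detection still possible
        else:
            self.state = "Out"
            self.emit("outside")          # finger too far from the initial contact

    def up(self, x, y):
        if self.state == "Validated" or (self.state == "Waiting" and self._elapsed() >= HOLD_DELAY_S):
            self.emit("succeed")          # lift after the delay: gesture detected
        elif self.state == "Waiting":
            self.emit("too_early")        # released before the delay
        elif self.state == "Out":
            self.emit("missed")           # released too far from the initial point
        self.state, self.t0, self.origin = "Idle", None, None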

Figure 5. First scenario (success) refining the behavior and describing the graphical feedback

Figure 7. Second scenario (failure) refining the behavior and describing the graphical feedback

Figure 8 presents the behavior of such a transducer using the ICO formal description technique; it is the result of the last iteration within our design process.

In addition to the behavior of the detection of the correct gesture, such an interaction requires some graphical feedback to keep the user aware of the gesture detection. As presented in Figure 5 and Figure 7, several feedbacks are provided to make explicit the different phases of the detection. In an ICO specification, these feedbacks correspond to the rendering related to token movements within Petri net places. As explained at the beginning of this section, no rendering is performed until the first step is reached, to avoid unwanted feedback, meaning that there is no rendering associated with the places Idle, Delay, Sliding_1 and Out_1. The feedback of the interaction technique only occurs with the rendering associated with the four places Tap_and_hold (beginning of the progress animation), Sliding_2 (orange progress animation), Out_2 (red circle) and Validated (green circle).
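Seen from the implementation side, such a rendering function boils down to a mapping from marked places to feedback. The sketch below is our own illustration of that mapping (the place names are taken from the model above; the drawing primitives are placeholders, not an actual CDS API):

# Sketch of a rendering function as a place-to-feedback mapping. Place names
# come from the transducer model; drawing primitives are placeholders.
FEEDBACK_BY_PLACE = {
    "Idle": None,                           # no rendering before the first step
    "Delay": None,
    "Sliding_1": None,
    "Out_1": None,
    "Tap_and_hold": "start_progress_ring",  # beginning of the progress animation
    "Sliding_2": "orange_progress_ring",    # short move within the correct range
    "Out_2": "red_circle",                  # finger outside the correct range
    "Validated": "green_disk",              # gesture detected, awaiting finger lift
}


def render(marked_places, draw):
    """Call the drawing primitive associated with each marked place, if any."""
    for place in marked_places:
        feedback = FEEDBACK_BY_PLACE.get(place)
        if feedback is not None:
            draw(feedback)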

Interaction tuning

A distinction is to be made between the validity of the behavioral model and the tuning of this model. In the previous phases of the process, the objective is to define precisely the behavior of the model and both to identify and define a set of parameters in the models that might require further tuning to produce smooth interactions. At that stage we consider that the behavior will not be altered, but only some of its parameters (such as, for instance, the 50 milliseconds mentioned above).

Based on the gesture modeling and specification, the gesture model can be tuned using the parameters identified during the previous steps. For the tap-and-hold touch interaction, the set of parameters that have been identified is summarized in Table 1.

When the transducer is in its initial state (represented by the place Idle at the center of the Petri net) it may receive the low-level event toucheventf_down (handled by the transition toucheventf_down_1). When it is received, a token is put in the place Delay, representing the beginning of the first waiting phase until the first time threshold is reached. While waiting, two low-level events may occur, leading to three cases:

• The toucheventf_up event (handled by the transition toucheventf_up_1) leads to aborting the tap-and-hold, as the touch has been released too early (before the threshold is reached), triggering the high-level event TapAndHold_early and making the Petri net go back to its initial state.

• The toucheventf_move event (handled by the transition toucheventf_move_1) leads to triggering the high-level event TapAndHold_sliding, meaning that a short move (within the correct range) has been detected but the tap-and-hold interaction is still possible (a token is then put in the place Sliding_1). As a recursive definition, when a token is in the place Sliding_1, the Petri net behaves in the same way as when a token is in the place Delay:
  o While waiting to reach the first time threshold, any move within the correct range does not modify the state of the Petri net (firing of transition toucheventf_move_2, with respect to its precondition “P0.distance(P) < threshold1”, and triggering the TapAndHold_sliding event).
  o Within this delay, if the finger moves outside the correct range (firing of transition toucheventf_move_3 and triggering the TapAndHold_out event) or a premature low-level event toucheventf_up occurs (firing of the transition toucheventf_up_2 and triggering TapAndHold_early), the tap-and-hold detection is aborted (in the first case a token is put in place Out_1, waiting for the touch to be released, and in the second case the Petri net returns to its initial state).
  o While a token is in the place Out_1, toucheventf_move or toucheventf_up events may occur. In the first case, the state of the Petri net does not change (firing of toucheventf_move_4) and in the second case (firing of the transition toucheventf_up_3) the high-level event TapAndHold_missed is triggered, while the Petri net returns to its initial state.

• If no event occurs within the current delay (firing of transitions Timer_1_1 or Timer_1_2), the beginning of the tap-and-hold interaction technique has been detected, leading to putting a token in the place Tap_and_hold.

When a token is in the place Tap_and_hold, a second step of the detection begins, with the same behavior as the first one (waiting for the second threshold to be reached, while moves within the correct range are allowed). The behavior remains exactly the same and is not described due to space constraints.

If the corresponding token stays for the correct delay in places Tap_and_hold and Sliding_2, the tap-and-hold gesture is validated (firing of transition Timer_2_1 or Timer_2_2) and a token is put in place Validated, making it possible to trigger the event TapAndHold_succeed when the next toucheventf_up event occurs (firing of transition toucheventf_up_6), corresponding to the successful detection of the interaction technique.

Figure 8 - Behavior of the touchscreen transducer

Table 1 – Tap-and-hold model's parameters

Element          Type       Unit     Value
R_max            Distance   Pixels   tunable
Δt               Time       ms       tunable
Δt1              Time       ms       tunable
Returned value   Position   Pixel²   Lift
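Keeping these parameters outside of the behavioral model is what makes tuning possible without modifying the Petri net itself. A minimal sketch of such an externalized parameter set is given below (our own illustration; the numeric values are placeholders we chose, since the paper deliberately leaves them tunable):

from dataclasses import dataclass


# Placeholder values only: the paper leaves these parameters tunable, so the
# numbers below are assumptions for illustration.
@dataclass(frozen=True)
class TapAndHoldParameters:
    r_max_px: float = 20.0   # R_max: maximum finger drift, in pixels
    dt_ms: int = 500         # Δt: delay before the feedback starts
    dt1_ms: int = 1000       # Δt1: delay before the gesture is validated


params = TapAndHoldParameters()                                  # default values
tuned = TapAndHoldParameters(r_max_px=30.0, dt_ms=300, dt1_ms=800)  # a tuning trial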

From Design to Deployment

With the model conversion and implementation phase, this process addresses the following issues: compatibility with existing/future hardware/software environments, and the link with suppliers and certification authorities. We have explicitly taken this activity into account in our process, as it is useless to design interactions for the cockpit that will never be implementable in that context and will never go through certification. Our formal description technique provides additional benefits such as property verification [27]; this is not detailed further in the current paper due to space constraints. However, as [29] demonstrated, the process is fully compliant with the safety requirements for cockpit HMI. In addition, the transformation of ICO models into SCADE Suite descriptions is under way; the SCADE code generator is DO-178B [8] DAL-A certified, which allows implementing the models in the CDS easily.

CASE STUDY

Figure 9 - A tactile Navigation Display

The context of the case study is the development of a new HMI for an existing User Application (UA) of the cockpit. We have considered the Navigation Display (ND) application of the A350. On the display configuration of Figure 9, the NDs are circled in orange. The NDs display the horizontal situation of the aircraft on a map; the possible information displayed includes weather, traffic information, etc. The Flight Management System (FMS) also sends to the NDs information regarding the aircraft flight plan and flying mode. The NDs then display the various graphical constructions (routes, waypoints …) according to the FMS data. Figure 9 represents an ND of an aircraft flying in HEADING mode (the pilots are commanding the aircraft’s heading). The displays in the A350 are not tactile, but we based our work on plausible scenarios such as: “The pilots have just received a message from an air traffic controller for a heading modification. The flying pilot needs to execute the order and wants to modify the current heading of the aircraft using the ND touchscreen.”

The right side of Figure 9 represents the symbols used for the heading mode on the ND. The circled symbol corresponds to the heading selected by the pilots. In order to change the selected heading value, the tap-and-hold is used to select the current heading symbol directly on the touch screen.
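As a purely illustrative note on how the high-level event could reach the application level in this scenario, a hypothetical binding between the transducer’s succeed event and a heading-symbol selection handler might look as follows (all names on the ND/application side are ours, not ARINC 661 or A350 identifiers):

# Hypothetical wiring of the tap-and-hold high-level event to a heading-symbol
# selection handler on the ND; all application-side names are ours.
selection = {"heading_symbol": None}


def on_tap_and_hold_succeed(symbol_id):
    # Select the heading symbol touched by the pilot; the heading value itself
    # would then be edited through the subsequent interaction.
    selection["heading_symbol"] = symbol_id
    print(f"Heading symbol {symbol_id} selected for modification")


# e.g. dispatched when the transducer emits "succeed" over the heading symbol:
on_tap_and_hold_succeed("selected_heading_symbol")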

CONCLUSIONS AND PERSPECTIVES

The paper has presented the use of formal description techniques for the specification and prototyping of touch-based interfaces. Beyond the design issues that have been presented in this paper, more technical ones might have a strong impact on the deployability of such interfaces in the cockpits of future aircraft. Indeed, while ARINC 661 [2] describes a client-server communication protocol which was developed to facilitate and reduce the cost of implementing WIMP HMI in cockpits, the use of another modality, such as multi-touch, raises the question of the compatibility of the protocol. Work needs to be done to determine whether an extension of the current A661 is enough or whether the standard itself has to be modified. Indeed, in the current architecture, the response time between a CDS request and the UA answer is too long to provide usable multi-touch interactions, and the interaction feedback would not be displayed fast enough. In order to mitigate this problem, more responsibilities may be given to the user interface server itself. Currently, the conversion from ICO models to SCADE Suite is done manually. Future work will consist in developing an automated conversion tool to support moving from the design/specification level to implementation.

ACKNOWLEDGMENTS

This work is partly funded by Airbus under the contract CIFRE PBO D08028747-788/2008 and by R&T CNES (National Space Studies Center) Tortuga R-S08/BS-0003-029.

REFERENCES
1. Apple Corp. iOS Human Interface Guidelines. http://developer.apple.com/library/ios/#documentation/UserExperience/Conceptual/MobileHIG. Date of access: 04/03/2011.
2. ARINC 661-4, prepared by Airlines Electronic Engineering Committee. Cockpit Display System Interfaces to User Systems. ARINC Specification 661-4 (2010).
3. Barboni E., Conversy S., Navarre D. & Palanque P. Model-Based Engineering of Widgets, User Applications and Servers Compliant with ARINC 661 Specification. 13th conf. on Design Specification and Verification of Interactive Systems (DSVIS 2006), LNCS, Springer Verlag, 25-38.
4. Bastide R., Navarre D., Palanque P., Schyn A. & Dragicevic P. A Model-Based Approach for Real-Time Embedded Multimodal Systems in Military Aircrafts. Sixth International Conference on Multimodal Interfaces (ICMI '04), October 14-15, 2004, USA, ACM Press.
5. Benko H., Wilson A. & Baudisch P. Precise selection techniques for multi-touch screens. Proc. of the SIGCHI conference on Human Factors in Computing Systems (CHI '06), ACM, New York, NY, USA, 1263-1272.
6. Buxton W. 1983. Lexical and pragmatic considerations of input structures. SIGGRAPH Comput. Graph. 17, 1 (January 1983), 31-37.
7. Chouvardas V. G., Miliou A. N. & Hatalis M. K. 2005. Tactile display applications: A state of the art survey. Proc. of the 2nd Balkan Conference in Informatics, 290-303.
8. DO-178B Software Considerations in Airborne Systems and Equipment Certification. Radio Technical Commission for Aeronautics (RTCA) / EUROCAE, 1992.
9. Foley J. D., Wallace V. L. & Chan P. The Human Factors of Computer Graphics Interaction Techniques. IEEE Computer Graphics and Applications 4, no. 11 (1984), 13-48.
10. Frisch M., Heydekorn J. & Dachselt R. Investigating multi-touch and pen gestures for diagram editing on interactive surfaces. ACM Int. Conf. on Interactive Tabletops and Surfaces (ITS '09), ACM, New York, NY, USA, 149-156.
11. Hesselmann T., Boll S. & Heuten W. SCIVA: designing applications for surface computers. 3rd ACM symp. on Engineering Interactive Computing Systems (EICS '11), ACM, New York, NY, USA, 191-196.
12. Hinckley K., Yatani K., Pahud M., Coddington N., Rodenhouse J., Wilson A., Benko H. & Buxton B. 2010. Pen + touch = new tools. 23rd annual ACM symp. on User Interface Software and Technology (UIST '10), ACM, 27-36.
13. IBM (1989). Common User Access: Advanced Interface Design Guide. IBM, SC26-4582-0.
14. Kammer D., Wojdziak J., Keck M., Groh R. & Taranko S. Towards a formalization of multi-touch gestures. ACM Int. Conf. on Interactive Tabletops and Surfaces (ITS '10), ACM, New York, NY, USA, 49-58.
15. Kin K., Miller T., Bollensdorff B., DeRose T., Hartmann B. & Agrawala M. Eden: a professional multitouch tool for constructing virtual organic environments. Proc. of conf. on Human Factors in Computing Systems (CHI '11), ACM, USA, 1343-1352.
16. Lai-Chong Law E., Roto V., Hassenzahl M., Vermeeren A. & Kort J. Understanding, scoping and defining user experience: a survey approach. 27th international conference on Human Factors in Computing Systems (CHI '09), ACM, New York, NY, USA, 719-728.
17. Lee S. & Zhai S. The performance of touch screen soft buttons. Proc. of the 27th international conference on Human Factors in Computing Systems (CHI '09), ACM, New York, NY, USA, 309-318.
18. Lipscomb J. S. & Pique M. E. Analog Input Device Physical Characteristics. SIGCHI Bull. 25, no. 3 (1993), 40-45.
19. Mackinlay J. D., Card S. K. & Robertson G. G. A Semantic Analysis of the Design Space of Input Devices. Human-Computer Interaction, Vol. 5, Lawrence Erlbaum (1990), 145-190.
20. Marquardt N., Jota R., Greenberg S. & Jorge J. The continuous interaction space: interaction techniques unifying touch and gesture on and above a digital surface. 13th IFIP TC 13 international conference on Human-Computer Interaction, Springer-Verlag, 461-476.
21. McDermid J. & Ripken K. 1983. Life cycle support in the Ada environment. Ada Lett. III, 1 (July 1983), 57-62.
22. Microsoft Corporation. Microsoft Surface User Experience Guidelines. Available on MSDNAA, 2009.
23. Navarre D., Palanque P., Ladry J-F. & Barboni E. ICOs: A model-based user interface description technique dedicated to interactive systems addressing usability, reliability and scalability. ACM Trans. Comput.-Hum. Interact. 16(4), 18:1-18:56, 2009.
24. Nielsen J. (2010). iPad Usability: First Findings From User Testing. Jakob Nielsen's Alertbox, April 26, 2010.
25. Norman D. A. (2010). Natural User Interfaces Are Not Natural. Interactions 17, no. 3 (May-June).
26. Palanque P., Barboni E., Martinie C., Navarre D. & Winckler M. A Tool-Supported Model-based Approach for Engineering Usability Evaluation of Interaction Techniques. ACM SIGCHI conference on Engineering Interactive Computing Systems (EICS 2011), Pisa, Italy, ACM DL.
27. Palanque P. & Bastide R. Verification of an Interactive Software by analysis of its formal specification. IFIP Human-Computer Interaction conference (Interact '95), Norway, 27-29 June 1995, 191-197.
28. Palanque P., Bernhaupt R., Navarre D., Ould M. & Winckler M. Supporting Usability Evaluation of Multimodal Man-Machine Interfaces for Space Ground Segment Applications Using Petri net Based Formal Specification. Ninth Int. Conference on Space Operations, Italy, June 18-22, 2006.
29. Palanque P., Barboni E., Martinie C., Navarre D. & Winckler M. A model-based approach for supporting engineering usability evaluation of interaction techniques. 3rd ACM SIGCHI symp. on Engineering Interactive Computing Systems (EICS '11), ACM, New York, NY, USA, 21-30.
30. Palanque P., Ladry J-F., Navarre D. & Barboni E. High-Fidelity Prototyping of Interactive Systems can be Formal too. 13th Int. Conf. on Human-Computer Interaction (HCI International 2009), LNCS, Springer.
31. Parker J. K., Mandryk R. L. & Inkpen K. (2006). Integrating Point and Touch for Interaction with Digital Tabletop Displays. IEEE Computer Graphics and Applications 26(5), 28-35.
32. Pedersen E. W. & Hornbæk K. Tangible bots: interaction with active tangibles in tabletop interfaces. Proc. of the 2011 annual conference on Human Factors in Computing Systems (CHI '11), ACM, New York, NY, USA, 2975-2984.
33. Song H., Benko H., Guimbretiere F., Izadi S., Cao X. & Hinckley K. Grips and gestures on a multi-touch pen. 2011 conf. on Human Factors in Computing Systems (CHI '11), ACM, USA, 1323-1332.
34. Sun Q., Fu C-W. & He Y. An interactive multi-touch sketching interface for diffusion curves. Proc. of the 2011 conf. on Human Factors in Computing Systems (CHI '11), ACM, New York, NY, USA, 1611-1614.
35. Tesone D. & Goodall J. Balancing Interactive Data Management of Massive Data with Situational Awareness through Smart Aggregation. IEEE Symp. on Visual Analytics Science and Technology (VAST '07), IEEE, 67-74.
36. van Dam A. 1997. Post-WIMP user interfaces. Commun. ACM 40, 2 (February 1997), 63-67.
37. Wang F. & Ren X. 2009. Empirical evaluation for finger input properties in multi-touch interaction. Proc. of the 27th international conference on Human Factors in Computing Systems (CHI '09), ACM, New York, NY, USA, 1063-1072.
38. Withana A., Kondo M., Kakehi G., Makino Y., Sugimoto M. & Inami M. ImpAct: enabling direct touch and manipulation for surface computing. 23rd annual ACM symposium on User Interface Software and Technology (UIST '10), ACM, New York, NY, USA, 411-412.