
Operationalizing the Integration of User Interaction Specifications in the Synthesis of Modeling Editors

Vasco Sousa, Université de Montréal, Canada

[email protected]

Eugene Syriani, Université de Montréal, Canada

[email protected]

Khady Fall, Université de Montréal, Canada

[email protected]

Abstract
A long-standing shortcoming in the automatic generation of domain-specific modeling (DSM) editors has been the lack of user experience, in particular, user interaction adapted to its user. The current practice relies solely on the abstract and concrete syntax of the language, restricting the user interaction with the editor to a set of generic interactions built into the tool. To improve the user experience with DSM editors, we propose to specify the different viewpoints of interactions (e.g., I/O devices, components of the interface, behavior of the editor), each modeled at the right level of abstraction for its expert user. The goal of this paper is to demonstrate the feasibility of the approach by anchoring the operational semantics of all these viewpoints in a Statecharts model that controls the DSM editor. We report on the complex transformation that takes as input the different viewpoints, expressed at distinct levels of abstraction, to produce a custom interaction with the DSM editor based on a RETE algorithm. Our implementation shows that we can produce correct and responsive results by emulating existing DSM editors.

CCS Concepts • Software and its engineering → Domain specific languages; Visual languages; Software configuration management and version control systems.

Keywords IDE generation, User experience, Domain-specific modeling

ACM Reference Format:
Vasco Sousa, Eugene Syriani, and Khady Fall. 2019. Operationalizing the Integration of User Interaction Specifications in the Synthesis of Modeling Editors. In Proceedings of the 12th ACM SIGPLAN International Conference on Software Language Engineering (SLE '19), October 20–22, 2019, Athens, Greece. ACM, New York, NY, USA, 14 pages. https://doi.org/10.1145/3357766.3359539

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

SLE '19, October 20–22, 2019, Athens, Greece
© 2019 Association for Computing Machinery.
ACM ISBN 978-1-4503-6981-7/19/10…$15.00
https://doi.org/10.1145/3357766.3359539

1 Introduction
It is common practice in model-driven engineering (MDE) to automatically generate modeling editors for domain-specific modeling (DSM) languages (DSL). All streamlined frameworks, such as those based on the Eclipse Modeling Framework (EMF) [36] (e.g., Sirius, Xtext), MetaEdit+ [23], and AToMPM [40], rely solely on the abstract and concrete syntax of the language to produce the editors. As such, the user interaction with the editor is restricted to a set of built-in generic interactions. These interactions are common to all DSM editors generated from the framework and often follow a drag-and-drop or click-popup interaction style. Along with these interactions, a fixed layout is also enforced: toolbars are always placed in the same position and populated with the elements of the DSL.

In the past few years, we have seen keynotes, vision papers, and panel discussions to promote the integration of User Experience (UX) design in DSM editors [1, 8, 35]. There have been some attempts to address interaction at the design level, like IFML [13]. However, such attempts are too specific to the target platform. For instance, IFML requires the definition of the interactions at the level of UML objects and the information on the specific system calls of the interaction, instead of staying at abstractions closer to the domain and UX practices. Our long-term goal is to tailor the interaction with the editor to the domain, to improve the UX.

Sousa et al. [35] coined the idea of a set of languages to model the different viewpoints of interactions. These allow the specification of multiple modes of interaction with different input/output (I/O) devices (e.g., mouse, touch), of custom editor layouts, and of the behavior of the editor, to deploy it on various platforms adapted to the domain. However, Sousa et al. have not provided a way to produce functional modeling editors from the different models specifying the interactions. The main challenge is that the viewpoints are specified in various languages, expressed at different levels of abstraction. Although the languages proposed are at an adequate level of abstraction for UX experts [34], an operational semantics that yields a sound and usable modeling editor is still missing. Therefore, in this paper, we propose to anchor the semantics of all the different aspects of the user interaction into an executable artifact that opens the interaction possibilities of a DSM editor to the needs of its target users.


This anchoring entails transforming all the viewpoint models of the DSM editor into a single executable model. To achieve this, we decided to operationalize the semantics of the user interaction using Statecharts [19], given their reactive and modal behavior. To tackle the complexity of the transformation and to avoid performance issues, our solution is based on a novel RETE algorithm to construct the statechart incrementally. In the following section, we present a running example to help illustrate the different aspects of our approach in a more practical manner. In Section 3, we briefly describe the required notions of the formalisms that specify the different viewpoints of the user interaction of a DSM editor, and define a process to synthesize such editors. In Section 4, we present our main contribution: how to operationally produce an executable statechart that controls the DSM editor from the different viewpoints of interactions. In Section 5, we outline how we implemented the generation process. In Section 6, we evaluate the performance of the generation process and demonstrate that we can produce DSM editors with user interactions similar to existing ones. In Section 7, we discuss challenges based on our experience with designing these editors. Finally, we overview related work in Section 8 and conclude in Section 9.

2 Running Example
As a running example, let us define a simple DSM editor to model a configuration of Conway's Game of Life (GoL) [14]. We assume DSM editors with a graphical user interface (GUI) and a visual canvas to manipulate models in their graphical concrete syntax. GoL consists of a cell grid. Each cell can be either dead or alive (blank and black, respectively, in our figures). The cells are subject to a series of rules that change their life status. In this example, we construct a DSM editor to model configurations of the cell grid. We consider that the operational semantic rules are specified and simulated using a model transformation.

If we were to synthesize a graphical GoL editor with common MDE frameworks, we would first create a metamodel of GoL. For instance, this metamodel would consist of a single class Cell with a boolean attribute alive, two integer attributes xPos and yPos for the coordinates of the cell, and an association neighbors to itself allowing up to eight neighbors. A possible graphical concrete syntax would represent a cell as a square that changes color based on the value of alive, with a geometrical constraint to snap cells according to their neighbors relation. From the metamodel and the concrete syntax definitions, we can generate a graphical GoL editor using common MDE frameworks. To create a GoL model, the user needs to manually instantiate each cell from the toolbar and set the alive property of the cells that should be alive.
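To make the example concrete, the following sketch renders such a metamodel as plain Python data. The names Cell, alive, xPos, yPos, and neighbors come from the description above; the snap-to-grid constraint of the concrete syntax is omitted. This is only an illustration, not the Ecore definition used by the tool.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Cell:
    """Abstract-syntax element of the GoL metamodel sketched above."""
    xPos: int
    yPos: int
    alive: bool = False
    neighbors: List["Cell"] = field(default_factory=list)  # up to eight neighbors

# A tiny model: a horizontal "blinker" of three alive cells.
blinker = [Cell(xPos=x, yPos=1, alive=True) for x in range(3)]
for cell in blinker:
    cell.neighbors = [other for other in blinker
                      if other is not cell and abs(other.xPos - cell.xPos) <= 1]
```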

Due to the simplicity of GoL, its editor should be reduced to its minimum, such as traditional GoL engines1, thus removing accidental complexity. Ideally, the editor should only provide a single language element Cell with the attribute alive set to TRUE, forgoing any language element selection toolbar or attribute view. Additionally, this editor should provide direct access to running the simulation.

3 Viewpoints of the user interaction
Our approach allows the designer to customize the interface of the editor, its behavior, and the I/O devices involved in the interaction. Therefore, we augment the artifacts required to generate a DSM editor with additional models, each representing a different viewpoint of the user interaction of the editor. On top of the abstract and concrete syntaxes of the DSM language, we add an interface model to define the elements of the GUI and their layout, an interaction model to define the behavior of the editor, and an event mapping model to map I/O events to a specific platform and implementation. We briefly introduce each model in what follows.

Figure 1. Desktop interface layers example

3.1 Interface model
The Interface modeling language is aimed at UX designers. It specifies what elements (visual and non-visual) the user can interact with, their representation, and their distribution in the interaction space. As such, this model takes its cues from approaches like sketching [5]. On top of providing designers the freedom to design the appearance of the editor, this model also affects user interaction. For example, in Figure 1, the play button used to start the simulation of the GoL model is placed at the top left of the screen for a desktop application. Similarly, the user interaction influences other properties of the interface model, such as the size of elements, their color, and other forms of feedback.

Figure 2 presents an excerpt of the metamodel of the interface modeling language. It is based on OMG's Diagram Definition standard [15]. The elements of the interface model are very different from DSL elements. Graphical elements are used to represent toolbars, buttons, icons, widgets, and any other shape in the graphical editor. In a desktop version of our example (Figure 1), we define one toolbar with two buttons. Given that there is only one element type in this DSL, there is no need for a palette and button to instantiate a cell. Instead, we leave cell instantiation as a default interaction via, for example, interacting with the canvas.

1 http://pmav.eu/stuff/javascript-game-of-life-v3.1.1/


Figure 2. Excerpt of the Interface metamodel

The canvas, in which the user performs the modeling actions, is typically a grid where DSL elements are instantiated in their concrete syntax. The size of the cells of the grid depends on the editor and the language for which it is designed. In our example, the canvas is set up so that a grid cell matches the size of a GoL cell, so as to provide the look and feel of traditional GoL editors. The canvas contains all instantiated DSM elements represented in their concrete syntax. Therefore, all interactions with them (e.g., selection, resize) are performed through the canvas. This allows us to treat the canvas and the DSM as a black box that connects the editor to any model-aware back-end.

We add the concept of layer to segment the interface by related elements. In visual representations, this also implies that the elements in one layer are rendered overlaying the elements of lower layers. In Figure 1, there is one layer for the canvas (L1) and one for the toolbar (L2). In this interface, the toolbar will always be rendered on top of the canvas and its elements. Interaction streams segment forms of interaction that are in different dimensions, e.g., screen elements, keyboard inputs, sound. In GoL, we add a light element in a separate interaction stream, which could be available on a mobile device or externally connected to the computer. For simplicity, we change the visual color of the icon to reflect the value of its RGB color attribute.

3.2 Interaction model
The Interaction modeling language allows UX designers to define the semantics of elements of the interface. Once the interface model and the abstract/concrete syntax of the DSL are defined, the user can tailor the user interaction of the DSM editor to the target users.

Figure 3. Interaction metamodel

As defined by its metamodel in Figure 3, an interaction model consists of a set of interaction rules, like the five needed for the GoL editor in Figure 4.

As commonly represented in UX using Trigger-Action rules [42], interaction rules represent declarative, context-dependent effects on the editor: when an event occurs and the interface or the DSM satisfies a specific condition, the rule produces an effect on either of them. Semantically, an interaction model takes its root from the model transformation paradigm and Event-Condition-Action (ECA) rules [11]. A rule has a precondition (condition) that describes the requirements on the state of the editor for the interaction to occur. The condition can comprise elements from the interface model (IntElement) or the DSL (LangElement). One of the elements in the condition is the context of the rule, represented in Figure 4 by a mouse arrow. It is used to determine the exact point of interaction. The postcondition of a rule is a set of effects that describe what actions to take on the interface or what operations specific to the DSL it should apply. Like in ECA, an interaction rule must be explicitly triggered by an event, typically as a result of the user using an input device. For example, in Figure 4, rule A specifies the creation of a cell: if the focus is on the canvas and we receive the select event, the rule instantiates an alive cell element on the canvas at the position in focus on the grid. Here, the LangElement refers to the Cell class in the GoL metamodel through its ref attribute. Thus, we can set the value of all of its attributes by reusing that class, such as alive=true. When a mapping from the metamodel to a concrete syntax is provided by the modeling back-end, we can reuse the concrete syntax definition from the DSL inside interaction rules.

Figure 4. Interaction rules for GoL

Rule B specifies the removal of a cell: when the context is an alive cell element and we receive the select event, the rule deletes that cell. An effect can create instances of the abstract syntax of the DSL, delete them, or update their attribute values. An effect can also modify properties of elements of the interface model (such as their visibility, position, or size). In the metamodel in Figure 3, IntElement reuses classes from the interface metamodel, as we do for LangElement. The ref_id attribute ensures that the interface element corresponds to an existing one in the interface model. An effect may also call an external operation, defined in the event mapping model, such as RUN in rule C to execute the model transformation that simulates the GoL model.
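To give a flavor of what such a rule looks like as data, the sketch below encodes rule A in a simplified dictionary form loosely inspired by the metamodel of Figure 3. The rule name and field names are illustrative and do not reproduce the tool's actual serialization.

```python
# Hypothetical, simplified encoding of interaction rule A ("create cell").
rule_A = {
    "name": "A: CS Add Instance",
    "event": "select",                         # essential action that triggers the rule
    "condition": [
        # IntElement: the context (focus) must be the canvas
        {"kind": "IntElement", "ref_id": "canvas", "context": True},
    ],
    "effects": [
        # LangElement: instantiate a Cell of the DSL with alive set to true
        {"kind": "LangElement", "ref": "Cell", "operation": "Create",
         "attributes": {"alive": True}},
    ],
}
```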

The running system can also internally signal events that trigger an interaction rule. For example, in rule D, the _finish event corresponds to an event that is raised when the execution of the RUN operation terminates. The rule then enables the play button and turns the notification light to green. Note how no context is needed when the event is internal.

The interaction model expresses interactions at a level of abstraction that is closer to the intent than to the implementation of the interaction within the DSM editor. For instance, rule C states that a button is pressed independently of whether it is a mouse click or a touch on the screen.

3.3 Event mapping model
Interaction rules rely on domain-specific events, called essential actions, to keep the interaction rules independent from the implementation and match the intent of the user. There are three types of essential actions. Input events represent an interaction of the user on the editor (e.g., pressing a button) or complex interactions using frameworks such as [33]. Operation events represent an operation to be performed on an interface element, often corresponding to a function call, e.g., running the simulation. Internal events represent the reaction of the system to an operation event, e.g., the termination of a function. An underscore precedes the name of internal events, like _finish.

Figure 5. Process to synthesize DSM editors

The event mapping model defines all the available essential actions and provides a matrix that translates each one to a system action, a platform-specific function offered by the implementation of the system. The mapping is left-total: all essential actions require a mapping. However, that mapping does not need to be unique, as the same essential action can be mapped to different system actions. This enables the interaction rules to be applicable regardless of the I/O device, or to support multiple I/O devices to accomplish the same interaction. For example, the event press can be mapped to both a touch and a left mouse click. Conversely, we can also map two different essential actions to the same system action. This is especially useful when the I/O device offers fewer interactions than the interaction model permits.
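As an illustration, the mapping can be pictured as a simple lookup table. The sketch below is hypothetical, and the system-action names (mouse.leftClick, touch.tap, backend.runSimulation, backend.onSimulationDone) are invented placeholders, not names defined by the paper.

```python
# Hypothetical event mapping: each essential action maps to one or more
# platform-specific system actions (left-total, but not necessarily one-to-one).
EVENT_MAPPING = {
    # input events
    "press":   ["mouse.leftClick", "touch.tap"],
    "select":  ["mouse.leftClick"],
    # operation events
    "RUN":     ["backend.runSimulation"],
    # internal events
    "_finish": ["backend.onSimulationDone"],
}

def to_system_actions(essential_action):
    """Resolve an essential action; the mapping must cover every action (left-total)."""
    targets = EVENT_MAPPING.get(essential_action, [])
    assert targets, f"missing mapping for essential action: {essential_action}"
    return targets
```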

3.4 Synthesis of DSM editor
Figure 5 illustrates the process to produce custom DSM editors. The generation process follows typical MDE automation techniques. In activity ➀, a code generator takes as input the mapping between the abstract and concrete syntaxes of the DSL, the interface model, and the event mapping model to produce the GUI of the editor. For example, this can be the HTML and Javascript code of the web-based front-end editor or the front-end code of a smartphone application. It also generates the function calls to the modeling back-end to retrieve and manipulate the abstract and concrete syntax DSM elements. These functions are encapsulated as an API provided by the canvas.


The challenge of the operationalization process resides in the generation of the controller code of the GUI that defines the custom user interaction in the DSM editor. We model this behavior with a Statecharts model, which serves as a semantic anchoring of the interaction model. Statecharts is a common formalism to control the behavior of GUIs [20]. Thanks to its event-driven and modal paradigm, it simplifies the development process for coding, debugging, and testing the behavior of GUIs. However, transforming the interface, interaction, and event mapping models to a statechart is a complex transformation that requires multiple steps.

In activity ➁, we generate an initial statechart based on the structure of the interface model. This is a straightforward model-to-model transformation that creates default states placed in parallel regions in the statechart for each layer of the interface model. This is followed by activity ➂, where we perform successive incremental refinements of the statechart to encode the semantics of every interaction rule. This is a complex in-place transformation. We must evaluate the applicability of each interaction rule as we are adding new states and transitions to the statechart by symbolically applying the rules. Then, with activity ➃, we serialize the statechart to be compatible with the implementation framework. A model-to-text transformation replaces all essential actions in the statechart with system actions, given the target platform, and serializes it to a standard state machine format (e.g., SCXML). Finally, in activity ➄, we compile the state machine into code that controls the behavior of the generated GUI of the editor using off-the-shelf state machine compilers.
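A minimal sketch of activity ➁ follows, assuming layers are given as a list of names and states are plain integers; the actual transformation is written in ETL (see Section 5).

```python
def generate_initial_statechart(interface_layers):
    """Activity 2 (sketch): one parallel region per layer, one default state each.

    Returns a dict mapping each region (layer name) to its list of state ids;
    the single state per region represents an editor that does not yet react
    to any interaction rule.
    """
    statechart = {}
    next_id = 1
    for layer in interface_layers:
        statechart[layer] = [next_id]   # the default state of this orthogonal region
        next_id += 1
    return statechart

# e.g. generate_initial_statechart(["L1", "L2", "L3"]) -> {"L1": [1], "L2": [2], "L3": [3]}
```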

4 Generating the user interaction controller

4.1 Mapping onto Statecharts
We produce a statechart to control when the effects of interaction rules are applied, according to the triggering event and the satisfaction of their condition. Thus, the statechart must encode all possible rule applications for every state of the DSM editor, i.e., all reachable configurations of the GUI elements specified by the interface model. We assume that the interface model is partitioned into distinct layers and that a single user interacts with the editor. Therefore, at any given time, the state of the GUI involves at most one configuration of the elements from each layer. Hence, the structure of the hierarchical statechart shall consist of one high-level AND-state containing one parallel region (a.k.a. orthogonal component [19]) per layer, as in Figure 6. Since we assume all graphical DSM editors have a canvas, there is always one layer dedicated to the canvas.

We define a statechart by SC = ⟨L, S, T, layer⟩, containing a set of layers L, a set of basic states S, and the function layer : S → L identifying the layer of each state. Transitions in T = ⟨S, S, Ev, ℘(A), g⟩ connect states within the same layer, triggered by an event in Ev. They perform a set of actions A and have a guard g as triggering condition.

Figure 6. The resulting statechart after refinements according to the five rules

States within a layer represent the state of all interface elements within that layer at a given point of interaction. For example, in GoL, layer L3 contains only the light. After several refinement iterations, there will be three states in the orthogonal component L3: one for every color and intensity of the light present in the interaction rules (see the bottom layer of Figure 6). Any statechart produced will have a layer corresponding to the modeling canvas (L1). In the scope of this work, we consider the canvas as a black box; thus we only need one state to represent its reachable configuration.

Transitions represent how the state of the interface elements can change according to the application of a rule. For example, because of rule E, each state in L3 has a transition to state 3, where the light is off. The event of the rule triggers each transition it corresponds to, press in this example. However, the light button is the context of rule E, meaning that we cannot take a press transition unless the corresponding states in L2 are active and the context (focus) is on the light. Therefore, we must add a guard on transitions to guarantee that we are in the required states in all layers referenced by the rule. The actions of a transition apply the effects of the interaction rule. So changes to interface elements are all encoded as actions on the transitions. For example, all transitions incoming to state 3 must set the light to RGB(0,0,0). Actions can also manipulate DSL elements, such as the creation of a cell in rule A. Any operation invoked in the effect of a rule is added to the actions of the transition, such as RUN.

We denote a configuration by config : S → ℘(ModifiableElement), mapping each state to a set ME of interface or language elements. It represents the reachable snapshot of the state of the editor at a given time. For (s, ME) ∈ config, s is the active state of the orthogonal component corresponding to layer(s). ME encodes the attribute values of every interface element in that layer corresponding to s, possibly with instances of the metamodel of the DSL.


The initial configuration represents the default states of each layer, i.e., the way they are set up at the startup of the editor. For GoL, the initial configuration is thus {(1, ME1), (2, ME2), (3, ME3)}, respectively representing an idle canvas, a toolbar with the play and light buttons enabled, and the light turned off.
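As a reading aid, the sketch below restates these definitions as plain data structures; the element snapshots are strings here, which is a simplification of the ModifiableElement sets used above, and the snapshot names are invented.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Set, Tuple

@dataclass
class Transition:
    source: int
    target: int
    event: str                      # triggering event in Ev
    guard: str = ""                 # e.g. "in(2) & focus=Play"
    actions: Tuple[str, ...] = ()   # effects applied when the transition fires

@dataclass
class Statechart:
    layers: Set[str]                                  # L
    layer_of: Dict[int, str]                          # encodes layer : S -> L (keys are S)
    transitions: List[Transition] = field(default_factory=list)  # T, within one layer each

# config : S -> P(ModifiableElement); element snapshots are plain strings here.
initial_config = {
    1: {"canvas:idle"},
    2: {"play:enabled", "lightButton:enabled"},
    3: {"light:off"},
}

sc = Statechart(layers={"L1", "L2", "L3"}, layer_of={1: "L1", 2: "L2", 3: "L3"})
```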

4.2 Challenge of incremental refinement
The initial statechart consists of one orthogonal component per layer, as defined in the interface model, each containing one state. As we have not considered any interaction rule yet, it represents an unresponsive editor. The following transformation refines the initial statechart by detecting which rules are applicable at any given state configuration. It applies each interaction rule incrementally, by adding new states and transitions that represent these interactions. Each step also gives us a new configuration representing the state of the editor resulting from the rule application. It is then, in turn, evaluated to detect which rules are now applicable. In this manner, after all five rules are processed, the statechart evolves to its final form as depicted in Figure 6.

One major challenge of this transformation is that we do not know a priori how many states and configurations will be evaluated. Therefore, we must consider all the states of the statechart in all configurations as potential candidates to apply every interaction rule. Suppose that we are now applying rule C in the GoL example. The transformation creates a new state and transition in the statechart that disables the play button, turns on the blue light, and runs the simulation. However, we now have to verify if this new state enables new rule applications, such as rule E to turn off the light.

This will lead to an explosion of the number of state configurations and require a considerable time and memory footprint for complex interaction models. However, typically each interaction rule only modifies a few elements, and thus it is not necessary to re-evaluate complete configurations every time. Ideally, we only evaluate the changes of the elements affected by a rule. Therefore, we need a transformation engine that allows us to control how much of a state configuration should be evaluated and combine that partial evaluation with previous evaluation results. Similar challenges have been identified in the model transformation community, which led to the rise of incremental transformations [44]. Inspired by these approaches, we define an incremental refinement procedure to construct the statechart, while reusing evaluations of previous state configurations.

4.3 Incremental rule evaluation
To address the re-evaluation of state configurations, we provide a RETE-based algorithm [12]. The algorithm takes a series of facts, in our case configurations, and evaluates what interaction rules to apply on these facts. The algorithm relies on a RETE network composed of ⟨A, B∨, B∧, E, elem, layer, rule⟩. Figure 7 shows the RETE network for GoL.

Figure 7. Structure of the RETE network for GoL

4.3.1 Construction of the network
Alpha nodes α ∈ A filter the facts to values required by interaction rules. Therefore, an α-node represents a modifiable (interface or language) element present in the condition or effects of a rule with a distinct set of attribute values. For instance, we will have three α-nodes for the light, one for each color/intensity defined in the rules. The function elem : A → ME assigns a modifiable element to each node.

Beta nodes aggregate the results from their incoming nodes. We distinguish between two sets of beta nodes. A β∨ ∈ B∨ ⊂ B node groups α-nodes by layer, and a β∧ ∈ B∧ ⊂ B node represents each interaction rule. Consequently, layer : B∨ → L assigns a layer to a β∨-node and rule : B∧ → IR assigns an interaction rule to a β∧-node. Edges in E connect nodes according to their structure. An edge connects α to β∨ if elem(α) pertains to layer(β∨). An edge connects β∨ to β∧ if there is an element in layer(β∨) involved in rule(β∧). To make sure that the whole condition of an interaction rule is satisfied, we also connect α to β∧ if elem(α) is part of the condition of rule(β∧).

For example, to encode rule C in the network, we need three α-nodes: one for an enabled play button, one for a disabled play button, and one for a blue light. The play buttons belong to layer L2, so their α-nodes are connected to the same β∨, whereas the light belongs to L3. Furthermore, the condition of the rule is to have the button enabled. Thus, we have an edge from its α-node to the β∧-node corresponding to rule C. We also connect edges from the β∨-nodes for L2 and L3 to that β∧-node.
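The following sketch illustrates this construction, assuming each modifiable element is identified by a (layer, snapshot) pair and rules are plain dictionaries as in the earlier rule sketch; it is not the EOL implementation described in Section 5.

```python
from collections import defaultdict

def build_network(rules):
    """Sketch of the RETE network construction described above.

    Each rule is assumed to be a dict with "name", "condition" and "effects",
    where every element carries a "layer" and a "snapshot" (its attribute values).
    """
    alphas = {}                      # (layer, snapshot) -> alpha node id
    alpha_to_or = {}                 # alpha id -> layer of its beta-or node
    or_to_and = defaultdict(set)     # layer (beta-or) -> names of connected beta-and nodes
    alpha_to_and = defaultdict(set)  # alpha id -> rules whose condition contains it

    for rule in rules:
        for part in ("condition", "effects"):
            for e in rule[part]:
                key = (e["layer"], e["snapshot"])
                a = alphas.setdefault(key, len(alphas))
                alpha_to_or[a] = e["layer"]               # alpha -> beta-or of its layer
                or_to_and[e["layer"]].add(rule["name"])   # beta-or -> beta-and of the rule
                if part == "condition":
                    alpha_to_and[a].add(rule["name"])     # alpha -> beta-and (condition edge)
    return alphas, alpha_to_or, or_to_and, alpha_to_and
```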

4.3.2 Evaluation of the network
The purpose of the RETE network is to detect when an interaction rule is applicable during the refinement process. Instead of naively evaluating the applicability of each rule on every possible configuration, the network should tell us if a given configuration induces a change that triggers the application of a rule.


To do so, we define a token function ⟨A → bool × bool, B → [bool]in⟩ for each type of node. This function assigns two booleans to each α-node. The first boolean is the presence token. It is set to TRUE only if, for a given configuration config, there exists (s, ME) ∈ config such that elem(α) ∈ ME. The second boolean is the change token, representing whether the presence token has changed. Each beta node is assigned a boolean array of the size of its incoming edges in, to identify the provenance of each token. Initially, all presence tokens are set to FALSE throughout the network, but the change tokens are set to TRUE to ensure that all rules will be considered.

To determine which rules are applicable from a given configuration, we propagate tokens in three stages: first from α-nodes, then from β∨-nodes, and finally from β∧-nodes. If modified, the presence token of an α-node is propagated via the edge to the β∧-node. This contributes to deciding whether the condition of rule(β∧) is satisfied. If modified, the change token of an α-node is propagated via the edge to the β∨-node. This indicates that the effect of a rule leading to config has changed elem(α), which may require re-evaluating any rule it is involved in. When the configuration is processed, and all tokens from α-nodes have propagated to the beta nodes, we evaluate the β∨-nodes. Since it suffices that one element of the layer involved in an interaction rule is changed to potentially trigger it, β∨-nodes perform the disjunction of all the tokens they hold (newly received and from the previous iteration). We propagate the result to the β∧-nodes they are connected to. Finally, a β∧-node performs the conjunction of all the tokens it holds from incoming α-nodes to verify if the condition of its rule is satisfied. However, a change may have happened in a layer that is not involved in the condition of the rule. Therefore, to intersect all layers involved in the rule, β∧-nodes perform the conjunction of all the tokens they hold, including those from incoming β∨-nodes.

This strategy is particularly efficient when we have a large set of facts that incur minimal changes with each execution of an interaction rule. To illustrate the evaluation of the network in Figure 7, suppose it receives a configuration encoding that the play button is disabled (e.g., after rule C was applied). Note that there are two α-nodes assigned to this button, for when it is enabled and disabled. The configuration entails the reevaluation of these α-nodes, flipping their presence token in relation to the previous configuration. Consequently, the change token of play_disabled is set to TRUE and propagated to the β∧-node of rule D. Similarly, we propagate the FALSE value token of play_enabled. The presence token of the disabled node is propagated to the β∨-node corresponding to layer L2. Through disjunction, it propagates TRUE to the β∧-nodes. Finally, we perform the conjunction at the β∧-nodes of rules C, D, and E. The result is that, from the given configuration, rule D is applicable but not rule C.
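Continuing the previous sketch, the three propagation stages can be approximated as follows. The per-edge token arrays of the beta nodes are collapsed into simple sets, so this is an illustrative approximation of the bookkeeping rather than the exact algorithm.

```python
def evaluate(config, tokens, alphas, or_to_and, alpha_to_and, rules):
    """Return the rules applicable from `config`, plus the updated alpha tokens."""
    present = set().union(*config.values()) if config else set()
    new_tokens, changed_layers = {}, set()

    # Stage 1: alpha nodes recompute their (presence, change) tokens.
    for (layer, snapshot), a in alphas.items():
        presence = snapshot in present
        change = presence != tokens.get(a, (False, True))[0]
        new_tokens[a] = (presence, change)
        if change:
            changed_layers.add(layer)        # change token flows to the layer's beta-or node

    # Stages 2 and 3: beta-or disjunction per layer, then beta-and conjunction per rule.
    applicable = []
    for rule in rules:
        name = rule["name"]
        cond_ok = all(new_tokens[a][0]
                      for a, rs in alpha_to_and.items() if name in rs)      # condition alphas
        layers_ok = all(layer in changed_layers
                        for layer, rs in or_to_and.items() if name in rs)   # involved layers changed
        if cond_ok and layers_ok:
            applicable.append(name)
    return applicable, new_tokens
```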

4.4 Incremental statechart refinement
Algorithm 1 defines the procedure to refine the initial statechart incrementally as new configurations arise from the application of the rules. It first creates a RETE network from the set of interaction rules and initializes its token function, as described in Section 4.3.1. Then, on line 7, it evaluates the network as described in Section 4.3.2 by propagating the tokens according to the current configuration. This gives us the set of applicable rules from this configuration and the new token function. It then applies each rule to refine the statechart (explained in Section 4.5), which produces a potentially new configuration. It is crucial to couple new configurations with the token assignments that led to them so that, when it is processed, each new configuration will be evaluated on that specific token assignment. The procedure continues processing configurations until all reachable configurations have been processed.

Algorithm 1: RefineStatechart(SC, config, interactions)
Input: the initial statechart, the initial configuration, and the interaction set

 1  network ← GenerateNetwork(interactions)
 2  token ← network.initToken()
 3  evalQueue ← {⟨config, token⟩} ; configHist ← ∅
 4  while evalQueue ≠ ∅ do
 5      ⟨config, token⟩ ← evalQueue.dequeue()
 6      (AppRules, new_token) ← network.eval(config, token)
 7      for rule ∈ AppRules do                     // applicable rules
 8          new_config ← ApplyRule(SC, rule, config, configHist)
 9          if new_config ∉ configHist then
10              evalQueue.enqueue(⟨new_config, new_token⟩)
11              configHist ← configHist ∪ {new_config}

The procedure keeps track of the configurations already processed; therefore, each configuration is processed once. Hence, at any point in time, evalQueue will only contain new configurations. Since the RETE network evaluations drive the procedure, it limits the explosion of exploring all possible state combinations of the interface. In the worst case, a change in a modifiable element triggers the reevaluation of all α-nodes. Therefore, a strict upper bound of the state space of the RefineStatechart algorithm is |A|². In practice, this will be lower since many interface elements have mutually exclusive states, like buttons and lights.

4.5 Construction of the statechart
Algorithm 2 refines the statechart by applying the applicable rules on the current configuration provided by the evaluation of the RETE network. These refinements adhere to the refinement method and rules defined in [37].

Recall that a state s in the statechart represents a particular state of all interface elements of the layer corresponding to the orthogonal component containing s.


Algorithm 2: ApplyRule(SC, rule, config, configHist)
Input: a statechart, the interaction rule to be applied, the configuration to apply the rule on, and the configuration history
Output: a new configuration; also updates the statechart according to the rule

 1  L ← rule.getAllLayers()                          // from condition and effect
 2  ActiveStates ← {s | (s, ME) ∈ config ∧ layer(s) ∩ L ≠ ∅}
 3  config' ← clone(config)
 4  actions ← rule.effects.getOperationElems()
 5  event ← rule.getEventName()
 6  for s ∈ ActiveStates do
 7      guard ← ⟨ActiveStates \ {s}, rule.getContextName()⟩
 8      for e ∈ rule.effects.getModifElems(layer(s)) do
 9          actions ← actions ∪ {e}
10          config'[s].replace(e)
11      ME' ← config'[s]
12      target ← configHist.findState(ME', layer(s))
13      if target = NULL then
14          target ← SC.newState(layer(s))
15      SC.newTransition(s, target, event, guard, actions)
16      config[target] ← config'[s]
17      actions ← ∅
18  return config

On lines 1–2, we identify the set of active states from the configuration that is affected by the interaction rule. Thus, this set contains at most one state per orthogonal component. If there are language elements in the interaction rule, the canvas state is also part of the set so those elements can be treated in the same way as interface elements. Then, we compute the effects of the rule on the elements of these active states (lines 8–10). Afterward, we create a new transition starting from each active state to a state corresponding to the effects of the rule on the same layer (line 15), be it a new state or an existing state.

The transition is triggered by the event that corresponds to the event of the rule (line 6). The set of actions of a transition is defined by the attribute values of interface elements (line 9), the CRUD operations on DSL elements (line 9), and the external operations to invoke (line 4). The function getModifElems returns language elements (e.g., the deleted cell in rule B) or interface elements (e.g., the unlit light in rule E) of a specific layer in the effect of the rule. In rule C, the function getOperationElems returns the light with a blue color and the RUN operation with its parameters.

The condition and effects of a rule may involve elements from different layers, such as in rule C. Therefore, as explained in Section 4.1, we must synchronize the transitions across layers to satisfy the condition of the rule. On line 7, we see that a guard has two components: a state condition and a context condition. The state condition synchronizes transitions across layers. Suppose we are in the configuration {(1, ME1), (4, ME4), (3, ME3)} and receive a press event on the play button in Figure 6. If we did not have the state condition in(2) on the guard of the transition from 3 to 5, then the light could turn blue even when the play button is disabled. The context condition distinguishes between transitions sharing the same source state triggered by the same event. For example, if we are processing state 3, we will eventually apply rules C and E. Since their transitions rely on the same event and state condition, we must distinguish their triggers with a guard on the context of the rule. The function getContextName returns the value of ref_id if the context is an interface element, the type name when it is a DSL element, or null otherwise.
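As a small illustration, a guard of this form can be rendered into the textual guards shown in Figure 6; the rendering below is a plausible sketch, not the generator's actual output.

```python
def render_guard(active_states, context_name):
    """Render a two-part guard like "[in(2) & focus=Play]"."""
    parts = [f"in({s})" for s in sorted(active_states)]   # state condition: sync across layers
    if context_name:
        parts.append(f"focus={context_name}")              # context condition of the rule
    return "[" + " & ".join(parts) + "]" if parts else ""

# e.g. render_guard({2}, "Play") -> "[in(2) & focus=Play]"
```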

Along the way, we construct the new configuration resulting from the application of a rule, by modifying the previous configuration with the elements in the effect (line 10). If another rule already created the new configuration, then the statechart already has a state corresponding to it (line 11). Then, this state should be the target of the new transition. The function findState searches through the configuration history for a state corresponding to the set of modifiable elements in the new configuration within the layer of s. The refinement of the statechart ends when Algorithm 1 is completed, resulting in the statechart in Figure 6.

4.6 Adaptation to a specific platform
The serialization process involves two parts. We translate the final statechart into an executable format, such as SCXML, typically using a model-to-text transformation engine. Additionally, we must adapt the events and actions of every transition to be compatible with the target platform. Two APIs must be readily available for the platform: one to manipulate DSM elements and one to operate interface elements. The event mapping model defines the required translations. The modeling framework, such as EMF, provides the first API. Actions on transitions in the canvas layer are translated into the CRUD operations of the modeling framework API. The second API must be readily available for the I/O devices with which the user will interact. Actions on transitions are simply setters of interface elements. For example, if the light element in GoL is mapped to an RGB LED in Arduino, then the value of its color attribute is mapped to the setColor(r,g,b) function call with the right parameters. Also, the operation event RUN is replaced by calling the function corresponding to the execution of the simulation.
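The sketch below illustrates this rewriting step with a hypothetical action table; the action keys and the setColor and runSimulation targets stand in for the platform API and are not calls defined in the paper.

```python
# Hypothetical rewriting of essential actions into system actions before
# serializing the statechart (e.g., to SCXML).
PLATFORM_API = {
    "light:blue":  "setColor(0,0,255)",
    "light:green": "setColor(0,255,0)",
    "light:off":   "setColor(0,0,0)",
    "RUN":         "runSimulation()",
}

def adapt_actions(actions):
    """Replace each essential action by its platform-specific call."""
    return [PLATFORM_API.get(a, a) for a in actions]

# e.g. adapt_actions(["light:blue", "RUN"]) -> ["setColor(0,0,255)", "runSimulation()"]
```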

5 Implementation
To demonstrate the feasibility of our approach, we implemented a prototype as an Eclipse project available online2. We defined the input modeling languages in EMF using Ecore metamodels and deployed them as plugins.

2 https://github.com/geodes-sms/DSMEditorGenerator


We defined a custom, simple hierarchical Statecharts language where we enrich states with references to the interface elements they encapsulate and their values, to optimize configuration lookups at runtime. We implemented all transformations using the Epsilon tool suite [24] because it integrates different types of transformations seamlessly and allows us to run them in headless mode outside of Eclipse.

We use the declarative model-to-model transformation language ETL [25] for the transformation Generate Initial Statechart (see Figure 5). It is composed of a series of declarative patterns that translate each layer of the interface model into an orthogonal component with a single initial state. We then populate that state with the references to all the elements within that layer. Finally, we aggregate all orthogonal components inside a root state. We developed the complex transformation Refine Statechart in EOL [24]. It implements the algorithms presented in Section 4: constructing and evaluating the RETE network, and applying interaction rules. Here, we developed a temporary data structure to represent tokens and the configuration history. For the Serialize Statechart transformation, we used the template-based code generator EGL [30]. We transform the statechart into the SCCD [43] format, which is compatible with Harel's semantics [19]. Apart from specifying the serialization format in the templates, we convert essential actions to system actions according to the event mapping model (see Section 4.1) and discard references from states to interface elements.

The tool generates web-based graphical DSM editors. Therefore, we generate the GUI in HTML/CSS/Javascript. We add listeners specified in the event mapping model to raise the appropriate event in the statechart. For instance, we add a listener for a mouse left click event where, if the context is the play button, it raises pressPlay. We provide a Javascript API for the statechart to control the interaction with the GUI.

6 Evaluation
There are many aspects to evaluate in determining whether our approach to generate and execute DSM editors with custom interactions is appropriate. Since the main contribution of this paper is the generation process, we focus on the following two objectives: (1) Is our approach expressive enough to mimic user interactions in existing DSM editors? (2) Does the generation of the editor scale with the increasing complexity of user interactions? For (1), we qualitatively assess the user interaction with the generated editors compared to existing ones. For (2), we measure the time performance and memory usage of the transformation process with varying complexities of interaction models.

6.1 Reproducing existing editors
To evaluate that the declarative specification of interactions results in a correct interaction environment, we implemented three editors that replicate the main interaction aspects of existing editors. All three editors are available online2. Table 1 outlines the number of interaction rules, interface model elements, and layers required to generate each editor. It also depicts the size of the generated network and statechart.

Number of             GoL   Pacman   Music
Rules                   5       42      66
Interface elements      6       13      32
Layers                  3        3       4
α-nodes                 8       38      12
States                  6       17      23
Transitions            18      135     515

Table 1. Characteristics of each case study

6.1.1 GoL editor
The web-based GoL application1 allows one to define the initial alive cells by clicking on a grid and offers a button to run the simulation, as described in Section 2. There are other widgets available on the web application, such as buttons to create predefined cell patterns and statistics on the running simulation. We generated the GoL editor following the description throughout this paper, which concentrates on its essential interaction. The goal was to have a minimal editor capable of illustrating our approach, with which we can verify its correctness manually. To illustrate the ability to interact with multiple interaction streams, we added the light functionality to the editor.

6.1.2 Pacman editor
The goal of this case is to replicate a DSM editor generated inside a modeling language workbench. The Pacman editor present in AToMPM [38] (which has similar behavior to a Sirius editor in Eclipse) allows the modeler to define game configurations where Pacman navigates through grid nodes searching for food to eat, while ghosts try to kill him. The particularity of this editor is its emphasis on language elements. It requires a toolbar with the different DSL elements to instantiate and the ability to drag-and-drop elements on the canvas. As reported in Table 1, the statechart needed 17 states and 135 transitions, 21 of which are on the canvas state to manipulate language elements. The three layers correspond to the canvas, the toolbar, and the popup window to view the properties of language elements.

The main challenge, in this case, is to enable dragging DSM elements. Each element type (e.g., Pacman, ghost) required three interaction rules. One rule marks the element to be dragged, triggered by a mouseDown event. A second rule allows us to move the element on the canvas, triggered by a mouseMove event. A third rule fixes the element at a specific position upon receiving a mouseUp event. Furthermore, we required 30 interaction rules dedicated to selecting the DSL types on the toolbar, to make sure exactly one type is selected when creating an element on the canvas. This is due to interaction rules not being capable of sharing information.
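For illustration, the three drag rules for one element type could be written in the same simplified dictionary form as the earlier rule sketch. The dragged flag is one possible way to mark the element and is an assumption; the paper does not specify the marking mechanism, and the rule names are invented.

```python
# Hypothetical drag-and-drop rules for one element type (e.g., Pacman),
# in the simplified dictionary form of the rule sketch of Section 3.2.
drag_rules = [
    {"name": "markPacman", "event": "mouseDown",
     "condition": [{"kind": "LangElement", "ref": "Pacman", "context": True}],
     "effects":   [{"kind": "LangElement", "ref": "Pacman", "operation": "Update",
                    "attributes": {"dragged": True}}]},       # mark the element to be dragged
    {"name": "movePacman", "event": "mouseMove",
     "condition": [{"kind": "LangElement", "ref": "Pacman", "context": True,
                    "attributes": {"dragged": True}}],
     "effects":   [{"kind": "LangElement", "ref": "Pacman", "operation": "Update",
                    "attributes": {"pos": "focus"}}]},         # follow the pointer on the canvas
    {"name": "dropPacman", "event": "mouseUp",
     "condition": [{"kind": "LangElement", "ref": "Pacman", "context": True,
                    "attributes": {"dragged": True}}],
     "effects":   [{"kind": "LangElement", "ref": "Pacman", "operation": "Update",
                    "attributes": {"dragged": False}}]},       # fix the element at its position
]
```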



6.1.3 Music sheet editor
Software tools targeted at non-programmers involve various kinds of interactions [31]. For example, Flat3 allows musicians to compose music sheets by playing on a MIDI keyboard. To replicate some of its unique methods of interaction, we generated a music sheet editor where the user is presented with a series of staffs aligned with the gridded canvas. Via a MIDI or regular keyboard, the user can play a sound, which places a note next to the previous one on the canvas. The key pressed places the note on the corresponding position (e.g., Do, Re, Mi). The note length (e.g., whole, half, quarter, eighth) can be selected in the toolbar, or the user can switch on a timed mode. In timed mode, the duration of a key press determines the note length. The editor requires 26 interaction rules for the toolbar interactions, 28 for note input with the toolbar, and 32 for timed note input. The interface model consists of four layers: canvas, toolbar, cursor, and staff/ledger lines. The states in the staff layer have no transitions since the user cannot directly interact with the staffs.

3 https://flat.io/

The main challenge, in this case, is the extensive reliance on an input method where the user does not select points on the canvas. Therefore, we need to add a visual cursor and move it after placing each note. Another challenge is to select the right language element determined by time durations. To do so, we separate a keystroke input into a rule to mark the beginning and another to mark the end of a note. We must also keep track of the passage of time between the triggers of these rules.

6.2 Evaluation of the performance of the generation
From a performance point of view, the most important phase in the generation process of the DSM editor is the construction of the statechart. As described in Section 3.4, it requires three steps: generating the initial statechart, refining it, and serializing it. We report the time and memory consumption of the construction of the statechart.

6.2.1 Experiment setup
In Section 4, we showed that the number of α-nodes influences the construction and evaluation of the RETE network, as well as the refinement of the statechart. We hypothesize that it is highly correlated with the number of interaction rules. To evaluate this hypothesis, we performed a Monte Carlo simulation on a synthesized suite of interaction models containing from 1 up to 300 rules. The synthesis of these rules followed a uniform distribution for the following parameters. The condition and effect of each rule contained at most 10 modifiable elements, which corresponds to the order of magnitude we observed when mimicking the interaction of existing editors. We gave an equal probability for each element to be an interface or language element. We also gave a 60% chance for an element in the condition of a rule to match an element in the effect of another rule (like rules A and B in GoL) to provoke sequences of interactions. To ensure only the interaction model influences the results, we fixed the interface model to consist of 10 layers, each containing one interface element. We also fixed the metamodel of the DSL to contain 10 language elements. All the synthesized models and characteristics are available online2. We ran the experiments on a Windows 10 machine with an i7-4770 processor at 3.8GHz and 32GB of RAM, on Eclipse Oxygen with JDK 1.8 and the heap size set to 24GB. We measured execution times with Epsilon Profiler and collected memory usage with VisualVM.

3 https://flat.io/

Figure 8. Time and memory usage of the RefineStatechart procedure in seconds and gigabytes per number of rules. (The plot shows Create RETE time, Refine SC time, and Refine SC memory against the number of rules, from 0 to 300.)
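The following sketch illustrates how such a suite of synthesized interaction models could be produced under the stated parameters (uniform distributions, at most 10 modifiable elements per condition and effect, equal probability of interface vs. language elements, 60% chance of chaining a condition to a previous effect). The function and field names are hypothetical; this is not the actual generator used in the experiments.

```python
# Sketch of the rule-synthesis procedure described above (illustrative only).
import random

def synthesize_rules(n_rules, max_elements=10, chain_prob=0.6, seed=0):
    rng = random.Random(seed)
    rules, previous_effects = [], []
    for i in range(n_rules):
        condition = []
        for _ in range(rng.randint(1, max_elements)):
            if previous_effects and rng.random() < chain_prob:
                condition.append(rng.choice(previous_effects))   # provoke a sequence
            else:
                condition.append(rng.choice(["interface", "language"]) + str(rng.randint(0, 9)))
        effect = [rng.choice(["interface", "language"]) + str(rng.randint(0, 9))
                  for _ in range(rng.randint(1, max_elements))]
        previous_effects.extend(effect)
        rules.append({"id": i, "condition": condition, "effect": effect})
    return rules

# One model per size, from 1 up to 300 rules
suite = [synthesize_rules(n, seed=42) for n in range(1, 301)]
```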

6.2.2 Methodology
We ran the transformations on each interaction model 30 times and collected the average time and memory consumption. To mitigate the risk of biasing the results because of the random choices in the construction of the rules, we replicated the synthesis of the models five times with different seeds. We conducted Levene's test to assess between-group variance differences and one-way ANOVAs with post-hoc tests to assess between-group mean differences. For both time and memory performance of the refinement phase, there are no differences in variance (p = .421, p = .950 resp.) or in mean (p = .854, p = .994 resp.) among the replicas. Therefore, for a given number of rules, the random choices to construct the rules do not influence the performance on our sample. Furthermore, a Pearson correlation showed an almost perfect association between the number of rules and α-nodes (r = .980, p = .001). Therefore, we can safely report the performance as a function of the number of rules. All the detailed results are available online2.
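As an illustration of these statistical checks, the snippet below runs Levene's test, a one-way ANOVA, and a Pearson correlation on toy data with scipy; the paper does not state which statistical tool was used, and all numbers shown here are made up.

```python
# Sketch of the statistical checks on the five replicas (assumed data layout:
# one list of refinement times per replica for a fixed number of rules).
from scipy import stats

replicas = [
    [9.8, 10.1, 9.9], [10.0, 9.7, 10.2], [9.9, 10.0, 9.8],
    [10.1, 9.9, 10.0], [9.8, 10.2, 9.9],
]  # toy refinement times (s); only 3 of the 30 runs shown per replica

_, p_levene = stats.levene(*replicas)      # between-group variance differences
_, p_anova = stats.f_oneway(*replicas)     # between-group mean differences
print(f"Levene p={p_levene:.3f}, ANOVA p={p_anova:.3f}")

# Association between the number of rules and the number of alpha-nodes
rules = [10, 50, 100, 200, 300]
alpha_nodes = [31, 148, 305, 597, 904]     # illustrative counts only
r, p = stats.pearsonr(rules, alpha_nodes)
print(f"Pearson r={r:.3f}, p={p:.3f}")
```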



6.2.3 Results
Since we fixed the number of layers, the generation of the initial statechart in ETL requires negligible time and memory usage. The serialization in EGL grows with the size of the resulting statechart, staying below 550ms. As expected, the only significant influence on the performance is the refinement transformation presented in Algorithm 1. Figure 8 shows the results of running this step on our data set. Overall, the execution time increases with the number of rules. We also note more variability as the number of rules increases. This is due to the randomness injected in the data set, but also to the Eclipse environment and its impact on the JVM. The figure also shows the time needed to create the RETE network. A regression model fitting the data well (R² = .960) indicates that it increases linearly with the number of rules (b = .008), yet very slightly as the number of rules grows. However, for interaction models with less than 50 rules, the creation of the network takes most of the resources.

The refinement is an incremental transformation and, therefore, uses a significant amount of memory. Figure 8 shows that memory usage follows the same trend as time performance. Compared to the DSM editors we modeled in Section 6.1, generating an editor with 200 rules takes less than 10 seconds and under 8GB of memory.
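A sketch of the linear fit mentioned above for the RETE-network creation time, using scipy's linregress; the data points are invented for illustration, the real measurements are in the published data set.

```python
# Sketch of the linear fit (slope b and R^2) for RETE creation time.
from scipy.stats import linregress

n_rules = [10, 50, 100, 150, 200, 250, 300]
rete_time_s = [0.3, 0.6, 1.0, 1.4, 1.8, 2.2, 2.6]   # illustrative timings only

fit = linregress(n_rules, rete_time_s)
print(f"slope b={fit.slope:.3f} s/rule, R^2={fit.rvalue**2:.3f}")
```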

6.2.4 Limitations
The performance evaluation is highly dependent on the synthesized rules and our prototypical implementation of the general approach. Nevertheless, it provides a preliminary idea of the performance to expect. However, our approach aims at creating DSM editors, not complete development environments like Eclipse. Therefore, as observed with the three qualitative cases, the complexity and number of rules we synthesized in the data set remain reasonable. Optimizing the implementation of the generator and measuring performance with a headless deployment are among the items to mitigate threats to validity. Also, we shall investigate further examples with increasing variability, for instance, interaction models where a higher number of rule conditions depend on rule effects.

7 Discussion
Having experimented with this novel approach of generating DSM editors, we report some future challenges and discuss their impact on the approach.

7.1 Coping with the number of interaction rules
Reproducing the presented case studies required an elevated number of interaction rules. This stemmed mostly from dealing with determinism between interactions whose conditions were not mutually exclusive at a conceptual level, but were required to be so for implementation reasons.

In the Pacman editor case, the number of interaction rules would be significantly reduced had the interaction language allowed us to share information between rules. Some model transformation languages [39, 44] allow rules to use a context provided by other rules when finding matches. Likewise, we could augment interaction rules with a construct that explicitly marks certain elements of a rule to be reused in other rules, e.g., to retain which language element in the toolbar is selected. The impact on Algorithm 2 should be minimal.
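A hypothetical sketch of what such a shared-context construct could look like: one rule publishes the selected toolbar type and another reuses it when creating an element on the canvas. The API shown does not exist in the current interaction language; it only illustrates the proposed extension.

```python
# Hypothetical shared-context construct between interaction rules:
# one rule publishes a binding, another rule reuses it, instead of
# duplicating one creation rule per DSL type.

shared_context = {}

def select_type_rule(event, toolbar_item):
    """Triggered on the toolbar: remembers which DSL type is selected."""
    if event == "mouseDown":
        shared_context["selected_type"] = toolbar_item   # published binding

def create_element_rule(event, canvas_pos):
    """Triggered on the canvas: reuses the published binding."""
    if event == "mouseUp" and "selected_type" in shared_context:
        return {"type": shared_context["selected_type"], "pos": canvas_pos}

select_type_rule("mouseDown", "Pacman")
print(create_element_rule("mouseUp", (4, 2)))   # {'type': 'Pacman', 'pos': (4, 2)}
```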

In the Music sheet editor case, some rules are dedicated to placing a note on the staff and others to moving the cursor right after. Rules having sequences of effects are very useful to apply a sequence of actions in a single rule elegantly. One future objective is to reduce the number of rules by enhancing the interaction language with specific constructs that dictate a partial order in the application of the rules.

The procedure to construct the statechart from interaction rules ensures the statechart is deterministic thanks to the use of guards. However, in general, two interaction rules could be applicable at the same time, or their conditions could overlap. Extending the interaction model with a mechanism to prioritize or explicitly control the application of rules is another line of research to explore. Some support to detect overlapping or conflicting rules would be useful to the user. All these improvements will contribute to reducing the number of rules to design.

7.2 Analysis of interaction rules
Given the number of interaction rules the user has to define, we should support the user with general analyses of the rules to reduce the cognitive effort. The choice of Statecharts as the semantic domain brings several advantages; in particular, it offers in-depth analysis thanks to many automated verification methods [27]. These analyses can then be reported back at the level of the rules. For example, we can verify general soundness criteria, such as the presence of deadlocks [22], reachability, and cycle detection [2]. Several tools offer debugging facilities for Statecharts, like Cameo [28]. It is then possible to propagate the debugging to the level of interaction rules, following the methodology in [26].
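As a minimal illustration of the kind of analysis enabled by the statechart representation, the snippet below performs a reachability check over a flattened transition graph. Real verification would rely on dedicated tools such as those cited above; the state names here are invented.

```python
# Minimal reachability check over a flattened statechart transition graph.
from collections import deque

def reachable(transitions, initial):
    """transitions: dict mapping a state to the set of its successor states."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        state = frontier.popleft()
        for nxt in transitions.get(state, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

sc = {"canvas": {"dragging", "toolbar"}, "dragging": {"canvas"},
      "toolbar": {"canvas"}, "popup": set()}
unreachable = set(sc) - reachable(sc, "canvas")
print(unreachable)   # {'popup'}: a state no interaction rule can ever reach
```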

7.3 Concrete syntax manipulation
In this paper, we have limited the interaction with language elements to CRUD operations. In general, more interactions are desirable, such as resizing, rotating, or selecting multiple DSM elements on the canvas. This is achievable by adding another set of interaction rules specific to concrete syntax manipulation. This would expand the canvas layer to have more states and transitions to handle these kinds of interactions.

7.4 Incrementality and refactoring of the statechart
The generated statechart is not optimal: e.g., solitary states should be removed, and transitions can be collapsed together.



We can apply common Statecharts refactoring patterns following good design principles [20]. For example, both transitions from state 2 to state 4 in Figure 6 are triggered by the same event, have the same context condition in their guards, and perform the same action. Therefore, we can collapse them into one transition and merge their guards.
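This collapsing refactoring can be sketched as grouping transitions by source, target, event, and action, and merging their guards with a disjunction; the data layout below is illustrative, not the tool's internal representation.

```python
# Sketch of the transition-collapsing refactoring: transitions sharing
# source, target, event, and action are merged, and their guards disjoined.
from collections import defaultdict

def collapse(transitions):
    groups = defaultdict(list)
    for t in transitions:
        groups[(t["src"], t["dst"], t["event"], t["action"])].append(t["guard"])
    return [
        {"src": s, "dst": d, "event": e, "action": a,
         "guard": guards[0] if len(guards) == 1 else "(" + ") or (".join(guards) + ")"}
        for (s, d, e, a), guards in groups.items()
    ]

ts = [
    {"src": "2", "dst": "4", "event": "mouseUp", "action": "place", "guard": "over(cell)"},
    {"src": "2", "dst": "4", "event": "mouseUp", "action": "place", "guard": "over(border)"},
]
print(collapse(ts))   # one transition guarded by (over(cell)) or (over(border))
```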

The generation process of the statechart to control the GUI of the editor does not necessarily have to start from the initial statechart. Algorithm 1 is incremental, meaning that we can provide it with a pre-built statechart describing the common default functionalities of DSM editors and augment it with interaction rules specific to the DSL. This would require the statechart to be structured in layers and to retain the configuration history leading to it.

7.5 Usability of DSM editors
Finally, it will be interesting to conduct a controlled user study to determine whether the generated DSM editors with custom interactions are more appropriate than those generated by language workbenches, and to evaluate the productivity and effort required to construct the interface, interaction, and event mapping models.

8 Related Work
8.1 User-defined interactions for GUI
The field of human-computer interaction is rich with work on UX that aims at improving the interfaces, actions, and feedback of the interactions between a user and the software [4]. However, MDE developers do not have an effective and efficient way of describing such interactions to integrate them in the development of DSM editors. Typically, the GUI design of these editors relies mostly on sketching [5], where the designer creates GUI mockups deciding on the layout of components, their relation, and the integration of data. These methods can be applied when designing the interface model in our approach.

Approaches like IFML [13] allow developers to model and generate web applications. In IFML, developers specify user interactions by describing the components, state objects, data, and control logic of the interactions. Even if it does not integrate UX concerns, IFML targets object-oriented developers with its close dependency on UML. Interactions are implemented as object-oriented functions. Our approach targets UX designers to define interactions using high-level interaction rules, involving elements of the DSL and the interface model. Like IFML, when specific operations must be executed in the effect of the rule (like RUN in GoL), an API must be available to implement the operation. However, the event mapping model hides this from the designer, who can call a high-level essential action. Following domain-specific modeling principles, our objective is to promote end-user development, as motivated in [29].

Many tools allow UX designers to configure interaction rules [6]. The paradigm of the interaction rules we propose is inspired by ECA rules [11] and Trigger-action rules [41, 42], which allow users to specify interactions visually using domain-specific concepts. A study in the IoT domain [10] shows that ECA rules are well-suited for the expertise of end-users. In [7], the authors combined MDE techniques with WebML and ECA rules to generate web applications. Our approach integrates concepts of the DSL to generate DSM editors. The work in [32] uses a similar declarative approach to specify interactions with visual domain elements. However, these are sequences of interactions centered on data visualization and tied to the visualization platform. In the model transformation paradigm, Guerra and de Lara [16] define graph transformations that are triggered by user actions, such as scaling language elements, which, in turn, triggers GUI events.

8.2 Statecharts controlling GUI
The Statecharts formalism is considered a de facto standard for modeling reactive systems [17]. It is also very often used to synthesize source code automatically [18, 21] for a variety of platforms (e.g., web, embedded systems) and programming languages (e.g., C, Java, Python). These two considerations make it ideal for describing the reactive nature of a modeling editor and integrating that description into an automated generation process. Horrocks [20] describes design principles to control a GUI with statecharts.

In MDE, AToMPM [40] and AToM3 [9] are two language workbenches where a statechart controls the interaction with the GUI of the DSM editor. This design is based on [3], which shows how to design a statechart for drag-and-drop, selection, and moving elements in concrete syntax on the canvas. These interactions are similar to the Pacman editor case.

9 Conclusion
This work proposes to customize the GUI and user interaction of DSM editors generated following the MDE methodology. Traditionally, the metamodel and concrete syntax definition of the DSL suffice to generate these editors. We augment this input with modeling languages geared to UX experts: an interface model, an interaction model, and an event mapping model. This paper concentrates on how to operationalize these user interaction specifications into the generation of DSM editors. We transform these models into a statechart that controls the behavior of the editor. A preliminary evaluation shows that the transformation scales with the increasing complexity of interactions and that the resulting editor is capable of mimicking interactions commonly found in DSM editors. The results of this work open many research opportunities, especially to foster collaborations between the MDE and the UX research communities.



References
[1] S. Abrahão, F. Bourdeleau, B. Cheng, S. Kokaly, R. Paige, H. Störrle, and J. Whittle. 2017. User Experience for Model-Driven Engineering: Challenges and Future Directions. In Model Driven Engineering Languages and Systems. IEEE, 229–236.
[2] R. Alur, M. Benedikt, K. Etessami, P. Godefroid, T. Reps, and M. Yannakakis. 2005. Analysis of Recursive State Machines. Transactions on Programming Languages and Systems 27, 4 (2005), 786–818.
[3] J. Beard and H. Vangheluwe. 2009. Modelling the reactive behaviour of SVG-based scoped user interfaces with hierarchically-linked statecharts. In SVG Open Conference.
[4] J. Burgoon, J. Bonito, B. Bengtsson, C. Cederberg, M. Lundeberg, and L. Allspach. 2000. Interactivity in human–computer interaction: A study of credibility, understanding, and influence. Computers in Human Behavior 16, 6 (2000), 553–574.
[5] B. Buxton. 2007. Sketching User Experiences: Getting the Design Right and the Right Design. Morgan Kaufmann Publishers Inc.
[6] D. Caivano, D. Fogli, R. Lanzilotti, A. Piccinno, and F. Cassano. 2018. Supporting end users to control their smart home: design implications from a literature review and an empirical investigation. Journal of Systems and Software 144 (2018), 295–313.
[7] S. Ceri, F. Daniel, M. Matera, and F. Facca. 2007. Model-driven Development of Context-aware Web Applications. ACM Transactions on Internet Technology 7, 1 (2007).
[8] L. Constantine. 2009. Interaction Design and Model-Driven Development. In Model Driven Engineering Languages and Systems. Springer, 377–377.
[9] J. de Lara and H. Vangheluwe. 2002. AToM3: A Tool for Multi-formalism and Meta-Modelling. In Fundamental Approaches to Software Engineering (LNCS), Vol. 2306. Springer, 174–188.
[10] G. Desolda, C. Ardito, and M. Matera. 2017. Empowering end users to customize their smart environments: model, composition paradigms, and domain-specific tools. ACM Transactions on Computer-Human Interaction 24, 2 (2017), 12.
[11] K. R. Dittrich, S. Gatziu, and A. Geppert. 1995. The active database management system manifesto: A rulebase of ADBMS features. In Rules in Database Systems. Springer, 1–17.
[12] C. Forgy. 1982. Rete: a fast algorithm for the many pattern/many object pattern match problem. Artificial Intelligence 19, 1 (1982), 17–37.
[13] P. Fraternali and M. Brambilla. 2015. Interaction Flow Modeling Language. OMG. Version 1.0.
[14] M. Gardner. 1970. Mathematical Games: The Fantastic Combinations of John Conway's New Solitaire Game 'life'. Scientific American 223, 4 (1970), 120–123.
[15] Object Management Group. 2015. Diagram Definition, Version 1.1.
[16] E. Guerra and J. de Lara. 2004. Event-Driven Grammars: Towards the Integration of Meta-modelling and Graph Transformation. In International Conference on Graph Transformation (LNCS), Vol. 3256. Springer, 54–69.
[17] D. Harel. 1987. Statecharts: A visual formalism for complex systems. Science of Computer Programming 8, 3 (1987), 231–274.
[18] D. Harel and E. Gery. 1997. Executable Object Modeling with Statecharts. Computer 30, 7 (1997), 31–42.
[19] D. Harel and A. Naamad. 1996. The STATEMATE semantics of statecharts. Transactions on Software Engineering and Methodology 5, 4 (1996), 293–333.
[20] I. Horrocks. 1999. Constructing the User Interface with Statecharts. Addison-Wesley.
[21] Itemis. 2019. Generating state machine code. https://www.itemis.com/en/yakindu/state-machine/documentation/user-guide/codegen_generating_state_machine_code.
[22] A. Karatkevich. 2003. Deadlock Analysis in Statecharts. In Forum on Specification and Design Languages. 414–425.
[23] S. Kelly, K. Lyytinen, and M. Rossi. 1996. MetaEdit+: A fully configurable multi-user and multi-tool CASE and CAME environment. In Conference on Advanced Information Systems Engineering (LNCS), Vol. 1080. Springer, 1–21.
[24] D. Kolovos, R. Paige, and F. Polack. 2006. The Epsilon Object Language (EOL). In Model Driven Architecture – Foundations and Applications. Springer, 128–142.
[25] D. Kolovos, R. Paige, and F. Polack. 2008. The Epsilon Transformation Language. In Theory and Practice of Model Transformations. Springer, 46–60.
[26] R. Mannadiar and H. Vangheluwe. 2011. Debugging in Domain-Specific Modelling. In Software Language Engineering. Springer, 276–285.
[27] E. Mikk, Y. Lakhnech, M. Siegel, and G. J. Holzmann. 1998. Implementing statecharts in PROMELA/SPIN. In Workshop on Industrial Strength Formal Specification Techniques. IEEE, 90–101.
[28] No Magic. 2019. Cameo Enterprise Architecture. https://www.nomagic.com/products/cameo-enterprise-architecture.
[29] F. Paternò. 2013. End user development: Survey of an emerging field for empowering people. ISRN Software Engineering 2013 (2013).
[30] L. Rose, R. Paige, D. Kolovos, and F. Polack. 2008. The Epsilon Generation Language. In Model Driven Architecture – Foundations and Applications. Springer, 1–16.
[31] J. M. Rouley, J. Orbeck, and E. Syriani. 2014. Usability and Suitability Survey of Features in Visual IDEs for Non-Programmers. In Evaluation and Usability of Programming Languages and Tools. ACM, 31–42.
[32] A. Satyanarayan, K. Wongsuphasawat, and J. Heer. 2014. Declarative Interaction Design for Data Visualization. In Symposium on User Interface Software and Technology. ACM, 669–678.
[33] C. Scholliers, L. Hoste, B. Signer, and W. De Meuter. 2011. Midas: A Declarative Multi-touch Interaction Framework. In International Conference on Tangible, Embedded, and Embodied Interaction. ACM, 49–56.
[34] V. Sousa. 2017. Adapting Modeling Environments to Domain Specific Interactions. In Symposium on Engineering Interactive Computing Systems (EICS '17). ACM, 145–148.
[35] V. Sousa and E. Syriani. 2015. An Expeditious Approach to Modeling IDE Interaction Design. In Workshop on Multi-Paradigm Modeling, Vol. 1511. CEUR-WS.org, 52–61.
[36] D. Steinberg, F. Budinsky, M. Paternostro, and E. Merks. 2008. EMF: Eclipse Modeling Framework (2nd ed.). Addison Wesley Professional.
[37] E. Syriani, L. Lúcio, and V. Sousa. 2019. Structure and behavior preserving Statecharts refinements. Science of Computer Programming 170 (2019), 45–79.
[38] E. Syriani and H. Vangheluwe. 2013. A Modular Timed Graph Transformation Language for Simulation-based Design. Software & Systems Modeling 12, 2 (2013), 387–414.
[39] E. Syriani, H. Vangheluwe, and B. LaShomb. 2015. T-Core: A Framework for Custom-built Transformation Languages. Journal on Software and Systems Modeling 14, 3 (2015), 1215–1243.
[40] E. Syriani, H. Vangheluwe, R. Mannadiar, C. Hansen, S. Van Mierlo, and H. Ergin. 2013. AToMPM: A Web-based Modeling Environment. In Invited Talks, Demonstration Session, Poster Session, and ACM Student Research Competition (MODELS '13), Vol. 1115. CEUR-WS.org, 21–25.
[41] B. Ur, E. McManus, M. Pak Yong Ho, and M. L. Littman. 2014. Practical Trigger-action Programming in the Smart Home. In SIGCHI Conference on Human Factors in Computing Systems. ACM, 803–812.
[42] B. Ur, M. Pak Yong Ho, S. Brawner, J. Lee, S. Mennicken, N. Picard, D. Schulze, and M. L. Littman. 2016. Trigger-Action Programming in the Wild: An Analysis of 200,000 IFTTT Recipes. In SIGCHI Conference on Human Factors in Computing Systems. ACM, 3227–3231.
[43] S. Van Mierlo, Y. Van Tendeloo, B. Meyers, J. Exelmans, and H. Vangheluwe. 2016. SCCD: SCXML Extended with Class Diagrams. In Workshop on Engineering Interactive Systems with SCXML, part of EICS 2016.



[44] D. Varró, G. Bergmann, Á. Hegedüs, Á. Horváth, I. Ráth, and Z. Ujhelyi. 2016. Road to a reactive and incremental model transformation platform: three generations of the VIATRA framework. Software and Systems Modeling 15, 3 (2016), 609–629.