Comparative Analysis of the Prominent Middleware Platforms in the Domain of Ambient Assisted Living
by
Rajjeet Singh Phull
A thesis submitted in conformity with the requirements for the degree of Masters in Applied Science – Biomedical Engineering
Institute of Biomaterials and Biomedical Engineering University of Toronto
© Copyright by Rajjeet Singh Phull 2015
Comparative Analysis of the Prominent Middleware Platforms in
the Domain of Ambient Assisted Living
Rajjeet Singh Phull
Masters in Applied Science – Biomedical Engineering
Institute of Biomaterials and Biomedical Engineering
University of Toronto
2015
Abstract
Diversification of application areas, technologies, and computational techniques in the Ambient
Assisted Living (AAL) domain, alongside the need to balance reproducibility and
customizability for heterogeneous end-user groups, has led to the development of domain-
specific middleware platforms: integrated development environments that facilitate
integration and communication among heterogeneous hardware and software components so
that they can operate synergistically within a shared environment. Recent efforts that study such
platforms have emphasized the need for improved evaluation methods, particularly feature-
focused scenario-based evaluations and quantitative approaches. Thus, the objective of this thesis
was to analyze developer requirements and compare the current prominent AAL-specific
middleware platforms to ultimately derive guidelines that improve the platforms' adoption rates
and domain relevance. Maintainability, the ease of modifying system components, was
highlighted as the most deficient quality attribute in current platforms. Mapping a platform-
independent use-case scenario to both platforms hinted at the need for graphical user
interfaces over text-based interfaces and for additional application-area-specific modules.
Acknowledgments
I am very thankful to Dr. Alex Mihailidis and his team for providing me with the opportunity to
conduct this research with the optimal level of independence and guidance. The involvement of
Dr. Ramiro Liscano is also greatly appreciated for strengthening my direction in conducting the
scenario-based evaluation. Dr. Tom Chau and Dr. Babak Taati are also acknowledged for their
roles as committee members and for providing recommendations and direction for this
research. I would also like to thank the UniversAAL and HOMER development teams and
forums for providing technical support, when needed, for my project. Lastly, this work was made
possible by the financial support of AAL-WELL.
Table of Contents
Acknowledgments .......................................................................................................................... iii
Table of Contents ........................................................................................................................... iv
List of Tables ................................................................................................................................ vii
List of Figures ................................................................................................................................ ix
Chapter 1 Introduction .................................................................................................................... 1
1 Ambient Assisted Living............................................................................................................ 1
2 AAL Middleware Platforms ....................................................................................................... 2
Chapter 2 Research Goal, Questions, & Scope ............................................................................... 4
3 Research Goal ............................................................................................................................ 4
4 Research Questions .................................................................................................................... 4
5 Scope of Thesis .......................................................................................................................... 5
Chapter 3 Related Work .................................................................................................................. 7
6 Relevant AAL Projects .............................................................................................................. 7
7 Evaluation of AAL Platforms .................................................................................................. 10
Chapter 4 Questionnaire for AAL Developers & Researchers ..................................................... 13
8 Methodology ............................................................................................................................ 13
8.1 Participant Recruitment and Inclusion Criteria ................................................................. 13
8.2 Participant Characteristics ................................................................................................ 14
8.3 Page Composition & Navigation ...................................................................................... 14
8.4 Evaluation Criteria and Statistical Analysis ..................................................................... 14
9 Results ...................................................................................................................................... 17
9.1 Demographics Data and Past Experiences ........................................................................ 17
9.2 AAL Application Areas, Tools & technologies, & Computational Techniques .............. 17
9.3 Architectural-based Quality Attributes ............................................................................. 20
9.4 Other ................................................................................................................................. 25
10 Discussion ................................................................................................................................ 26
Chapter 5 Scenario-based Evaluation ........................................................................................... 30
11 Methodology ............................................................................................................................ 30
11.1 Test-case Scenario ............................................................................................................ 31
11.1.1 Tea-Making Activities .......................................................................................... 32
11.1.2 State of Environmental Sensors ............................................................................ 32
11.1.3 Audio Prompting System ...................................................................................... 34
11.2 Outcome Measures of Maintainability .............................................................................. 37
11.3 Case Study Approach ........................................................................................................ 39
12 Results ...................................................................................................................................... 41
12.1 UniversAAL ...................................................................................................................... 41
12.1.1 State of the Environmental Sensors ...................................................................... 42
12.1.2 Audio Prompting System ...................................................................................... 44
12.2 HOMER Overview ........................................................................................................... 45
12.2.1 State of the Environmental Sensors ...................................................................... 46
1.1.1 Tea-making object/device ................................................................................... 47
1.1.2 HOMER-specific sensor data model .................................................................... 47
12.2.2 Audio Prompting System ...................................................................................... 50
12.3 Objective Measures of Maintainability ............................................................................. 51
13 Discussion ................................................................................................................................ 53
Chapter 6 HOMER Usability Test ................................................................................................ 56
14 Motivation ................................................................................................................................ 56
15 Methodology ............................................................................................................................ 56
15.1 Participant Recruitment and Inclusion Criteria ................................................................. 56
15.2 Usability Tasks .................................................................................................................. 56
15.3 Outcome Measures ............................................................................................................ 58
16 Results ...................................................................................................................................... 61
16.1 Participant Details ............................................................................................................. 61
16.2 SUM and SUS Results ...................................................................................................... 62
17 Discussion ................................................................................................................................ 63
Chapter 7 Guidelines ..................................................................................................................... 65
18 Platform-Specific Guidelines ................................................................................................... 65
18.1 UniversAAL ...................................................................................................................... 65
18.2 HOMER ............................................................................................................................ 69
19 Platform-Independent Guidelines ............................................................................................ 70
19.1 Add-ons for Different AAL Application Areas, Computational Techniques, &
Technologies ..................................................................................................................... 70
19.2 Increasing Abstraction Level: Graphical User-Interfaces over Text-based Interfaces ..... 71
Chapter 8 Conclusion .................................................................................................................... 73
References ..................................................................................................................................... 75
Appendices .................................................................................................................................... 78
20 Appendix A: Developer’s Questionnaire ................................................................................. 78
21 Appendix B: Test-case Scenario State Machine .................................................................... 111
22 Appendix C: HOMER Usability Test Pre-Assessment & Evaluation Criteria ...................... 113
List of Tables
Table 1 - Key AAL middleware platforms. .................................................................................... 7
Table 2. Definition of the five architectural-based quality attributes based on the ISO-9126
standards for software quality. ...................................................................................................... 11
Table 3. P-values, according to the Fisher Exact test (alternative hypothesis: two-tailed test).
Cross examination of the relative importance of the architectural-based quality attributes of AAL
middleware platforms from the perspective of questionnaire participants experienced with
similar software. * p < 0.05, ** p < 0.1. ....................................................................................... 21
Table 4. P-values, according to the Fisher Exact test (alternative hypothesis: two-tailed test).
Cross examination of the relative importance of the architectural-based quality attributes of AAL
middleware platforms from the perspective of questionnaire participants inexperienced with
similar software. * p < 0.05, ** p < 0.1. ....................................................................................... 22
Table 5 - P-values, according to the Fisher Exact test (alternative hypothesis: two-tailed test).
Cross examination of the rating of the architectural-based quality attributes of AAL middleware
platforms that the questionnaire participants have had experience working with in the past. * p <
0.05, ** p < 0.1. ............................................................................................................................ 24
Table 6. Selection of audio prompts based on the different sensor states. A blank cell indicates
that the state of that particular sensor is not considered (don't care). ........................................... 36
Table 7. Mapping of the tea-making objects/devices to HOMER-specific sensor/actuator data
model (IEEE 11073). .................................................................................................................... 47
Table 8 - Objective outcome measures of maintainability for various tasks on UniversAAL and
HOMER. ....................................................................................................................................... 52
Table 9. Definitions of the sub-categories of maintainability according to the ISO-9126. .......... 57
Table 10 - Usability tasks for the HOMER evaluation. ................................................................ 58
Table 11 - Results of the usability study in terms of metrics from the Summated Metric Score
(SUM) ........................................................................................................................................... 62
List of Figures
Figure 1. Relationship between the AAL concepts from the universal reference model of AAL
[4]. ................................................................................................................................................... 2
Figure 2. The middleware concept from the universal reference model of AAL[4]. ..................... 3
Figure 3. Thesis structure based on the software engineering process. Green, blue and red
highlighted objects indicate tasks relevant for research questions 1, 2, & 3, respectively. ............ 5
Figure 4. Breakdown of the questionnaire's page logic and distribution of questions on each page
labelled as Q. Distribution of participants on each page is represented by [ ] square brackets. ... 16
Figure 5 - Application area of AAL in which the participants had experienced working in. ....... 19
Figure 6 - Experience of the questionnaire participants in terms of the tools & technologies
employed in the AAL field. .......................................................................................................... 19
Figure 7 - Experience of the questionnaire participants in terms of the common AAL
computational techniques and algorithms. .................................................................................... 20
Figure 8 - Relative importance of the quality attributes of AAL middleware platforms from the
perspective of developers experienced with similar software. Response count = 30. .................. 21
Figure 9 - Relative importance of the quality attributes of AAL middleware platforms from the
perspective of developers inexperienced with similar software. Response count = 12. ............... 22
Figure 10 - Rating of the quality attributes of the AAL middleware platforms that the participants
have had experience working with in the past. ............................................................................. 24
Figure 11 - Experience of questionnaire participants to particular AAL middleware platforms.
Note participants had the ability to select multiple platforms. ..................................................... 25
Figure 12 - Areas of difficulties faced by participants when using middleware platforms (22
responses). ..................................................................................................................................... 26
Figure 13. Model demonstrating the relationship between the main components of the
application. .................................................................................................................................... 31
Figure 14. Activity diagram of the tea-making process. ............................................................... 32
Figure 15. Individual state diagrams of the different sensor devices involved in the audio
prompting system for tea-making. ................................................................................................ 34
Figure 16. The generic state machine of the audio prompting system for aiding OAwD with
making a cup of tea. Complete diagram is available in the appendix. .......................................... 37
Figure 17 - Modification of the test-case scenario by adding two activities to the tea-making
process. The grey task represent the modifications. .................................................................... 40
Figure 18. Audio prompting system on UniversAAL's system architecture. ............................... 42
Figure 19. Domain ontology model of the tea-making objects in UniversAAL. ......................... 44
Figure 20. Audio prompting application on the HOMER setup. .................................................. 46
Figure 21. Faucet scenario. ........................................................................................................... 47
Figure 22. Tea-making process tracker. ........................................................................................ 49
Figure 23. Audio prompting system scenario. .............................................................................. 51
Figure 24 - Flat designer view of the usability test showing the room layout and
sensor/actuator/door placements. .................................................................................................. 61
Figure 25. Example of an ontological structure for context event pattern of the kettle object. .... 67
Figure 26. An alternative ontological structure for the context event pattern of the kettle object.
....................................................................................................................................................... 68
Chapter 1 Introduction
1 Ambient Assisted Living
According to demographic estimates by Statistics Canada [1], the population of older adults
(aged 65 and over) will rise to nine million by 2031, making up 25 percent of Canada's
total population. The main challenges associated with the growing proportion of the aging
community include the increasing prevalence of incapacitating diseases, such as dementia and
Alzheimer's disease, higher health care costs, a shortage of caregivers, and lower productivity [2].
The anticipation of this disruptive demographic shift has triggered extensive research into
ambient intelligence. Ambient intelligence is an emerging paradigm which envisions an
environment comprised of smart sensors and actuators that analyze and predict human behaviour
to fulfill occupants' needs in an unobtrusive, pervasive, and responsive manner [3]. Technologies and
tools based on ambient intelligence are referred to as ambient assisted living (AAL) [2]. In the
context of the aging community, AAL can be viewed as a technological approach that aims to
maintain the autonomy, independence, confidence, and well-being of the elderly and disabled
within their own homes.
Advancements in the AAL field are occurring in a broad range of application areas such as
cognitive orthotics, emergency detection, continuous health and activity monitoring, therapy, and
emotional well-being [2]. Applications based on static smart environments incorporate ambient
or environmental sensors such as cameras, radio-frequency identification, microphones, passive
infrared sensors, and pressure sensors, while mobile-based applications (e.g. e-textiles, vital-sign
detection) use wearable sensors like accelerometers, gyroscopes, ECG, and pulse oximeters [2].
Sensory information is processed using several types of computational techniques such as
context modelling, activity recognition, anomaly detection, planning, and location identification
[2].
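To make one of these computational techniques concrete, the sketch below shows a minimal, hypothetical anomaly-detection rule over wearable accelerometer readings. The class name, thresholds, and detection heuristic are illustrative assumptions of the author of this sketch, not taken from the cited literature, where far more robust statistical and machine-learning methods are used.

```java
// Hedged sketch: a toy anomaly-detection rule (fall detection) over a stream
// of accelerometer magnitudes, expressed in g. Thresholds are illustrative.
public class FallDetector {
    static final double FREE_FALL_G = 0.4; // magnitude well below 1 g
    static final double IMPACT_G = 2.5;    // sharp spike right after the fall

    // Flags a fall when a near-free-fall sample is immediately followed by an
    // impact spike; normal walking never produces this pair of extremes.
    public static boolean isFall(double[] magnitudesG) {
        for (int i = 0; i + 1 < magnitudesG.length; i++) {
            if (magnitudesG[i] < FREE_FALL_G && magnitudesG[i + 1] > IMPACT_G) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        double[] walking = {1.0, 1.1, 0.9, 1.05};
        double[] fall = {1.0, 0.3, 3.1, 1.0};
        System.out.println(isFall(walking)); // no free-fall/impact pair
        System.out.println(isFall(fall));    // 0.3 g then 3.1 g triggers
    }
}
```

In a deployed AAL system, such a rule would be only one stage of a pipeline; context modelling and activity recognition would be layered on top to reduce false alarms.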
According to the universal reference model for AAL [4], AAL systems can be defined as socio-
technical systems that consist of networked artefacts (i.e. sensors and actuators) embedded in an
AAL space to provide various types of AAL services for the wellbeing of assisted persons
(e.g. the elderly or disabled). AAL services (or applications) are specific functions that use the
input sensor data to take actions which facilitate assistance or boost social integration for the
assisted person. The AAL space is the smart environment equipped with the networked artefacts
that allow for the provision of AAL services. These concepts are depicted in Figure 1.
Figure 1. Relationship between the AAL concepts from the universal reference model of
AAL [4].
2 AAL Middleware Platforms
The diversity and heterogeneity of assisted persons (the end users of AAL services) is one of the
main obstacles when developing AAL systems. Assisted persons can vary from one another in
terms of age, cognitive abilities, preferences, physical capabilities, perceptions of technological
aid, and environmental conditions. Therefore, AAL solutions must be customizable and
adaptive to the needs of their end users. In addition, many of the developed AAL systems target
only a subset of the entire user population and adopt methods and tools that are not easily
transferable to other projects, resulting in fragmentation within the AAL field [5]. To ensure
the technical and financial feasibility of large, complex AAL systems, AAL middleware platforms
are being developed. Middleware can be described as a system of systems that resides
between the operating system and the application layer. As shown in Figure 2, it helps facilitate
integration and communication between software components from a group of heterogeneous
and distributed devices [4]. Middleware platforms allow several hardware components (i.e.
sensors, actuators, and other devices) and software components (i.e. applications) to be
represented as instances that can be rapidly combined to provide a larger range of functionality
than standalone solutions. The users of middleware platforms, the application
developers, can work with a single set of APIs for a vast range of heterogeneous components and
are not required to learn all their low-level implementation details. Hence, this saves time, effort,
and money when building flexible and customizable AAL systems for the vast range of end users
and renders AAL solutions a technically and financially feasible approach to caregiving [4].
Figure 2. The middleware concept from the universal reference model of AAL [4].
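As a rough illustration of this abstraction, the following is a hypothetical sketch (the names ContextBus and SensorEvent are ours, not part of the universAAL or HOMER APIs) of how a publish-subscribe bus lets an application consume events from heterogeneous devices through a single interface:

```java
// Hypothetical sketch of the middleware abstraction described above; not the
// actual universAAL or HOMER API.
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// One common event type hides vendor-specific wire formats.
record SensorEvent(String deviceId, String property, Object value) {}

// The bus: components publish and subscribe without knowing each other's
// low-level implementation details.
class ContextBus {
    private final List<Consumer<SensorEvent>> subscribers = new ArrayList<>();
    void subscribe(Consumer<SensorEvent> handler) { subscribers.add(handler); }
    void publish(SensorEvent event) { subscribers.forEach(s -> s.accept(event)); }
}

public class MiddlewareDemo {
    // Prompts produced by the application, collected for inspection.
    static final List<String> prompts = new ArrayList<>();

    public static void main(String[] args) {
        ContextBus bus = new ContextBus();
        // An AAL service reacts to any door sensor, regardless of vendor.
        bus.subscribe(e -> {
            if ("doorOpen".equals(e.property()) && Boolean.TRUE.equals(e.value())) {
                prompts.add("Prompt: " + e.deviceId() + " was left open");
            }
        });
        // Adapters for different vendor technologies would all publish this
        // same event type; the service code stays unchanged.
        bus.publish(new SensorEvent("frontDoor", "doorOpen", true));
        System.out.println(prompts);
    }
}
```

A real AAL middleware adds service discovery, semantics (e.g. ontologies), and distribution across devices on top of this core pattern, but the developer-facing benefit is the same: a single API rather than one per device type.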
Chapter 2 Research Goal, Questions, & Scope
3 Research Goal
The main research goal of this project is to perform a comparative analysis of the prominent
AAL middleware platforms based on quantitative measures and scenarios, to ultimately yield
guidelines and recommendations for novel features and functions. The added functionality may
help improve adoption of the platforms among AAL application developers and render the
platforms useful to a larger selection of AAL project types.
4 Research Questions
To accomplish this research goal, the objective of this thesis is to answer the following research
questions:
1. From the perspective of developers, what architectural-based software quality attributes
and areas of difficulty need to be addressed by AAL middleware platforms?
2. Based on the deficiencies in particular quality attributes and the identified areas of
difficulty, how do the prominent AAL middleware platforms compare against one
another in a scenario-based evaluation?
3. From the results of the scenario-based evaluations, what novel features and functions
should be recommended to guide future AAL middleware platform development, so that
the platforms can potentially achieve greater adoption in the developer community and
cater to more AAL application types?
The next section presents the structure of this document in relation to answering the
research questions presented above.
5 Scope of Thesis
Figure 3 illustrates the structure of this thesis from the perspective of the software engineering
process. Chapter 3 elaborates on the relevant work on AAL middleware platforms and
lays the foundation for the purpose of this thesis. After completing the literature review
(c.f. chapter 3), a questionnaire was conducted that targeted developers experienced with using
middleware platforms (c.f. chapter 4).
[Figure 3: flowchart spanning the Requirements, Design, Construction, and Verification & Validation phases, comprising: Literature Review; Questionnaire; Test-case Scenario (Audio Prompting System for Tea-making); Modelling Platform-Independent Application Behaviour; Mapping the platform-independent model to UniversAAL's and HOMER's architectures; UniversAAL and HOMER Implementations; Platform-specific Verification; Testing of Non-functional Requirements (Maintainability); HOMER Usability Test; Guidelines.]
Figure 3. Thesis structure based on the software engineering process. Green, blue and red
highlighted objects indicate tasks relevant for research questions 1, 2, & 3, respectively.
To answer the first research question, a questionnaire was disseminated to the community of
AAL developers to inquire about their needs and concerns regarding the quality of AAL
middleware platforms. The purpose of this questionnaire was to identify the main quality
attributes (non-functional requirements) and other properties of the available AAL-specific
middleware platforms.
Results from the questionnaire, together with the literature review, led to the creation of the test-
case scenario and the evaluation criteria as part of the scenario-based evaluation. To reiterate, the
scenario-based evaluation was used as the main comparison approach between the AAL middleware
platforms and pertained specifically to the second research question. The test-case
scenario, as introduced in section 11.1, was an audio prompting system for helping
dementia patients make a cup of tea. Once the initial software requirement phase was completed,
the design phase was initiated by translating the test-case scenario into a generic activity diagram
and state machines to describe the intended system behaviour. Then, these system behaviour
models were converted to design patterns of the two compared platforms that reflected their
architectures. This was followed by the construction of the test-case scenario on both HOMER
and UniversAAL, the AAL middleware platforms that were open source, in development, and
AAL-centered. The intended functionality of the implemented test-case scenario was verified
using the provided platform-specific tools. The chosen quality attribute or non-functional
requirement of interest, maintainability, was assessed using quantitative measures of effort (as
time) expended on various activities in the development process of the test-case scenario.
Qualitative data, in terms of the observations and experience of the investigator, were also recorded.
An additional component of this verification and validation phase was a usability study on
the HOMER platform (c.f. chapter 6).
To tackle the third and final research question, experiences from the preceding project phases
were analyzed to compile a set of platform-specific and platform-independent guidelines. The
platform-specific guidelines aim to suggest improvements for the two AAL middleware
platforms under study, while the platform-independent guidelines present conceptual ideas
for an ideal AAL middleware platform. These guidelines can be found in chapter 7. Chapter
8 then concludes this document by summarizing the findings of this project and how they relate
back to the research questions.
Chapter 3 Related Work
6 Relevant AAL Projects
Over the past decade, multiple AAL-related platforms have been developed, many of which have
been consolidated into recent projects and/or discontinued. Table 1 provides a brief
overview of these middleware platforms. Note that AMIGO [6], SOPRANO [7], OpenAAL [8],
PERSONA [9], and MPOWER [10] have been consolidated as input projects for
UniversAAL [11]. Other main frameworks and open-solution platforms, including OpenCare [13]
and AmiVital [14], are highlighted in a recent literature survey of AAL by Memon et al. [12].
Additionally, Memon et al. [12] incorporated an email-based survey to create a
detailed list of contemporary AAL systems and platforms. The differences between AAL
systems, platforms, frameworks, and architectures are not clearly distinguishable, as these terms
have been used interchangeably and not formally defined. The universal reference model for AAL
has defined AAL platforms as a specific subsection of AAL systems. From the presented list of
AAL platforms, the ones that were based on a middleware were AALuis1, HOMER [15], and
OpenCare [13]. Of these platforms, HOMER [15] and UniversAAL [11] are the only prominent
AAL-specific middleware platforms that are available under an open-source license and still in
development.
Table 1 - Key AAL middleware platforms.

AMIGO [6]: The platform is based on a service-oriented architecture to target interoperability and is composed of (1) a platform layer for Quality of Service (QoS) support and device management, (2) a middleware layer for service discovery and orchestration, and (3) an application layer for ontology-based semantics and context management. Status: inactive since 2005; consolidated into UniversAAL [11].

SOPRANO [7]: The main component is the Context Manager, which encloses a knowledge base of AP-level (Assisted Person) and sensor-level ontologies that express the environment semantically. Incoming sensor data is converted into states using the integration OSGi bundles (dynamic Java components) and mapped or categorized to context using the sensor-level ontology. The AP-level ontology converts context to a higher level of abstraction for consumption by situation-aware services. Status: inactive since 2010; consolidated into UniversAAL [11].

OpenAAL [8]: This platform is based on the work of SOPRANO [7]. A context manager converts captured sensor data and user input into ontological statements that describe the environment states. Situations of interest are detected and acted upon from a repository of workflows that are managed and executed by the procedural manager. The composer analyzes the requested workflows and combines available services to meet their goals. Status: inactive since 2010; consolidated into UniversAAL [11].

PERSONA [9]: PERSONA's middleware is composed of three logical layers. The abstract connection layer connects middleware instances that use various protocols through a common interface. Various types of buses, for message exchange between peers, are introduced higher up in the Sodapop and PERSONA-specific layer. Finally, the application layer provides handles or wrappers for implementing business logic. Status: inactive since 2010; consolidated into UniversAAL [11].

MPOWER [10]: A service-oriented-architecture-based approach. The middleware architecture comprises a physical layer, a service components layer, a service layer, a business processes layer, and an application layer. Status: inactive since 2009; consolidated into UniversAAL [11].

UniversAAL [11]: The core of the universAAL runtime platform is based on bus communication operating on a publish-subscribe mechanism. Each component that interacts through the universAAL buses encapsulates itself in bus-specific wrappers. These wrappers describe the content of the message to the bus in the universAAL-specific language, which is based on the RDF/OWL ontology. The three buses in universAAL are the context bus, the service bus, and the user-interface bus. Handles or wrappers are available to connect the programmed application to the buses. Status: 2009-present; open source and active.

HOMER [15]: HOMER (Home Event Recognition System) is an AAL middleware platform based on a graphical user interface (GUI), as opposed to following a programming paradigm like most other platforms. Its two main components are (1) a flat designer for modelling the home layout and configuring sensors/actuators within the home, and (2) finite state machines that allow for the deployment of the application's business logic. The platform provides hardware abstractions for many vendors, such as KNX and EnOcean. Status: 2009-present; open source and active.

AmiVital [14]: The middleware platform is based on a service-oriented architecture and contains (i) a physical layer for networked hardware, (ii) a context module for storing generated context, (iii) a knowledge management module for storing user-related information and the associated adaptation rules, and (iv) a service/application layer. Status: unavailable.

OpenCare [13]: The OpenCare infrastructure uses Web Services to communicate between its different tiers. The home tier (gateway, sensors, and actuators) collects its sensory data, along with data provided by the mobile tier via Bluetooth, and transmits it to a database in the central tier via the web. The central tier conducts business logic, service orchestration, event handling, and communication with third-party sources on the public tier. Status: 2009; not active.

AALuis: AALuis builds on existing middleware platforms and consists of services that provide various types of user interfaces to existing AAL systems. Status: 2011-2014; extension of UniversAAL.

1 http://www.aaluis.eu/benefits/developer/
Although most of the mentioned projects were initially open-sourced, many became unavailable as they were discontinued and/or superseded by UniversAAL. AALuis provided an add-on middleware module for an existing middleware platform (i.e. universAAL) for the development of common user interfaces. Palumbo et al. [5], [16] also introduced an AAL middleware platform called GiraffPlus or SensorWeaver, which was a lightweight extension of the UniversAAL platform.
7 Evaluation of AAL Platforms
The only evaluation of AAL platforms was carried out in 2011 by Antonino et al. [17] and was based on the quality attributes of each platform's software architecture. The methodology for evaluating the included platforms involved interviewing platform developers to understand the features and functions their platforms provided with respect to the different quality attributes, or non-functional properties. More specifically, the qualitative feedback from the developers was used to classify the subsections of the quality attributes on a range from not addressed (NA) to highly addressed (HA), with various increments in between. The main quality attributes specifically selected for evaluating platforms in the AAL domain were reliability, security, maintainability, efficiency, and safety; their definitions can be seen in Table 2. Attributes such as suitability and portability were excluded because they were deemed application-specific. Results from this evaluation showed that UniversAAL, at the time, was the platform that highly addressed all the measured quality attributes. Antonino et al. [17] decided to incorporate OpenAAL [8], PERSONA [9], and UniversAAL [11] into the evaluation, despite the fact that OpenAAL and PERSONA were consolidated into UniversAAL and would therefore predictably emerge as the better choice. The comparison between UniversAAL and the other included platforms, OASIS2, Alhambra, and Hydra [18], clearly demonstrated UniversAAL to be the most complete AAL middleware platform.
2 http://www.oasis-project.eu/
Table 2. Definitions of the five architectural-based quality attributes, based on the ISO-9126 standard for software quality.

Reliability: The ability of the software to maintain a level of performance under stated conditions for a certain period of time.

Security: The ability of the system to protect its confidential information, prevent disruption of its services, and prevent damage to its hardware or software from external sources.

Maintainability: The effort required to make changes to the system.

Efficiency: The relationship between the level of performance delivered by the software and the amount of resources consumed.

Safety: The number of single points of failure in the system, or the features provided for safety-critical applications.
The motivation for this evaluation was to help developers select a suitable middleware platform for their AAL projects. Based on the results, it was concluded that UniversAAL was the only reasonable platform to select, since all of its quality attribute sections were classified as highly addressed. In contrast, the technical validation report by UniversAAL [19], based on questionnaires, focus groups, and software metrics, reported poor ratings for the quality attributes of maintainability, usability, portability, testability, and installability. This inconsistency demonstrates that classifying quality attributes on a scale from not addressed to highly addressed based on qualitative interview data is not sufficient for a robust comparison.
Error from the interviews or interpretation bias was a possibility. Therefore, comparing these AAL platforms' quality attributes required quantitative outcome measures. The authors of the interview-based evaluation also emphasized that scenario-based evaluations are required for future evaluations and for contributions towards developing more effective AAL middleware platforms. This, in addition to the need for a quantitative analysis, was the primary motivation for this thesis. Additionally, a comparative analysis of AAL middleware platforms for the purpose of optimizing their overall design, based on their non-functional properties, has not been performed to date.
UniversAAL [11] and HOMER [15] were selected for this comparison because they were the only available AAL-specific middleware platforms under an open-source license that were still in active development. Other platforms were omitted from the comparative analysis, primarily due to the lack of development-related activity and developer support. As shown in the previous section, many platforms have been consolidated into UniversAAL. Suggesting improvements to active projects takes precedence in order to make a useful and immediate impact on the developer community.
Chapter 4 Questionnaire for AAL Developers & Researchers
To answer the first research question, an online questionnaire was administered using SurveyMonkey3 during September 2014. The objectives of this survey were to investigate: (1)
the relative importance of quality attributes of the platform’s software architecture and their level
of presence in the available platforms, (2) the distribution of application areas, techniques, and
tools/technologies amongst developers of different platforms relevant to the AAL field, and (3)
the specific difficulties and areas of concern while using the platforms.
8 Methodology
This subsection presents the methods employed for the questionnaire that was directed to AAL
developers and researchers.
8.1 Participant Recruitment and Inclusion Criteria
The questionnaire was disseminated to the participants through email, which contained a summary of the questionnaire's objectives, the eligibility criteria, and a hyperlink to SurveyMonkey. The list of potential participants and their email addresses was compiled from the community of developers involved in creating functions, features, and applications for the platforms. This included compiling the email list from the key project sites of the various middleware platforms (i.e. the UniversAAL and HOMER homepages). The remaining participants were authors of publications that pertained to AAL-related applications and middleware platforms.
This was accomplished through a literature search on Google Scholar4, IEEE Xplore5, and
Engineering Village6 and obtaining email addresses of relevant authors involved in AAL
technologies, systems, or platforms. The search terms used for the literature search were
3 https://www.surveymonkey.com/
4 https://scholar.google.ca/
5 www.ieeexplore.ieee.org
6 www.engineeringvillage.com
“Ambient Assisted Living”, “Middleware”, and “Platforms”, combined with the OR operator to maximize results. The email list was further filtered to include only potential participants who were either involved in the creation of an AAL application and/or in the development or operation of an AAL-domain related platform.
8.2 Participant Characteristics
To collect basic demographic statistics, participants were asked about their education level, location, and the nature of their employment (Q1-Q3). The next page inquired about the details of the applications they had experience working with, from the perspective of the different application areas, tools & technologies, and computational techniques (Q5-Q7) of AAL, as elaborated by Rashidi et al. [2]. The number of years spent developing AAL applications was also requested (Q8).
8.3 Page Composition & Navigation
Appendix A contains a summary of the 66 questions and their responses. The majority of the questions followed a 5-point Likert scale format. Figure 4 illustrates how these 66 questions were split across the 8 pages of the questionnaire. Participants were asked a varying number of questions depending on their experience and knowledge. For participants experienced in
developing AAL-related applications and using middleware platforms, a total of 43 questions
were asked through pages 1, 2, 3, 4, and 8. If the participants did not have past experience with
middleware platforms but were potentially interested in using middleware platforms for future
AAL projects, they were navigated through 30 questions in pages 1, 2, 3, 5, 6, and 8. Those who
possessed experience developing AAL technologies but lacked the interest in potentially
applying middleware platforms in the future were routed to page 7 instead of page 6, and were asked 14
questions instead of 30. Lastly, participants without any experience in developing AAL-related
technologies bypassed the whole survey by jumping from page 2 to the final page for a total of 6
questions. Most questions were optional, except for some key questions at the end of certain
pages that were used to screen and redirect participants based on their input (Q4, Q9, Q42).
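The page routing above can be summarized as a small function (a hypothetical reconstruction for illustration only; the actual logic was configured through SurveyMonkey's page-skip settings rather than code):

```python
def pages_for(q4_has_aal_experience: bool,
              q9_used_middleware: bool = False,
              q42_interested: bool = False) -> list[int]:
    """Pages a participant is routed through, driven by Q4, Q9, and Q42."""
    if not q4_has_aal_experience:      # Q4: disagree -> skip to final page
        return [1, 2, 8]               # 6 questions in total
    if q9_used_middleware:             # Q9: agree -> experienced group
        return [1, 2, 3, 4, 8]         # 43 questions
    if q42_interested:                 # Q42: agree/neutral -> interested group
        return [1, 2, 3, 5, 6, 8]      # 30 questions
    return [1, 2, 3, 5, 7, 8]          # Q42: disagree -> 14 questions
```

For example, a participant who had built AAL technologies but never used a middleware platform and was not interested in doing so would see pages 1, 2, 3, 5, 7, and 8.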
8.4 Evaluation Criteria and Statistical Analysis
Reliability, security, maintainability, efficiency, and safety were chosen as the evaluation criteria for the platforms; this selection was justified specifically for AAL-domain platforms by Antonino et al. [17]. Developers with past middleware platform experience ranked these five quality attributes against one another in terms of importance (Q11). Additionally, these developers rated the middleware platforms that they had experience working with in terms of the five quality attributes (Q12-Q32). Developers without middleware platform experience, on the other hand, were only asked to rate the relative importance of the quality attributes (Q50) and the level of significance each quality attribute had in relation to their applications (Q51-Q60). For statistical analysis, Fisher's exact test was applied to determine whether the differences between two compared quality attributes reached statistical significance at the 5% and 10% levels. Fisher's exact test was chosen over the Chi-square test because the response counts fell below 5 for several of the available options or choices. The alternative hypothesis (two-tailed test) was that two compared quality attributes had different importance or ratings; the null hypothesis was that there is no significant difference between the two categories of data from the same question. If the p-value fell below 0.05, the null hypothesis was rejected. For example, the Likert scale distributions of maintainability and reliability can be compared using this method; again, the Chi-square test is not ideal here because some of the response counts were zero. Statistical significance was computed using the R7 software: responses from two different groups (or quality attributes) on the Likert scale were entered into a data table, and the Fisher's exact test function was invoked to output the p-value. Once the hypotheses were tested and cross-examined for all the quality attributes in each question, the p-values were interpreted and compared for notable patterns.
7 http://www.r-project.org/
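The comparisons above were performed with R's fisher.test, which handles the general r x c tables arising from 2 x 5 Likert comparisons. To illustrate the mechanics of the test, the following is a minimal Python sketch restricted to the 2 x 2 case, implemented directly from the hypergeometric distribution (the function name is hypothetical):

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-tailed Fisher's exact test for the 2x2 table [[a, b], [c, d]].

    The p-value is the total hypergeometric probability, over all tables
    with the same row and column margins, of every table that is at most
    as likely as the observed one.
    """
    row1, row2, col1 = a + b, c + d, a + c
    total = comb(row1 + row2, col1)

    def p_table(x):
        # Probability of the table whose top-left cell equals x
        return comb(row1, x) * comb(row2, col1 - x) / total

    p_obs = p_table(a)
    lo, hi = max(0, col1 - row2), min(row1, col1)
    # Sum probabilities of all admissible tables no more likely than observed
    # (the small tolerance guards against floating-point ties)
    return sum(p for p in (p_table(x) for x in range(lo, hi + 1))
               if p <= p_obs * (1 + 1e-9))

p = fisher_exact_2x2(8, 2, 1, 5)   # p ≈ 0.035
```

In the thesis itself, the two Likert distributions for a pair of quality attributes formed a 2 x 5 table, which fisher.test evaluates by the same at-most-as-likely criterion over all tables with fixed margins.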
- Page 1: Questionnaire overview, objectives, and agreement form
- Page 2: Demographics and background experiences (Q1-Q4) [54]
- Page 3: Past experience with developing AAL-related technologies (Q5-Q9) [49]
- Page 4: Importance of quality attributes, ratings of the quality attributes of existing middleware platforms, and areas of concern, for developers with experience using middleware platforms (Q10-Q41) [30]
- Page 5: Gauging the participant's interest in using middleware platforms for future AAL technologies (Q42) [13]
- Page 6: Importance of quality attributes and areas of concern, for developers without experience using middleware platforms (Q43-Q62) [13]
- Page 7: Inspecting the lack of interest in middleware platform use (Q63-Q64) [0]
- Page 8: Final remarks (Q65-Q66)

Routing between pages was driven by the screening questions:

- Q4: Strongly Agree, Agree, or Neither about having developed AAL-related technologies in the past leads to page 3; Strongly Disagree or Disagree leads to page 8.
- Q9: Strongly Agree or Agree regarding experience implementing AAL applications on middleware platforms leads to page 4; Strongly Disagree or Disagree leads to page 5.
- Q42: Strongly Agree, Agree, or Neither about considering the use of middleware platforms for future AAL applications leads to page 6; Strongly Disagree or Disagree leads to page 7.

Figure 4. Breakdown of the questionnaire's page logic and the distribution of questions (labelled Q) on each page. The number of participants reaching each page is shown in square brackets [ ].
9 Results
The full details of the questionnaire are available in chapter 20 of the Appendix. For the sake of brevity, this section summarizes the important findings of the questionnaire and their influence on the rest of the project.
9.1 Demographics Data and Past Experiences
Out of the 419 requests sent, 97 failed to reach the recipient because the server hosting the email address was unresponsive. A total of 54 participants chose to complete the questionnaire. Figure 4 shows the distribution of participants (square brackets) across the different pages of the questionnaire. The response rate for each question varies, as most of the questions were optional, with the exception of the questions required for transitioning to the appropriate pages. Out of the 54 participants, two did not have past experience working with AAL technologies and three discontinued their participation after page 2. Of the remaining 49 participants, 30 had experience with middleware platforms and completed page 4 of the questionnaire. Of the other 19 who lacked experience using middleware platforms, only 13 continued to page 5 of the questionnaire, while six failed to continue.
The results from the first three questions indicate the demographics of the questionnaire participants. As seen in the responses to question 1, approximately 70% of the participants were from an academic background, 20% worked in a mix of academia and industry, and the rest were in industry. In terms of location, the subsequent responses indicated that almost 80% of the participants were from Europe and 11% from North America. There was a relatively even balance between participants who had completed a Master's degree (44.4%) and a Doctoral degree (38.9%). Lastly, a large portion of the participants had been working on AAL-related applications for 4-6 years (39.6%) or 2-3 years (31.3%), while 16.7% had 7 or more years of experience.
9.2 AAL Application Areas, Tools & Technologies, and Computational Techniques
As introduced in chapter 1, the AAL realm consists of different types of applications that are based on various categories of application areas, tools & technologies, and computational techniques. Since most of the questionnaire participants were developers with experience working with AAL middleware platforms, the questions that inquire about application details may explain the purposes for which the middleware platforms are being used. Hence, they can help design the scenario to be used for the scenario-based evaluation in the later stages of this project. To understand how AAL middleware platforms are potentially used by the developers from the AAL community who participated in the questionnaire, three questions inquired about their experience with the different AAL application areas, tools & technologies, and computational techniques. The multiple-choice options were based on the categories elaborated in a study that surveyed the AAL realm for each of these themes [2].
As shown in Figure 5, participants were mostly familiar with the application areas of continuous health monitoring, pervasive applications, and emergency detection. On the lower end were cognitive orthotics, therapy, and emotional wellbeing. The distribution of participants' experience, in terms of the types of AAL tools & technologies, is shown in Figure 6. Developers' efforts were concentrated more on smart homes, meaning embedded ambient or environmental sensors, rather than on wearable or mobile sensors. Assistive robots, generally more complex due to the required coordination of multiple sensor and algorithm types, were the least common. Lastly, Figure 7 presents the distribution of the participants' experience in terms of the various AAL-relevant computational techniques and algorithms. Activity recognition was the most common among the participants, followed by context modelling, location identification, and anomaly detection. In comparison, computational techniques of the planning type were the least common.
Figure 5 - AAL application areas in which the participants had experience working.
Figure 6 - Experience of the questionnaire participants in terms of the tools & technologies
employed in the AAL field.
Figure 7 - Experience of the questionnaire participants in terms of the common AAL
computational techniques and algorithms.
9.3 Architectural-based Quality Attributes
In this section, the results of the questions related to the five main architectural-based quality
attributes of AAL middleware platforms, as highlighted by Antonino et al. [17], are presented.
Figure 8 shows the importance that developers from the AAL community with experience using such software assigned to the five quality attributes (i.e. reliability, security, maintainability, efficiency, and safety) on a spectrum from most to least important. Table 3 supplements Figure 8 with the statistical analysis, indicating which differences between the quality attributes attained statistical significance. In summary, reliability shows the greatest importance of the five quality attributes, as its difference from each of the other attributes is statistically significant and it has the highest response count in the most important category. The difference between maintainability and security reached a lower level of statistical significance (p<0.1), with maintainability showing more importance. As seen in Table 3, safety approaches, but does not reach, statistical significance at the 10% level against security (p=0.1080) and efficiency (p=0.1432), and shows no significant difference against maintainability (p=0.7971). All other comparisons were not significant.
Figure 8 - Relative importance of the quality attributes of AAL middleware platforms from
the perspective of developers experienced with similar software. Response count = 30.
Table 3. P-values, according to the Fisher Exact test (alternative hypothesis: two-tailed
test). Cross examination of the relative importance of the architectural-based quality
attributes of AAL middleware platforms from the perspective of questionnaire participants
experienced with similar software. * p < 0.05, ** p < 0.1.
Reliability Security Maintainability Efficiency Safety
Reliability 1.0000 0.0000* 0.0180* 0.0001* 0.0527**
Security 1.0000 0.0608** 0.3423 0.1080
Maintainability 1.0000 0.4139 0.7971
Efficiency 1.0000 0.1432
Safety 1.0000
Similar to the previously shown figure, Figure 9 portrays the relative importance of the
architectural-based quality attributes of AAL middleware platforms from the perspective of
developers within the AAL community who are inexperienced in using such types of platforms.
Likewise, Table 4 supplements the statistical data that highlights the differences between the
quality attributes. Similar to the results from the experienced developers, the inexperienced developers favored reliability the most among the five attributes. Reliability has the highest count for the most important option in Figure 9, and its difference is statistically significant (p<0.05) against maintainability, efficiency, and safety, but not against security (p=0.1223). Another key finding is that efficiency is statistically different (as in least important) from reliability and maintainability, to a smaller degree from security (p<0.1), but not from safety (p=0.4315). All other comparisons were inconclusive. On the spectrum from most to least important, reliability has demonstrated itself to be the most important of the quality attributes and efficiency the least, with all the other attributes lying in between. This trend holds true for all developers, whether or not they have experience with AAL middleware platforms. Note that the response count for developers with experience (N=30) is far greater than for those without (N=12).
Figure 9 - Relative importance of the quality attributes of AAL middleware platforms from
the perspective of developers inexperienced with similar software. Response count = 12.
Table 4. P-values, according to the Fisher Exact test (alternative hypothesis: two-tailed
test). Cross examination of the relative importance of the architectural-based quality
attributes of AAL middleware platforms from the perspective of questionnaire participants
inexperienced with similar software. * p < 0.05, ** p < 0.1.
Reliability Security Maintainability Efficiency Safety
Reliability 1 0.1223 0.001899* 0.0001672* 0.04448*
Security 1 0.4411 0.06312** 0.8323
Maintainability 1 0.0178* 0.1571
Efficiency 1 0.4315
Safety 1
Figure 10 shows the participants' ratings of the five main architectural-based quality attributes of the AAL middleware platforms that they had experience working with in the past. The diversity of the participants' experience across AAL middleware platforms is revealed in question 10 of the questionnaire and in Figure 11. The majority of the participants who rated the quality attributes had experience working with UniversAAL and some lesser-known platforms (the 'Other' response data are shown in question 10 of chapter 20 in the Appendix). Based on the statistical analysis in Table 5, maintainability shows a large difference (p<0.05) from reliability and efficiency, a lesser significance (p<0.1) in its difference from security, and no significant difference (p=0.5737) from safety. Maintainability differs considerably from the other quality attributes in that its response counts are much larger for the poor and inadequate levels. Reliability, security, and efficiency are dominant at the acceptable level, while maintainability and, to an extent, safety are mostly rated inadequate. Even though the difference between the maintainability and safety attributes is not significant, maintainability shows statistically significant differences from reliability, security, and efficiency while safety does not. This implies that the architectural-based quality attribute of maintainability is at a greater deficiency in current AAL middleware platforms than the safety attribute. See section 10 for a more in-depth analysis of this topic.
Figure 10 - Rating of the quality attributes of the AAL middleware platforms that the
participants have had experience working with in the past.
Table 5 - P-values, according to the Fisher Exact test (alternative hypothesis: two-tailed
test). Cross examination of the rating of the architectural-based quality attributes of AAL
middleware platforms that the questionnaire participants have had experience working
with in the past. * p < 0.05, ** p < 0.1.
Reliability Security Maintainability Efficiency Safety
Reliability 1 0.8794 0.03226* 0.7419 0.1792
Security 1 0.08511** 0.7913 0.1931
Maintainability 1 0.02679* 0.5737
Efficiency 1 0.09054**
Safety 1
Figure 11 - Experience of the questionnaire participants with particular AAL middleware platforms. Note that participants could select multiple platforms.
9.4 Other
One crucial inquiry in the questionnaire concerned the difficulties developers encountered while working with AAL middleware platforms. The findings are shown in Figure 12. Half of the questionnaire participants declared the platform runtime environment as one of the difficulties of using AAL middleware platforms. The runtime environment can be described as the dynamic state of a software system in which its core functionalities are executed. Difficulties with the runtime environment were followed closely by the hardware communication/connection layer and hardware compliance with standards. This corresponds to integrating hardware components with different protocols on the AAL middleware platform as middleware instances, so that they can communicate with other instances. Middleware instances can refer to either software modules (e.g. services) or hardware components with their own communication protocols. Lastly, logic & reasoning and the graphical user interface were also shown to be areas of some concern.
Figure 12 - Areas of difficulties faced by participants when using middleware platforms (22
responses).
10 Discussion
Responses from the developer questionnaire show that the efforts of developers, most of whom are experienced with middleware platforms, are focused on certain types of AAL applications or services. It may be that the middleware platforms are designed or suited for specific types of applications, or that they are capable of handling a certain degree of complexity, which may vary depending on the application area, technology, and/or computational techniques. For instance, application areas such as cognitive orthotics most likely require more sophisticated I/O devices (e.g. cameras, microphones, speakers) than activity recognition or location identification, which may work well with simple binary sensors (i.e. contact closure sensors). On the other hand, these developers may have adopted these middleware platforms based on the features and functions that could support the applications they intended to develop. Another possible explanation is that certain application areas, such as emergency detection, have more applicability and use to the overall elderly community, as opposed to lower-end ones like cognitive orthotics, which would only suit older adults with dementia. For computational techniques, it is most likely that the classification of algorithms may fall into several of the presented categories.
The main conclusion drawn from inquiring about the quality attributes (c.f. Chapter 4) of the software architecture of middleware platforms is that maintainability and safety are at inadequate or only somewhat satisfactory levels in comparison to the other three quality attributes, which are acceptable. To reiterate, maintainability is the ability to install and modify system components, and safety refers to the platform's suitability for handling functions of safety-critical applications [17]. The relative importance of these two quality attributes is indistinguishable using Fisher's exact test. When the ratings of these two quality attributes are compared, more developers rated maintainability as inadequate or poor than safety. For safety, the proportion of responses in the Not Applicable category may indicate that the developers were not certain about this quality attribute. Overall, maintainability is at a greater deficit in current AAL middleware platforms than safety because it is statistically different from the other quality attributes, which are at an acceptable level. As shown in Table 5, safety only reached slight statistical significance against efficiency, while remaining at a similar level to the other quality attributes, including maintainability. The response rate was the same for both questions.
The importance assigned to the quality attributes by both groups of developers, experienced and inexperienced with AAL middleware platforms, should be considered alongside the ratings. Reliability appears to be the most important quality attribute of the five, and it is rated as acceptable in the current AAL middleware platforms. On the other hand, efficiency is also at an acceptable level but is the least important of the quality attributes. Interestingly, maintainability, security, and safety lie between reliability and efficiency in terms of importance. Table 3 shows that experienced developers consider maintainability to be more important than security, but this does not hold true for the inexperienced developers. It should be noted that the data from the experienced developers have a higher response count; hence, their results should carry more weight than those from the inexperienced developers. If that is the case, it can be concluded that maintainability is the quality attribute that requires greater attention in the scenario-based evaluation. Both safety and maintainability warrant further investigation; however, the focus is placed solely on maintainability due to the limited scope of this project. Another reason to set aside safety is that this quality attribute is only relevant for applications that require safety-critical features, while maintainability is applicable across all potential use cases or applications for the middleware platforms. Future work should consider comparing the quality attribute of safety across the different middleware platforms with an appropriate AAL application.
This questionnaire has limitations that are worth noting. Firstly, most of the participants had experience with UniversAAL as opposed to any other platform. Of the 30 developers with middleware platform experience, 70% had used UniversAAL and only 13% had used HOMER. Note that most developers had overlapping experience with multiple platforms, and it appears that most had worked with UniversAAL in conjunction with other, less prominent middleware platforms. Therefore, it became unrealistic to link the survey results to any specific middleware platform; they can only be used as a general guiding tool for the overall status of middleware platforms in AAL. The overlap between application developers, who use AAL middleware platforms to develop applications, and platform developers, who develop features and functions for AAL middleware platforms, was another issue for gauging the importance and rating of the quality attributes. Even though definitions of the architecture-based quality attributes were provided, they may have been interpreted differently depending on the role the developer had in the field. For instance, a platform developer may interpret maintainability as the ease of managing platform-specific functions or features (e.g. the platform API or third-party libraries), while an application developer may interpret it as the ease of maintaining a module of an application developed on the platform. This heterogeneity might have been overcome by inquiring in greater detail about participants' past experiences. Another limitation is that the importance and rating of quality attributes can be valued differently by different developers for a variety of reasons (e.g. personal opinion or application types). The questionnaire results could be split into groups to obtain platform-specific results; however, the reduced response count would weaken the data and lead to inconclusiveness. On the other hand, merging the ratings of quality attributes from developers experienced with a wide range of middleware platforms can also lead to inconclusiveness, because the strengths and weaknesses of platforms can overlap. This may explain the difficulty of prioritizing the importance or rating of the quality attributes.
Based on the data collected, the interpretation indicates that the architecture-based quality attribute of maintainability in AAL middleware deserves to be investigated at greater depth in the scenario-based evaluation in the next section. This means that a test-case scenario will be implemented on the two platforms, UniversAAL [20] and HOMER [15], and outcome measures that pertain to maintainability will be used as the basis for the evaluation. The end goal is to observe maintainability on these platforms in order to derive plans for features and functions that can improve this quality attribute.
Chapter 5 Scenario-based Evaluation
As part of addressing the second research question, and serving as the main comparison approach, a scenario-based evaluation was conducted on the two prominent AAL-specific middleware platforms for the ultimate purpose of generating guidelines for novel features and functions. A scenario-based evaluation is an approach in which a generic, platform-independent use-case scenario within the AAL realm is generated and implemented on the selected platforms to compare their functional and/or non-functional capabilities in meeting the business goals of the particular use-case. This form of comparative analysis has not been conducted for AAL middleware platforms in any past literature, but was emphasized and recommended by Antonino et al. [17] as the next step towards evaluating these platforms for the purpose of improving their ability to cater to application developers.
The approach of this scenario-based evaluation was heavily influenced by the findings from the previous phase of this project, the questionnaire of AAL developers and researchers. From the questionnaire results, it was found that the architecture-based quality attribute of maintainability requires further investigation on the currently available and active AAL middleware platforms. Additionally, the early portion of the questionnaire helped determine that a focus on particular AAL-specific application areas, such as cognitive orthotics, and computational techniques, such as planning, would help identify key opportunities for testing and improvements in the functionality of current AAL middleware platforms. These findings helped shape the test-case scenario, as explained in the next section.
11 Methodology
This section describes the methodology for deriving the particular test-case scenario used for the scenario-based evaluation. The scenario-based evaluation was intended to provide an overview of the differences in the functional capabilities and the provided features of each platform that influence the development of AAL applications, in addition to exploring the software quality attribute of maintainability. Once the suitable test-case scenario was developed, it was designed and implemented on both of the platforms under comparative analysis, in a customized way that makes the best possible use of the platform-specific features. The results portion (c.f. section 12) of the scenario-based evaluation describes the actual approach used to realize the test-case scenario on each platform under its enforced system architecture and other constraints.
11.1 Test-case Scenario
The specific scenario or use case selected for evaluating the platforms was an audio prompting system that aims to assist older adults with dementia (OAwD), the assisted persons, in simple meal preparation. This use case was selected based on the findings of a study of 106 caregivers of dementia patients, which emphasized that assistance in meal preparation is one of the most demanded functionalities of future AAL systems [21]. Furthermore, the scenario used in this project builds on a recently conducted study that investigated the effectiveness of a prompting assistive robot in helping dementia patients make a cup of tea [22]. An overview of this application is shown in Figure 13. The tea-making activity comprises all the individual tasks that must be executed in a particular order for the OAwD to make a cup of tea. While these tasks are carried out, embedded sensors within the environment detect conditions that imply the state of the environment and the status of the different tasks. Finally, the audio prompting system plays audio prompts that direct the OAwD to complete a pending task within the tea-making process when assistance is demanded.
[Figure content: cycle of Tea-making Activities → State of Environmental Sensors → Audio Prompting System → OAwD Takes Action]
Figure 13. Model demonstrating the relationship between the main components of the
application.
11.1.1 Tea-Making Activities
The activity diagram of the tea-making process is shown in Figure 14. This process consists of 14 individual tasks with specific sequencing and concurrencies. The colourings indicate sets of tasks that must occur in sequential order. The green tasks represent filling the kettle with water and boiling it. Preparing the cup is shown by the yellow tasks. The blue tasks merge the outcomes of the yellow and green tasks, in which the prepared cup and the boiled water are combined to finish making the cup of tea. The initial split signifies that the green and yellow tasks can be carried out in parallel, but both paths must be complete before the blue tasks can be carried out. This activity diagram only includes tasks that require monitoring by embedded sensors, which are simulated for this scenario-based evaluation.
[Figure content: activity nodes including turn water on, fill kettle with water, turn water off, plug kettle base into wall socket, place filled kettle on base, turn on kettle, wait for water to boil, remove kettle from base (boiled), open cabinet door, take out cup from cabinet, take teabag and place into the cup, pour hot water from kettle into cup, and dispose teabag into trashcan]
Figure 14. Activity diagram of the tea-making process.
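The sequencing and concurrency constraints described above can be sketched as a simple prerequisite model. This is a hypothetical illustration only: the task identifiers, the dependency edges, and the `available` helper are assumptions made for the example, and only a subset of the 14 tasks is shown.

```java
import java.util.*;

/** Illustrative prerequisite model of (a subset of) the tea-making tasks. */
public class TeaProcess {
    // Each task maps to the set of tasks that must be completed first.
    static final Map<String, Set<String>> PREREQS = new LinkedHashMap<>();
    static {
        // Green path: fill the kettle and boil the water.
        PREREQS.put("turnWaterOn", Set.of());
        PREREQS.put("fillKettle", Set.of("turnWaterOn"));
        PREREQS.put("turnWaterOff", Set.of("fillKettle"));
        PREREQS.put("plugKettleBase", Set.of());
        PREREQS.put("placeKettleOnBase", Set.of("turnWaterOff", "plugKettleBase"));
        PREREQS.put("turnKettleOn", Set.of("placeKettleOnBase"));
        PREREQS.put("waitForBoil", Set.of("turnKettleOn"));
        // Yellow path: prepare the cup (can run in parallel with the green path).
        PREREQS.put("openCabinet", Set.of());
        PREREQS.put("takeOutCup", Set.of("openCabinet"));
        PREREQS.put("placeTeabagInCup", Set.of("takeOutCup"));
        // Blue path: merge of both paths.
        PREREQS.put("removeBoiledKettle", Set.of("waitForBoil"));
        PREREQS.put("pourWaterIntoCup", Set.of("removeBoiledKettle", "placeTeabagInCup"));
        PREREQS.put("disposeTeabag", Set.of("pourWaterIntoCup"));
    }

    /** Tasks whose prerequisites are all met and that are not yet done. */
    static List<String> available(Set<String> done) {
        List<String> out = new ArrayList<>();
        for (var e : PREREQS.entrySet())
            if (!done.contains(e.getKey()) && done.containsAll(e.getValue()))
                out.add(e.getKey());
        return out;
    }
}
```

The initial split in the activity diagram corresponds to the three tasks with empty prerequisite sets, which are all available at the start.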
11.1.2 State of Environmental Sensors
In an AAL system, sensory data is captured from the AAL space so that the applications within the system can understand the state of the space and make appropriate adjustments. In this scenario, the state of the environment needed to be captured in order to track the progress made by the OAwD in making a cup of tea, following the activity diagram introduced in the previous section. For the purpose of this simulated test-case, embedded sensors were theorized for the kettle, faucet, cabinet, teabag container, and cup. Figure 15 illustrates the binary state machines for the individual objects required in the tea-making process. Since the AAL middleware platforms cater to simple, binary sensors (high-level data), and since the hardware connection layer is not considered in this evaluation, the included sensors have at most two states that alternate when a certain threshold is reached. For instance, the state of the faucet is switched to ON when its embedded sensor reaches a specific threshold, indicating that the water is running. Together, the states of all these individual devices helped determine the progress made by the OAwD within the tea-making process, and select the most appropriate audio prompt in case they demanded assistance via the assist button. The assist button was simulated as a switch, worn on the wrist of the OAwD, that would be pressed when they demanded assistance.
[Figure content: binary state machines for Faucet (ON/OFF), Kettle (FULL/EMPTY, MOUNTED/UNMOUNTED, POWERED/UNPOWERED, Water BOILED/UNBOILED), Kettle Switch (ON/OFF), Cabinet door (OPENED/CLOSED), Tea Box (ACTIVE/IDLE), Garbage Bin (ACTIVE/IDLE), and Assist Button (ON/OFF); transitions occur when a sensor reaches or drops below its threshold, or, for the Tea Box and Garbage Bin, after 60 seconds without sensor activity]
Figure 15. Individual state diagrams of the different sensor devices involved in the audio
prompting system for tea-making.
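The threshold behaviour just described can be sketched in plain Java. This is an illustration only: the class and method names are assumptions, not part of either platform, and the 60-second idle transitions of the tea box and garbage bin are omitted for brevity.

```java
/**
 * Minimal sketch of the binary sensor model in Figure 15: a reading at or
 * above the threshold switches the state to ON; dropping below switches it
 * back to OFF. Names are illustrative.
 */
public class ThresholdSensor {
    private final double threshold;
    private boolean on = false;   // default state, e.g. Faucet OFF

    public ThresholdSensor(double threshold) { this.threshold = threshold; }

    /** Feed a raw reading; returns the resulting binary state. */
    public boolean update(double reading) {
        on = reading >= threshold;
        return on;
    }

    public boolean isOn() { return on; }
}
```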
11.1.3 Audio Prompting System
The main component of this test-case scenario was the audio prompting system. While the state of the environmental sensors was being recorded to keep track of the OAwD progressing through the tea-making process, assistance could be demanded at any moment by pressing the assist button. The assist button is one of the devices with the binary states ON and OFF. Table 6 maps the 14 individual audio prompts to the states of the embedded sensors. The colour coding represents the sets of sequential tasks introduced in Figure 14. Blank cells represent the "don't care" condition, in which the state of the particular device is not checked or queried. Because of this, a combination of sensor states is not unique to a particular audio prompt. This conflict is avoided by sequentially querying the sensor states for each audio prompt according to its index, in ascending order. For instance, the sensor states relevant to "Turn on the water" are queried and checked before those of "Remove the kettle from base". Another potential method was to collect all sensor states and determine the correct audio prompt; however, a unique set of sensor states would then be required for each audio prompt. This would add complexity to the system and decrease the reliability of the application, as there would be many points of failure.
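The index-ordered lookup with "don't care" cells can be sketched as follows. This is a hypothetical illustration: the rule contents, sensor names, and fallback are invented for the example and do not reproduce Table 6.

```java
import java.util.*;

/** Illustrative sketch of an ordered prompt lookup with "don't care" cells. */
public class PromptSelector {
    /** A rule names only the sensors it cares about; absent sensors are "don't care". */
    record Rule(String prompt, Map<String, Boolean> required) {}

    /** The first rule (in ascending index order) whose named sensor states all match wins. */
    static String select(List<Rule> rulesInOrder, Map<String, Boolean> states) {
        for (Rule r : rulesInOrder) {
            boolean match = true;
            for (var e : r.required().entrySet())
                if (!Objects.equals(states.get(e.getKey()), e.getValue())) { match = false; break; }
            if (match) return r.prompt();
        }
        return "call caregiver"; // illustrative fallback when no rule matches
    }
}
```

Because rules are checked in ascending order, an earlier rule can shadow a later one even when the later rule's conditions also hold, which is exactly how the ambiguity from overlapping blank cells is resolved.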
The behaviour of the audio prompting system is shown in Figure 16. The core of this state machine consists of the following four state types: wait mode, active mode, audio prompt, and call caregiver. The wait mode state is the default state of the system, as long as the assist button remains in the OFF state. When the assist button transitions into the ON state, the application enters active mode. Upon entry into this state, a counter is incremented. The counter is used to determine the number of audio prompts the prompting system has delivered to the assisted person. When the counter reaches an arbitrary limit (5 in our case), the state transition occurs from active mode to call caregiver instead of to a state that issues an audio prompt. The action within the call caregiver state invokes the informal caregiver of the assisted person to intervene, as the audio prompting system has failed to elicit the anticipated task from the assisted person. The caregiver can be contacted via SMS or email.
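The counter logic of the wait/active/call-caregiver modes might look like the outline below. Names are illustrative, and the reset of the counter on task completion is an assumption of this sketch rather than something stated in the scenario.

```java
/**
 * Minimal sketch of the mode logic in Figure 16: each assistance request
 * increments a prompt counter; once the counter reaches the limit (5 here),
 * the caregiver is called instead of issuing another audio prompt.
 */
public class PromptingController {
    static final int LIMIT = 5;
    private int count = 0;

    enum Action { AUDIO_PROMPT, CALL_CAREGIVER }

    /** Called when the assist button turns ON (or a prompt times out). */
    public Action onAssistRequested() {
        count++;
        return count >= LIMIT ? Action.CALL_CAREGIVER : Action.AUDIO_PROMPT;
    }

    /** Called when the anticipated task is detected; back to wait mode (assumed reset). */
    public void onTaskCompleted() { count = 0; }

    public int promptCount() { return count; }
}
```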
Table 6. Selection of audio prompts based on the different sensor states. A blank cell
indicates that the state of that particular sensor is not considered (don't care).
If the counter is below the set limit, the assist mode state transitions to one of the 14 states that each represent a unique audio prompt, depending on the progress the assisted person has made in the tea-making process. As the assisted person completes each activity within the tea-making process, the virtual embedded sensors are expected to detect the specific environmental conditions that imply task completion. For instance, the state machine for the faucet in Figure 15 goes from the OFF state to the ON state to indicate that the assisted person has turned on the faucet. The combination of sensor states at any given time indicates the state of the environment, as explained previously, which is used to track the progress of the assisted person in the tea-making process. This status is used to select the appropriate audio prompt when assistance is requested by the assisted person through the activation of the assist button. Table 6 links the audio prompts to the different sensor states. It should be noted that multiple audio prompts can be derived from a single set of sensor states due to the overlapping nature of the blank cells (which can take any value). To overcome this issue, as shown in Figure 16, the guard conditions that check the combinations of different sensor states are evaluated in a sequential, incremental fashion that resembles the overall tea-making process. The checking of the different sensor states occurs in ascending order of the index numbers shown in Figure 16. For example, the states of the environmental sensors for index 1, referring to the audio prompt "Turn on the water", are checked before the sensors relevant to index 2. The reasoning for this approach is discussed in detail in section 13.
[Figure content: state machine with Wait Mode (W), Assist Mode (A), and audio prompt states (e.g. "Turn on the water": play the prompt, start a timer, wait 10 seconds for task completion such as faucet = ON); transitions for task completion detected (t < 10, back to Wait Mode, switch assist button to OFF) and not detected (t = 10, increment count, back to Assist Mode); and a call caregiver state entered when count >= 5. Notes in the figure: the OAwD enters the tea-making station within the kitchen; the assist button represents a method by which the OAwD can express their need for assistance; the sensors in this application are simulated based on the Internet of Things (IoT) paradigm.]
Figure 16. The generic state machine of the audio prompting system for aiding OAwD with
making a cup of tea. Complete diagram is available in the appendix.
Once the audio prompt has been delivered to the OAwD, the application continuously (every second) queries the states of the sensors that are relevant to the issued audio prompt. If the appropriate action, one that continues the tea-making process, is taken by the OAwD, the state machine transitions back to wait mode and switches the assist button to the OFF state. Conversely, if no action is detected within 10 seconds of issuing the audio prompt, the application enters assist mode and the counter is incremented. Depending on the counter value, the caregiver call is invoked or the application goes through the sequential checking of sensor states to determine the appropriate audio prompt. The ordered checking of sensor states for each audio prompt is based on its index.
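The once-per-second query with a 10-second timeout can be outlined as below. To keep the sketch testable, real clock time is replaced by an array of per-second query results; that substitution, and all names, are assumptions of this illustration.

```java
/**
 * Sketch of the post-prompt check: after a prompt is issued, the relevant
 * sensor condition is queried once per second for up to 10 seconds.
 */
public class CompletionWatcher {
    static final int TIMEOUT_SECONDS = 10;

    /**
     * queryResults[i] is the sensor condition observed at second i+1.
     * Returns true (back to wait mode) if the condition holds within the
     * timeout, or false (enter assist mode and re-prompt) otherwise.
     */
    static boolean taskCompleted(boolean[] queryResults) {
        for (int t = 0; t < Math.min(queryResults.length, TIMEOUT_SECONDS); t++)
            if (queryResults[t]) return true;
        return false;
    }
}
```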
11.2 Outcome Measures of Maintainability
The secondary purpose of conducting the scenario-based evaluation was to analyze the
architectural-based quality attribute of maintainability through a quantifiable approach. The
evaluations presented here represent maintainability from the perspective of the application
developers, who use the AAL middleware platforms to build AAL services or applications for
the end-users.
UniversAAL used an object-oriented programming paradigm, while HOMER was based on a GUI created using Java Swing. Therefore, the outcome measures used for these evaluations mostly related to the effort, in terms of logged time, expended on each activity. The objective measures are listed as follows:

- Effort (time in minutes) for installing each platform on the main gateway. This includes setting up the development environment for each platform.

- Total effort (est. time in minutes) for developing the test-case scenario. This includes the total time spent on documentation, set-up of the platform, requesting third-party support, and testing the test-case scenario.

- Effort (time in minutes) for modifying the business logic of the application. This includes adding tasks within the tea-making process with the appropriate sensors and audio prompts. Figure 17 illustrates the chosen modifications and their implications on the activity diagram introduced in the previous section. This included adding tasks for opening a drawer and taking a spoon out of it, which precede the task of removing the teabag from the cup and disposing of it into the trashcan.

- Effort (time in minutes) for adding middleware instances to the system. This involved connecting an additional cabinet and cup (embedded sensors) and merging them with the application.

- Number of requests for external support made when the provided resources (i.e. tutorials and documentation) were not sufficient to fix the issue at hand. Requests were made via community forums and by emailing the staff of the platform.
Long-term temporal measurements were based on the past logs during the development phase,
while short-term temporal measurements were based on a timer.
11.3 Case Study Approach
Due to limited resources, the use of multiple participants for the scenario-based evaluation was not possible for this project. Therefore, a case study approach was adopted in which the main investigator of this thesis project was the sole participant. Hence, the results of the scenario-based evaluation and the guidelines are based on the perspective of a novice user. The investigator, the sole participant of the scenario-based evaluation, has completed a Bachelor of Engineering in biomedical engineering and was proficient in the Java programming language, which was required to operate the AAL middleware platforms included in this study. The implications of the case-study approach and of the characteristics of the sole participant on the results are discussed in section 13.
[Figure content: the tea-making process before and after the modification; the modified version adds the tasks "Open drawer to retrieve spoon" and "Take spoon out of drawer"]
Figure 17. Modification of the test-case scenario by adding two activities to the tea-making process. The grey tasks represent the modifications.
12 Results
In the second phase of this project, a test-case scenario was modelled and designed to be implemented on HOMER and UniversAAL in order to evaluate their functions and features. Due to time and resource constraints, only the investigator was able to partake in the scenario-based evaluation. Involving additional users would have required a team of developers who already had general familiarity and skill with the middleware platforms under study. This is discussed in greater detail as a limitation in section 13. Devices (i.e. sensors and actuators) were only simulated in this scenario-based evaluation; hence, middleware platform features corresponding to hardware abstraction were not evaluated. In this subsection, the details of the design and implementation of the test-case scenario are presented for both of the selected AAL middleware platforms, UniversAAL and HOMER.
12.1 UniversAAL
The architecture of the test-case scenario on the UniversAAL platform is shown in Figure 18. This overview illustrates the logical components of the application. As shown in the diagram, this AAL middleware platform mainly revolves around three communication buses. These buses allow for the decoupling of the data-collecting components (i.e. sensors) from the actual application. The different aspects of the mapping between the platform-independent test-case scenario and UniversAAL are presented in the following subsections.
[Figure content: the application's context subscribers (Kettle: PROP_IS_BOILED, PROP_IS_FULL, PROP_IS_MOUNTED, PROP_IS_PLUGGED, PROP_IS_POWERED_ON; Cup: PROP_IS_FULL, PROP_IS_OUTSIDE; Faucet: PROP_IS_ON; Garbage, Teabox, and Cabinet: PROP_IS_OPEN; SwitchActuator: PROP_HAS_VALUE); a service caller for the assist button switch; the context, service, and user-interface buses; per-device context publishers; a service callee offering "turn off" and "get assist button" services; the audio prompt selector with its UI caller and Boolean tea-making object state variables; and the domain ontology model (Kettle, Garbage, Cup, Cabinet, and Tea box, each with Boolean properties)]
Figure 18. Audio prompting system on UniversAAL's system architecture.
12.1.1 State of the Environmental Sensors
Sensory data was simulated for each device in its own OSGi bundle (Java module). These bundles could be dynamically installed, started, stopped, and uninstalled during runtime. Additionally, their dependencies on other bundles or third-party libraries could be dynamically managed with the help of the POM XML file of the Maven plugin. These bundles published their contextual information (i.e. their states) to the context bus, to which the main application bundle was subscribed. The environmental sensors made use of the context publisher, a UniversAAL wrapper for communication via the context bus. On the other hand, the application module used the context subscriber to gather context relevant to its own business logic. The application module filtered the context based on its location and function or device type. Since the elements of the application were all based in the kitchen, filtering the context based on location alone was limiting. Using the provided ontology, it was possible to use a coordinate system and nested, finer-grained locations to differentiate between the different context publishers; however, this was deemed more time-consuming and complex than the alternative method. The alternative was to create a domain-specific ontology using a feature provided by UniversAAL. UniversAAL provided a UML editor that converted the created ontology models, as seen in Figure 19, into generated code that all the bundles could use to specify the types for their bus wrappers. For instance, a kettle ontology was created with 5 data types for its different sub-parts and embedded sensors. The context publishers and subscribers used the same domain ontology model to exchange messages. The provided sensor ontology offered many sensor types (e.g. water flow sensor, motion sensor); however, distinguishing between sensors of the same type in the same location was an obstacle that would have required more specific locations to be identified. Using a domain ontology was the fastest and most reliable method of accomplishing the objectives set out for this test-case scenario.
Figure 19. Domain ontology model of the tea-making objects in UniversAAL.
12.1.2 Audio Prompting System
Beyond the ontology support and the bus communication, UniversAAL provided no features to help create the business logic of the application, which consisted mostly of the audio prompting system. Most of the application was coded using the Eclipse editor. Once the assist button was pressed, the assist button module used a context publisher to disseminate context to the application module via the context bus. Once the application detected the context, the Boolean state variable (i.e. isAssistButtonOn) changed state and caused the application to transition from the default wait mode to active mode. In active mode, a count variable was incremented and the application entered a series of if-statements to decipher the condition of the environmental sensors. The conditions for these if-statements depended on Boolean variables that were being altered by the virtual sensors. Once the correct audio prompt was determined, it was sent to the user handler using the bus wrapper for the UI bus, the UI caller.
The assist button bundle took on the role of a service callee, since it provided to the main application bundle the functionality for switching off the assist button. The application bundle's service caller would request this service when the button needed to be switched from the active to the idle state. Relating to the application's business logic, this service was invoked when the assisted person carried out the action requested by the audio prompt within the set time frame. The bidirectional arrows in Figure 18 represent the service responses, which indicate whether the call or request was handled successfully. Once the service had carried out its function, the assist button bundle's context publisher announced the idle or low state of the assist button. The application bundle used this announced context as feedback to halt the repeated audio prompts to the assisted person. When the assisted person pushed the assist button, a context would be published using the bundle's context publisher, and the application bundle would start initiating audio prompts until the assisted person carried out the anticipated task within the tea-making process.
12.2 HOMER Overview
The test-case scenario was also realized on the HOMER platform, as shown in Figure 20. This implementation is composed of three main sections: (1) the flat designer, (2) the scenarios (state machines), and (3) the system state variables. The flat designer was used to construct the physical model of a kitchen, including sensor/actuator placement and configuration and the management of room coordinates. Secondly, the scenarios, or state machines, were used as functions of the incoming sensory input to control actuators and carry out a series of other actions. Much of the logic of the audio prompting system was handled within the scenario window, in which multiple state machines were created. As shown in Figure 20, 14 unique state machines were created for the entire application. Lastly, a set of system state variables was used as common variables between the concurrently running state machines. For this application, 14 system state variables were needed.
Figure 20. Audio prompting application on the HOMER setup.
12.2.1 State of the Environmental Sensors
Of the 14 scenarios or state machines created, 12 were directly linked to the 12 individual sensors present in the flat designer. All of these state machines consisted of 3 states: (1) an initialization state, (2) a low or OFF state, and (3) a high, ON, or active state. An instance of this type of scenario is shown in Figure 21. Transitioning between the ON and the OFF state depends on the conditions of the specified water sensor placed in the flat designer. The action within each of the binary states changes the system state variable called Faucet (a Boolean) to either false (OFF) or true (ON). Similar scenarios are deployed for all the other relevant sensors within the application. The resultant changes in the system state variables contribute to the transitioning of states in the audio prompting system and the tea-making process tracker scenarios, which are described in the subsequent section. Table 7 illustrates the mapping between the provided sensor data models of the HOMER platform and the tea-making objects of the platform-independent use-case scenario.
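The pattern of per-sensor scenarios writing shared Boolean system state variables can be sketched as follows. The names are illustrative only; HOMER itself is configured graphically, and this code does not use its API.

```java
import java.util.*;

/** Illustration of HOMER-style per-sensor scenarios sharing state variables. */
public class HomerSketch {
    /** Shared system state variables (Booleans) read by all state machines. */
    static class SystemState {
        private final Map<String, Boolean> vars = new HashMap<>();
        void set(String name, boolean v) { vars.put(name, v); }
        boolean get(String name) { return vars.getOrDefault(name, false); }
    }

    /** Per-sensor scenario: initializes to OFF, then toggles on threshold events. */
    static class FaucetScenario {
        private final SystemState state;
        FaucetScenario(SystemState s) { state = s; state.set("Faucet", false); }
        void onSensorEvent(boolean aboveThreshold) { state.set("Faucet", aboveThreshold); }
    }
}
```

Because every scenario writes into the same `SystemState`, a concurrently running state machine (such as the tea-making process tracker) can read a persistent snapshot of the sensors instead of having to catch each transient sensor event itself.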
Figure 21. Faucet scenario.
Table 7. Mapping of the tea-making objects/devices to HOMER-specific sensor/actuator
data model (IEEE 11073).
Tea-making object/device      HOMER-specific sensor data model
Faucet                        Water sensor
Teabag Container              Switch sensor
Cabinet door                  Contact closure sensor
Garbage bin lid               Contact closure sensor
Assist button                 Switch sensor / Switch actuator
Kettle (Full)                 Water sensor
Kettle (Mounted)              Switch sensor
Kettle (Powered)              Switch sensor
Kettle (Plugged)              Switch sensor
Kettle (Boiled)               Temperature sensor
Cup (Full)                    Water sensor
Cup (Inside Cabinet)          Motion sensor
The tea-making process tracker, as shown in Figure 22, is responsible for monitoring the 12
different virtual sensors via the system state variables to keep track of the progress of the assisted
person who has undertaken the tea-making task. Each state in Figure 22 represents the task that
the user needs to complete to progress towards making a cup of tea. Initially, the state machine
points to the Turn On the Water state but can move to the other parallel set of tasks (yellow tasks
in Figure 14). The concurrently running state machine, the audio prompting system, monitors a
system state variable known as current state to decide which particular audio prompt to play for
the assisted person. Each of these states has two main transitions to its next indexed state. One
transition consists of a timer event that checks the status of the relevant sensors (using the system
state variable, as it is persistent, unlike sensor events) and uses that status as the condition. The
other transition is triggered by a sensor event from the sensor relevant to that particular state. The
rationale for this design was that the HOMER platform is purely event-driven, so sensor events
need to be persisted because their status must be checked regularly.
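The dual-transition pattern described above can be sketched as follows. This is an illustrative Python sketch, not HOMER configuration; the variable and state names are hypothetical. The key idea is that a sensor event persists its effect in a system state variable, so a timer transition re-checking that variable can still advance the tracker even if the one-shot event was missed.

```python
# Sketch of one tracker state with HOMER-style dual transitions:
# a timer event polls a persistent system state variable, while a
# sensor event can fire the same transition immediately.

class TrackerState:
    def __init__(self, name, watched_variable):
        self.name = name
        self.watched_variable = watched_variable  # persistent system state variable

state_vars = {"water_on": False}   # persistent, unlike one-shot sensor events
current = TrackerState("Turn On the Water", "water_on")

def on_timer_tick():
    """Timer transition: re-check the persistent variable each tick."""
    return state_vars[current.watched_variable]

def on_sensor_event(variable_name):
    """Event transition: fires when the relevant sensor raises an event."""
    state_vars[variable_name] = True   # persist the event's effect
    return variable_name == current.watched_variable

# A missed event is still caught later by the timer, because the variable persists.
advanced = on_sensor_event("water_on") or on_timer_tick()
```

Either path advancing the state machine reflects the redundancy built into the tracker's transitions.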
Figure 22. Tea-making process tracker.
The cluster of cross-cutting transitions accounts for the possibility of the assisted person
switching from one linear set of tasks to another. For example, the assisted person may be
completing tasks related to filling the kettle with water and boiling the water (green tasks in
Figure 14), and may at any point break off and complete the other set of tasks involving taking
out the cup from the cabinet and placing a teabag in it (yellow tasks in Figure 14). Between the
two concurrent paths, audio prompts that followed the previously completed task within the tea-
making process were given priority. For example, if the person has just opened the cabinet door
and requested assistance through the audio prompting system, priority is given to the audio
prompt for the next task within that path, which would be the retrieval of the cup from the
cabinet. This prioritization was made possible by indexing the progress of each concurrent path
and creating conditions on the state transitions that reflect the prioritization; both parallel sets of
activities were indexed with two separate system state variables, in addition to the "current
state" variable.
12.2.2 Audio Prompting System
The audio prompting system scenario, as shown in Figure 23, is similar to the conceptualized
state machine (c.f. Figure 16) of the tea-making scenario. Once the system has initialized, the
application enters the wait mode state and stays there until the assist button sensor is activated.
Upon assist button activation, the wait mode state transitions to the active mode state, where a
counter is incremented. After 1 second, a transition is made to one of the states that issue the
appropriate audio prompt, selected based on the current state of the environment (represented by
a system state variable from the tea-making process tracker). For HOMER, one-second timer
events are used for transitioning between states where sensor events could not be used or were
not suitable. Once the audio prompt has been invoked, a 10-second counter starts, and the state
machine transitions back to the assist (active) mode if the appropriate sensor event for that
particular audio prompt has not been raised. These sensor events correspond to the assisted
person following the instructions of the audio prompt. If the count has reached 5 and the assist
button has been activated, the audio prompting scenario transitions to the call caregiver state and
the application terminates. The impact of HOMER's provided features on building this
application is elaborated in the discussion section.
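The escalation logic above can be condensed into a small sketch. This is illustrative pseudologic in Python, not HOMER code; the function and task names are hypothetical. Each assist-button press plays the prompt for the current tracker state, and once the attempt counter reaches 5 the scenario escalates to calling the caregiver.

```python
# Sketch of the prompt-escalation logic: per assist-button press,
# either reset (task done), escalate (5 unanswered attempts), or prompt.

MAX_ATTEMPTS = 5

def handle_assist_press(current_task, attempts, task_completed):
    """Return (action, new_attempt_count) for one assist-button activation."""
    if task_completed:
        return ("wait", 0)                        # back to wait mode, reset counter
    attempts += 1
    if attempts >= MAX_ATTEMPTS:
        return ("call_caregiver", attempts)       # terminal escalation
    return ("prompt:" + current_task, attempts)   # task-specific audio prompt
```

For instance, a press while "retrieve cup" is the pending task yields the prompt for that task, while a fifth unanswered press yields the caregiver call.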
Figure 23. Audio prompting system scenario.
12.3 Objective Measures of Maintainability
Table 8 presents the objective outcome measures obtained for maintainability-related tasks
while implementing the test-case scenario on each platform. Installing the platform was rapid for
HOMER at five minutes, compared to the time spent for UniversAAL. Downloading the zip file
and running the main batch file was the only prerequisite for installing and running the HOMER
platform. UniversAAL demanded the setup of the Eclipse IDE8, the Java Development Kit9,
and AAL Studio within the Eclipse IDE before any development work could begin. A similar
scale of difference was seen for the total development time of the use-case scenario on both platforms.
8 https://eclipse.org/
9 http://www.oracle.com/technetwork/java/javase/overview/index.html
This time was estimated from the number of hours spent on each platform, derived from the
number of days multiplied by the average number of hours spent working on each platform to
implement the test-case scenario. For universAAL, approximately 35 days (at a rate of 4 hours
per day) were spent learning the platform and implementing a working prototype of the test-case
scenario. For HOMER, approximately 4 days (at a rate of 8 hours per day) were spent on the
identical task. The other temporal data was collected using a stopwatch timer.
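The conversion from the stated day/hour rates to the minute figures reported in Table 8 is a single line of arithmetic per platform:

```python
# Reproducing the development-effort entries of Table 8 from the stated rates.
universaal_minutes = 35 * 4 * 60   # 35 days at 4 hours/day
homer_minutes = 4 * 8 * 60         # 4 days at 8 hours/day

assert universaal_minutes == 8400  # matches Table 8
assert homer_minutes == 1920       # matches Table 8
```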
Table 8 - Objective outcome measures of maintainability for various tasks on UniversAAL
and HOMER.
Measure UniversAAL HOMER
Effort in installing platform (min) 35 5
Effort in developing test-case scenario (min) 8400 1920
Effort in adding activities (min) 32 38
Effort in adding cabinet & cup items (min) 9 12
Number of requests for external support 11 0
Modifying the test-case scenario of UniversAAL to accommodate the additional activities of
opening the drawer and taking out the spoon within the tea-making process took 32 minutes to
complete. This endeavour involved adding two bundles, as context publishers, for the sensors
representing the drawer and the spoon. Once these components were ready to publish events on
the context bus, the bundle containing the application business logic was modified to add two
additional context subscribers. Furthermore, two audio prompts for the two added activities were
created, indexed, and fitted into the ordering of the other audio prompts. The domain ontology
model was altered to include the drawer and spoon as IoT devices.
The same task on HOMER took slightly longer to complete. For HOMER, two binary state
machines, one for each sensor representing the two new objects, were created to manipulate two
new system state variables. Modifications to the tea-making process tracker included adding two
states in between the states that represented the audio prompts of index 13 and 14, along with
their transitions to other states. The audio prompting scenario also required two additional states
for the two new audio prompts, in addition to rearranging the audio prompting index and other
system state variables. Lastly, the sensors had to be placed within the flat designer before they
were linked to the state machines in the scenario window.
Another task for measuring maintainability was to add duplicates of existing items, namely a
cabinet and a cup. For UniversAAL, the two existing bundles of the context publishers for the
cabinet and cup had to be duplicated and indexed, without modifying the application logic. This
was based on the assumption that opening any cabinet and retrieving any cup from it was
sufficient for the completion of the two associated activities within the tea-making process. The
same task was slightly more tedious and lengthy on HOMER. After adding the two additional
sensors for the cabinet and cup, their binary state machines had to be created, along with their
new system state variables. For the tea-making process tracker, new state transitions were added
to incorporate sensor events from the newly created sensors. Additional transitions also had to
be included from the audio prompt states associated with the cabinet and the cup to the wait
mode.
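The "any duplicate suffices" assumption can be expressed as treating all sensors of a duplicated object as one logical event source. The sketch below is illustrative only; the sensor identifiers are hypothetical and do not come from either platform.

```python
# Duplicated cabinet sensors treated as interchangeable: opening either
# cabinet satisfies the same tea-making step.

cabinet_sensors = {"cabinet_door_1", "cabinet_door_2"}  # original + duplicate

def cabinet_opened(raised_events):
    """True if any cabinet's contact-closure sensor has fired."""
    return bool(cabinet_sensors & set(raised_events))
```

In UniversAAL this corresponds to duplicated, indexed context publishers feeding one subscriber; in HOMER it required explicit extra transitions per duplicated sensor.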
The last metric for measuring maintainability was counting the number of requests made to
external support while developing the test-case scenario on each platform. For UniversAAL,
support was routinely needed due to discrepancies between the platform versions and the
accompanying documentation and developer tools. HOMER's documentation was updated in
step with each released version, so this was not an issue.
13 Discussion
As seen in the previous section, UniversAAL and HOMER are very different AAL middleware
platforms. Since UniversAAL follows an object-oriented programming paradigm and HOMER is
based on its own GUI, the skill sets required to operate these platforms differ drastically. To
operate the UniversAAL runtime platform, a developer must be proficient with the Java
programming language (with basic knowledge of the OSGi framework), the Eclipse IDE, Maven10
(a software project management tool), the entire tree structure of the UniversAAL ontology, and
the coding techniques for using the wrappers for bus communication. In contrast, HOMER
10 https://maven.apache.org/
requires one to learn the basics of finite state machines and the HOMER-specific GUI with its
limited features. Thus, UniversAAL initially required much greater effort than HOMER because
it demanded a large set of prerequisite knowledge and experience from the developer. HOMER
also accelerated the development process because it provided the state machines to form the
logic of the applications, whereas UniversAAL does not provide any significant leverage for
adding business logic. Moreover, defining ontologies for the wrappers (e.g. context subscribers
and publishers) added overhead that helps explain the time discrepancies. As a result, for
UniversAAL, most of the development effort was spent on creating matching ontologies between
the application and the sensor modules rather than on the business logic. For the HOMER
platform, the greatest time consumption was verifying state machine details and making
modifications once the application became complex.
Despite the increased effort required for UniversAAL in developing the test-case scenario,
HOMER had greater times for performing the two modifications. To reiterate, one modification
required adding two activities within the tea-making process with their appropriate audio
prompts and sensors, while the second required installing duplicate tea-making objects (with
embedded sensors). In both cases, UniversAAL was faster to modify but overall slower when it
came to developing the test-case scenario. The role of the UniversAAL bus was handled by the
system state variables of HOMER; however, new system state variables had to be defined for
each sensor. The increased modification time for HOMER can be explained by the poor visual
feedback and indications from the GUI, which made identifying sensors difficult. The flat designer
portrayed a room with clusters of overlapping sensors that were difficult to differentiate, which
made testing using the simulator essentially useless for debugging. As the application grew in
size in HOMER, the list of events (state transitions), actions, and system state variables increased,
and these additions made the scenario editor cumbersome to manage. Even though UniversAAL
included no feature to aid in developing the use-case scenario, efficient coding and good
software practices made modifications manageable, though this may vary from one developer to
the next.
The definition of the tea-making objects was a critical difference between the two platforms. In
UniversAAL, the domain ontology model was highly customizable, allowing models for all the
different items with their own unique properties, expressed through the device ontology. The
implementation of the domain ontology model is shown in Figure 19. An example is the kettle
ontology, which consisted of five different Boolean properties corresponding to the various
embedded sensors. An alternative route was to define each sensing part of every item using the
sensor ontology (e.g. switch sensor). This approach required a more precise specification of
location, in terms of coordinates in addition to room function, to differentiate between sensors of
the same type. For instance, the contact closure sensor could have been used for both the cabinet
door and the garbage bin lid, so their specific location (xyz coordinates instead of kitchen alone)
would be the differentiating factor for the application. Much development time was saved by
using the domain-specific ontology models that created custom ontologies through UML
drawings and automatic code generation. The downside of using a domain-specific ontology is
that it ends up being unique to a particular AAL space and set of developers; integrating
components from external sources and opening the space to other developers would pose an
obstacle. In HOMER, the tea-making objects had to be expressed as one of the available
sensor/actuator types defined in the IEEE 11073 home automation standards.
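A minimal sketch of the kettle's domain model, as described above, is one device with five Boolean properties, one per embedded sensor. This mirrors the idea of the customized device ontology, not the actual UniversAAL API; the class and property names are illustrative.

```python
# Sketch of a kettle domain model with one Boolean property per embedded sensor.
from dataclasses import dataclass

@dataclass
class Kettle:
    full: bool = False      # water sensor
    mounted: bool = False   # switch sensor
    powered: bool = False   # switch sensor
    plugged: bool = False   # switch sensor
    boiled: bool = False    # temperature sensor

    def ready_to_pour(self) -> bool:
        """Hypothetical derived property combining two sensor states."""
        return self.full and self.boiled
```

Under the alternative sensor-ontology route, each of these five properties would instead be a standalone sensor instance distinguished by its xyz coordinates.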
There were several limitations in conducting the scenario-based evaluation for comparing the
AAL middleware platforms. Perhaps the greatest potential source of error is the single
experience of the investigator: these results represent a case study of only one instance.
Different developers may learn at a different pace and apply different strategies when designing
and developing applications on these platforms. For instance, the use of domain-specific ontology
models is highly dependent on the preferences of the developers, and the level of effort required
to use them may vary accordingly. Future work with scenario-based evaluations should incorporate
multiple developers for a more precise comparison and more accurate results. The second
limitation is the limited complexity of the application itself and of the application type. The audio
prompting system for helping dementia patients make a cup of tea could have been much more
complex than presented within this work. Adding a greater level of complexity may have
limited the application on one of the platforms, most likely HOMER due to its limiting interface;
HOMER already shows that it is harder to maintain as the application begins to expand. On top
of this, there are many possible and unique use-case scenarios for tackling the same set of
business problems. Additionally, the use case covers only a single application type, while the
AAL field has many potential ones for which the platform may be used. Therefore, future
scenario-based evaluations with multiple platforms should incorporate multiple application
types with different computational techniques and technologies.
Chapter 6 HOMER Usability Test
14 Motivation
In addition to Antonino et al. [17], the universAAL team [11] have conducted self-evaluations
of their runtime platform based on several software quality attributes. Antonino et al. [17]
have argued that usability and portability are not suitable quality attributes for the evaluation of
AAL platforms; however, these quality attributes should be considered from the perspective of
the platforms' users, which are the application developers of AAL. UniversAAL has conducted
validation studies based on these quality attributes and more. According to their validation
reports [19], their runtime support has been rated as low by external developers based on field
tests for usability, portability, and maintainability. The tutorials of UniversAAL have been rated
as low for learnability, accuracy, and understandability based on the expert reviews of their
internal developers. HOMER [15], on the other hand, has not reported any self-evaluation of the
platform, nor has any similar work been conducted. Therefore, this part of the thesis deals with
conducting evaluations of HOMER for comparison against UniversAAL on the quality
attributes of maintainability, usability, and learnability. The universAAL results are qualitative,
while the work presented here is quantitative, and thus it is difficult to directly compare the
results against one another.
15 Methodology
15.1 Participant Recruitment and Inclusion Criteria
Participants were recruited individually from the IATSL lab, and their eligibility to partake in this
usability test was gauged using the brief assessment survey in Appendix C. The pre-assessment
survey ensured that the participants were from a technical background (i.e. engineering,
computer science, science, or math) to simulate novice HOMER users. Their general knowledge
of state machines was also assessed but was not required for their participation in this usability test.
15.2 Usability Tasks
This user-based evaluation of one-hour duration was broken into five main phases. The main
purpose of the first part was for the user to become accustomed to HOMER's GUI by
completing a tutorial from HOMER's manual (sections 3.5-3.9) [23]. The study participants
completed the tutorial at their own pace, with occasional clarification provided by the study
investigator when needed. For this part, learnability was quantified as the completion time for the
HOMER tutorial. The tutorial included constructing a room, placing and configuring sensors and
actuators, and designing and testing logic via state machines in the scenario window.
The other four parts of this usability test each involved a maintainability-related task. The four
tasks administered corresponded to sub-categories of maintainability according to ISO-912611,
which are analyzability, changeability (or modifiability), testability, and composite (a combination
of the first three categories). The definitions of these sub-categories are shown in Table 9. The
Summated Usability Metric (SUM) [24] was used to quantify the usability of each task. To put it
briefly, SUM combines each task's completion time (efficiency), task error rate and success of
completion (effectiveness), and satisfaction scores to derive a single score indicating the overall
usability of the specific task on the system under study. Table 10 describes the nature of each
task in greater detail. The user was presented with a floor plan layout of three rooms (bedroom,
study, and kitchen) connected via doors, as shown in Figure 24. Motion sensors were placed on
both sides of the two doors between the three rooms. In each room, there was a switch
actuator, which acted as the light. The switch actuator in a room was turned on when a motion
sensor within that room detected motion, while the switch actuator in the room where motion
was previously detected was turned off. This application was known as automatic lighting.
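The automatic-lighting behaviour described above can be sketched as a tiny state machine: motion in a room turns that room's light on and switches off the light where motion was last detected. This is an illustrative Python sketch, not HOMER code, and the room names follow the floor plan in Figure 24.

```python
# Sketch of the automatic-lighting logic used in the usability tasks.
lights = {"bedroom": False, "study": False, "kitchen": False}
occupied = "bedroom"          # assume the person starts in the bedroom
lights[occupied] = True

def on_motion(room):
    """Handle a motion-sensor event from the given room."""
    global occupied
    if room != occupied:
        lights[occupied] = False   # turn off light where motion was last detected
        lights[room] = True        # turn on light in the newly entered room
        occupied = room
```

Walking from the bedroom into the study thus turns the study light on and the bedroom light off, which is exactly the behaviour the participants had to analyze, repair, test, and extend.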
Table 9. Definitions of the sub-categories of maintainability according to the ISO-9126.
Sub-categories of Maintainability Definition
Analyzability The ability to find and identify root causes of problems within the
software.
Changeability The amount of effort needed to modify the system and its
properties.
11 http://www.iso.org/iso/iso_catalogue/catalogue_tc/catalogue_detail.htm?csnumber=22749
Testability The amount of effort needed to verify the proper functionality of
the modifications.
15.3 Outcome Measures
Based on the consultation of the platform developer of HOMER, the expected task completion
time was calculated by obtaining the task completion time of the study investigator (experienced
with the HOMER platform) and multiplying it by a factor of 2 to account for novice users (study
participants). The errors for each task were based on the successful completion of the sub-tasks
or error opportunities. The task completion rate was a binary value where the task was
successfully completed or not. The satisfaction score is based on an average of three questions
asked after the study participant attempted each task. The questionnaire are available in [24]. At
the end of completing all the tasks for this usability study for SUM, the participants were asked
to fill out a System Usability Scale (SUS) [25] questionnaire to obtain their views on HOMER’s
overall usability. The advantage of using SUS is that it provides a generic usability score that can
be compared against AAL middleware platforms with completely different in terms of its
characteristics, features, and functions.
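The two derived measures above can be sketched as follows. The doubling factor is as stated in the text; the SUM combination is shown here as a simple equal-weight average of the four standardized percentage components, which is a simplification of the published SUM procedure rather than its exact formula.

```python
# Sketch of the outcome-measure calculations (simplified, illustrative).

def expected_time(investigator_minutes):
    """Expected novice completion time: investigator's time times 2."""
    return investigator_minutes * 2

def sum_score(completion, errors, satisfaction, time):
    """Equal-weight average of four percentage components (simplified SUM)."""
    return (completion + errors + satisfaction + time) / 4.0
```

For example, an investigator time of 5 minutes yields an expected completion time of 10 minutes for the participants.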
Table 10 - Usability tasks for the HOMER evaluation.
Task: Analyzability
Description: Given the automatic lighting application, the participant was told that the application contained two mistakes and that he/she must analyze the system to find their root causes. The two mistakes were: (1) no sensor event was set for one of the state transitions from the "IN BEDROOM" state to the "IN STUDY ROOM" state, and (2) no action was set for the "IN STUDY ROOM" state.
Error opportunities:
- Double click the red colored transition from the "IN BEDROOM" state to the "IN STUDY ROOM" state to indicate that no event has been specified for that particular transition.
- Double click the state "IN STUDY ROOM" to indicate that no action is chosen for that particular state.
Expected completion time: 3 minutes

Task: Changeability
Description: The user was shown the root causes of the problems from the last task and instructed to fix them so that the automatic lighting application works as intended.
Error opportunities:
- Select the red transition with the missing event, and create a new event by clicking on the "+" sign.
- Specify a new event name and set the event as a sensor event. Select the correct motion sensors from the list of available sensors, and set the message type as motion detected.
- Select the state "IN STUDY ROOM" with the missing action.
- Shift the action "Turn on Lights (Study room)" from available actions to chosen actions.
Expected completion time: 2 minutes

Task: Testability
Description: Given the solution of the previous task, the next task demanded that the user demonstrate to the investigator the testing of the application to verify its functionality. Three different ways of testing were possible.
Error opportunities (3 options):
- Select the scenario simulator and click on the different available sensor events to show the transitioning between the three states.
- Go to the flat window and click on the simulator icon. Move the person figure over each sensor to show the proper functioning of the actuators in response to the sensor events.
- Right click the different motion sensors on the flat window, and select motion detected to demonstrate the state transitions between the different states (rooms).
Expected completion time: 1 minute

Task: Composite
Description: The user was to modify the application by adding a room adjacent to the bedroom, called the shower room. A door and two motion sensors were to be added between the rooms, and a switch actuator within the new room as well. The logic was to be adapted so that it remained congruent with the rest of the automatic lighting application.
Error opportunities:
- Go to the flat window, click edit, and draw a room adjacent to the bedroom.
- Add a door between the bedroom and the newly created shower room.
- Add two sensors on both sides of the door, and configure them as motion sensors.
- Add an actuator within the shower room and configure it to be a switch actuator.
- Go to the scenario window. Add a new state called "IN SHOWERROOM."
- Add two transitions going to and coming from the "IN BEDROOM" state.
- Create two actions for the switch actuator within the new room to turn on and off. Set the actions as chosen either on the correct transitions or states.
- Create a new sensor event for motion being detected in the shower room. Set this event on the transition from the "IN BEDROOM" state to the "IN SHOWERROOM" state. Modify the existing event to include the new motion sensor in the bedroom, and set it on the transition from the "IN SHOWERROOM" state to the "IN BEDROOM" state.
- Test the system using the methods from the previous task.
Expected completion time: 10 minutes
Figure 24 - Flat designer view of the usability test showing the room layout and
sensor/actuator/door placements.
16 Results
16.1 Participant Details
For the usability study of HOMER, nine participants were recruited. According to the pre-
assessment, all the participants were from an academic background of science, technology, or
engineering. Five participants indicated that they had past experience working with assistive
technology, while only two had experience with AAL middleware platforms. Five of the nine
participants had working knowledge of state machines, either from their education or work. The
average time the participants took to complete the tutorial portion of the study was
approximately 22 minutes, with a standard deviation of 6.66 minutes.
16.2 SUM and SUS Results
Table 11 shows the main results of this usability study on the HOMER platform using the
Summated Usability Metric (SUM) [24], [26]. The percentages relate to the quality level as
elaborated by Sauro et al. [26]. Of all the tasks, the highest score was achieved by the testability
task and the lowest score by the task involving a composite of the three other tasks. From the
perspective of task completion and task completion times, the testability task scored perfectly
and the composite task scored the lowest. Interestingly, satisfaction scores were the lowest for
the testability task and highest for the changeability task. Despite the high satisfaction scores for
the changeability task, it showed the highest number of errors, while the analyzability and
testability tasks had the fewest task-related errors. Overall, all tasks and subsections of the
usability score indicated high usability of the HOMER platform by inexperienced users. The
System Usability Scale (SUS) [25], administered at the end of the study, rated the HOMER
platform at 66.33 out of 100.
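For context on the 66.33/100 figure, the standard SUS scoring procedure converts ten 1-5 Likert responses into a 0-100 score: odd-numbered items contribute (response - 1), even-numbered items contribute (5 - response), and the sum is scaled by 2.5.

```python
# Standard System Usability Scale (SUS) scoring.
def sus_score(responses):
    """responses: ten 1-5 Likert answers, item 1 first. Returns 0-100."""
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5
```

A neutral questionnaire (all 3s) scores 50, while fully positive responses (5 on odd items, 1 on even items) score 100; HOMER's 66.33 sits modestly above neutral.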
Table 11 - Results of the usability study in terms of metrics from the Summated Metric
Score (SUM)
Metric
Task 1 (Analyzability)
Task 2 (Changeability)
Task 3 (Testability)
Task 4 (Composite)
Mean SD Mean SD Mean SD Mean SD
SUM 77.5% 26.6% 83.4% 26.1% 87.4% 10.8% 74.2% 44.3%
Completion 88.9% 31.4% 88.9% 31.4% 100.0% 0.0% 77.8% 41.6%
Errors 94.4% 15.7% 86.1% 47.8% 94.4% 15.7% 91.4% 51.5%
Satisfaction 72.5% 18.6% 82.9% 10.9% 55.6% 21.1% 77.0% 17.1%
Times 54.3% 40.5% 75.8% 14.2% 99.6% 6.4% 50.5% 67.0%
17 Discussion
Overall, it can be said that HOMER is a middleware platform that can be learned and operated
very quickly by users with a technical background. This can be attributed to the simple mechanics
of HOMER's logic as a set of state machines and to the limited options of its GUI. From the
designed tasks, testability is HOMER's greatest strength due to the visual feedback given
through the scenario simulator. Even though the task labelled as composite demonstrated the
lowest usability score, the sub-tasks relating to analyzability are most likely the contributors that
lowered the overall score. In fact, comparing the composite task with the other three may not be
suitable, as the composite task merges elements of the analyzability, changeability, and
testability tasks and consists of a higher number of error opportunities, a longer duration, and the
applied concepts of the other three tasks.
Looking at only the first three tasks, analyzability is perhaps the most limiting factor in
HOMER. To briefly summarize, the analyzability task involved the user studying the application
and system setup to find the root causes of problems. In HOMER, this corresponded to analyzing
the set of events, actions, and conditions set for each state transition and state in the application
logic, and the sensor/actuator configuration in the floor planning tool. From the investigator's
personal use and qualitative feedback from the study participants, HOMER's analyzability suffers
because of the difficulty of scanning through all the configurations of the transitions, states, and
hardware components. Additionally, there seems to be a lack of prompts or visual cues to
indicate unsuitable configurations. Upon viewing the components of the SUM for the
analyzability task, it appears that the participants' satisfaction with the completion time was the
factor that lowered the overall SUM score. This may indicate that the visual feedback features of
HOMER are not effective enough to capture the user's attention when they are scanning for
problems with their configurations.
This usability study on HOMER has several limitations that need to be addressed. Firstly, the
learning phase of the study was critical to the performance on the four maintainability-related
tasks. This learning phase required the participants to think critically about the system
architecture of this middleware platform and understand how the floor planning tool and the
state machines fit together to build an application. Depending on motivation and learning speed,
performance on the tasks can vary as a result. Also, participants who are more interested in
learning the platform ask more critical questions while completing the tutorial and hence have a
higher chance of performing better on the tasks. Another limitation is that this usability study did
not incorporate every aspect of the HOMER platform into the design, including the use of
conditions for state transitions, system state variables, and operating or coordinating multiple
state machines. The investigator chose to leave these aspects out to minimize the complexity of
the application, so that the study could be conducted within an hour by participants who were
novice users of the platform. Perhaps the complexity of the application, based on the inclusion of
different HOMER features, needs to be reconsidered as another limitation: a test application that
requires more of the HOMER features would render more accurate results, which may be lower
in this case. Unlike UniversAAL's evaluations, the participants of this study were novices and
had to be partially trained before they could be tested on the maintainability-related tasks.
UniversAAL conducted usability testing with internal developers who had sufficient knowledge
of the middleware platform; their recommendations could have been biased, but also more
accurate due to their greater experience. This fact also makes comparing the usability between
the platforms non-trivial. From the time taken to develop the test-case scenario and from
observing the users perform the tutorials for HOMER, it can be said that HOMER is much faster
for a technical person to learn and use than UniversAAL.
Chapter 7 Guidelines
Based on the findings from the scenario-based evaluation and the usability test on HOMER, this
section, as part of addressing the last research question, aims to suggest recommendations to
guide future AAL middleware platform developments, so that these platforms can achieve
greater adoption rates within the developer community across a wider range of application areas.
For organizational purposes and clarity, the guidelines have been split into platform-specific and
platform-independent subsections.
18 Platform-Specific Guidelines
This section presents guidelines that are specific to the AAL middleware platforms under
comparison, UniversAAL and HOMER. The guidelines for UniversAAL are aimed specifically
at novice users or early adopters, as opposed to expert users of the middleware platform, since
the suggested changes aim to ease its initial use until a greater degree of familiarity and
experience is gained. Rather than targeting a user group based on their skill sets, the guidelines
for HOMER cater to improving platform features so that the platform achieves greater relevance
in the AAL domain.
18.1 UniversAAL
The guidelines for the UniversAAL are the following:
A graphical user interface and code generators for specifying the hierarchical ontological
structure used to filter the context bus (i.e. context event patterns). For each
UniversAAL instance that uses the context bus wrappers (i.e. context subscribers and
publishers), a context event pattern must be specified to map out the ontological structure
used to filter the context events published during runtime. Due to the high flexibility
and expressiveness of the UniversAAL ontology, there are multiple obstacles to
filtering the context bus events accurately and easily. Firstly, the context event
patterns may vary greatly from one developer to the next. Figure 25 and Figure 26
demonstrate the ontological structure of the context event patterns for the context
subscribers of the kettle (a tea-making object used in the scenario-based evaluation). The
ontology in Figure 25 uses the sensor type and room location for filtering the context,
while the one in Figure 26 makes use of the combination of the xyz coordinate system and
room type. During application development, different developers may work on a common AAL
space with interconnected UniversAAL instances while specifying alternative context
subscriber filters, leading to missed context events and extra time spent in
troubleshooting the context wrappers. Secondly, the increasing depth and branching of
the hierarchical ontology structure negatively impacts its interpretation, comparison,
and modification by secondary users. The solution is to introduce an interactive graphical
tool, similar to the one for creating customized ontologies, for creating and modifying
context event patterns for the context bus wrappers. This feature could generate and inject
code for the context event pattern method, as well as generate interactive graphical models
of previously specified patterns. Figure 25 and Figure 26 provide a suitable concept for
this graphical tool.
Figure 25. Example of an ontological structure for context event pattern of the kettle
object.
Figure 26. An alternative ontological structure for the context event pattern of the kettle
object.
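The risk of mismatched filters can be illustrated with a simplified stand-in for the context-event model. The class, method, and property names below are hypothetical and are not the actual UniversAAL API; the sketch only shows how two differently structured patterns can disagree on the same published event:

```java
import java.util.Map;

public class FilterMismatch {

    // Simplified stand-in for a published context event (illustrative only).
    public record ContextEvent(Map<String, String> props) {
        String get(String key) { return props.get(key); }
    }

    // Developer A filters by sensor type and room location (cf. Figure 25).
    public static boolean matchesFilterA(ContextEvent e) {
        return "PressureSensor".equals(e.get("sensorType"))
                && "Kitchen".equals(e.get("roomLocation"));
    }

    // Developer B filters by xyz coordinates and room type (cf. Figure 26).
    public static boolean matchesFilterB(ContextEvent e) {
        return e.get("xyz") != null && "Kitchen".equals(e.get("roomType"));
    }

    public static void main(String[] args) {
        // A kettle event published using Developer A's vocabulary...
        ContextEvent kettleEvent = new ContextEvent(Map.of(
                "sensorType", "PressureSensor",
                "roomLocation", "Kitchen"));

        // ...matches filter A but is silently dropped by filter B.
        System.out.println("A matches: " + matchesFilterA(kettleEvent)); // true
        System.out.println("B matches: " + matchesFilterB(kettleEvent)); // false
    }
}
```

Because filter B keys on properties that Developer A never publishes, the kettle event is silently dropped; a graphical tool that visualizes both patterns side by side would expose such discrepancies before runtime.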
Options for deploying complete, working dummy applications rather than skeleton code
templates for the various bus wrappers. At present, the platform provides simple to-do
empty methods for the specified code wrapper within a wizard. Just getting the buses
to function can take a considerable amount of effort for new platform users, even when
following the provided (and only partially updated) tutorials. The solution is to provide
complete working templates during the creation of a new project. This would include
complete, working code wrappers (and middleware instances) using the default UniversAAL
ontology and some very basic business logic. Users can then simply branch off into the
specifics of their own application, with more reliable version control.
18.2 HOMER
The guidelines for HOMER are the following:
Decoupling the sensors/actuators in the flat designer window from the scenarios. The
scenario window, where all the state machines are constructed, is responsible for the
business logic of the HOMER-based application. The flat designer window allows for the
physical arrangement of the sensors and actuators, and the AAL space layout. In the current
version of HOMER, all references for event transitions and state actions are based on
specific device entities identified within the flat designer. This scheme works adequately
when the application is small and simple. As the application grows in devices and
logic complexity, performing modifications within a large set of cross-cutting state
machines and their associated devices becomes a time-consuming activity,
not to mention the increased chance of introducing errors. The solution is to adopt a
publish-subscribe scheme between the instances in the scenario window (state machines)
and the flat designer (sensors/actuators). For example, a sensor event (a state transition)
could specify a sensor type and room type rather than a specific sensor selected from the
flat designer. This approach also decouples the scenario window from the flat designer,
as multiple interchangeable flat designer layouts can then be implemented for a single
scenario window. Devices can be added and removed without breaking the application,
improving maintainability.
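A minimal sketch of this decoupling, assuming a hypothetical broker class (none of these names exist in HOMER), could key subscriptions on sensor type and room type rather than device identity:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Minimal topic-based broker: scenarios subscribe to a (sensorType, roomType)
// topic instead of a specific flat-designer device (illustrative sketch).
public class DeviceBroker {
    private final Map<String, List<Consumer<String>>> subscribers = new HashMap<>();

    private static String topic(String sensorType, String roomType) {
        return sensorType + "@" + roomType;
    }

    // A scenario (state machine) subscribes by type, not by device identity.
    public void subscribe(String sensorType, String roomType, Consumer<String> handler) {
        subscribers.computeIfAbsent(topic(sensorType, roomType),
                k -> new ArrayList<>()).add(handler);
    }

    // Any matching device in the flat designer publishes to the topic, so
    // devices can be added, swapped, or removed without touching scenarios.
    public void publish(String sensorType, String roomType, String deviceId) {
        subscribers.getOrDefault(topic(sensorType, roomType), List.of())
                   .forEach(handler -> handler.accept(deviceId));
    }
}
```

A scenario subscribing to ("MotionSensor", "Kitchen") keeps working unchanged when kitchen motion sensors are added or replaced in the flat designer, which is precisely the maintainability gain described above.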
Allow multiple conditions to be set for a state transition and linked with operators
such as AND and OR. Currently, HOMER only allows a single condition per
state transition. This limitation was realized while implementing the test-case scenario,
where multiple conditions (from various sensors) needed to be checked before selecting the
current audio prompt state. The solution is to let users create and manage
conditions in the same way as events and actions: events and actions are created and managed
separately from any single state or transition, whereas conditions are currently configured
individually at each transition and cannot be reused across multiple transitions.
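Reusable conditions combined with AND/OR map naturally onto boolean combinators. The sketch below uses Java's Predicate composition over a simple variable map; the condition names and the state model are illustrative, not HOMER's API:

```java
import java.util.Map;
import java.util.function.Predicate;

public class TransitionConditions {

    // A reusable named condition over the system state variables.
    public static Predicate<Map<String, String>> equalsCond(String variable, String value) {
        return state -> value.equals(state.get(variable));
    }

    public static void main(String[] args) {
        // Conditions created and managed independently of any transition.
        Predicate<Map<String, String>> kettleOn  = equalsCond("kettle", "on");
        Predicate<Map<String, String>> cupPlaced = equalsCond("cup", "placed");

        // A transition guard combining both conditions with AND.
        Predicate<Map<String, String>> promptReady = kettleOn.and(cupPlaced);

        Map<String, String> state = Map.of("kettle", "on", "cup", "placed");
        System.out.println(promptReady.test(state)); // true
    }
}
```

The same named conditions can then guard several transitions, and OR or NOT guards follow from `or(...)` and `negate()` without redefining anything per transition.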
Automatic creation of generic state machines and the corresponding actions, events, and
system state variables for sensors and actuators. When working with a large set of
devices, creating simple state machines (scenarios) for each device (i.e. sensor or
actuator) is repetitive and prone to error. The test-case scenario used multiple state
machines with binary states for each individual sensor and actuator, which ultimately
changed the values of the system state variables. The solution is to give the user an
option to generate a new scenario or state machine for each of the deployed sensors and
actuators, with its actions, events, and system state variables automatically defined. This
can save a tremendous amount of time otherwise required for creation and troubleshooting.
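The generation step amounts to a loop over the deployed devices that emits a binary state machine and an automatically named system state variable for each. The sketch below is illustrative and does not reflect HOMER's internal data model:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Generates one binary (ON/OFF) state machine per deployed device, together
// with an automatically defined system state variable (illustrative sketch).
public class ScenarioGenerator {

    public enum State { ON, OFF }

    public static class DeviceStateMachine {
        private final String deviceId;
        private State state = State.OFF;

        DeviceStateMachine(String deviceId) { this.deviceId = deviceId; }

        // A device event flips the binary state and updates the system variable.
        public void onEvent(boolean active, Map<String, String> systemVars) {
            state = active ? State.ON : State.OFF;
            systemVars.put(deviceId + ".state", state.name());
        }
    }

    public static Map<String, DeviceStateMachine> generate(List<String> deployedDevices) {
        Map<String, DeviceStateMachine> machines = new HashMap<>();
        for (String id : deployedDevices) {
            machines.put(id, new DeviceStateMachine(id));
        }
        return machines;
    }

    public static void main(String[] args) {
        Map<String, String> systemVars = new HashMap<>();
        var machines = generate(List.of("kettleSensor", "cupSensor", "audioPrompt"));
        machines.get("kettleSensor").onEvent(true, systemVars);
        System.out.println(systemVars.get("kettleSensor.state")); // ON
    }
}
```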
Compounding multiple sensors/actuators into a single device. One common conflict that
may arise when using the flat designer is the fact that devices may be a composition of
multiple sensors and actuators. In HOMER, there is no feature that allows
sensors/actuators to overlap or be combined into a single device. For the test-case
scenario, the kettle and the cup each had multiple sensors embedded in them and could not
be represented ideally. A potential solution is to incorporate a new composite GUI object
that can represent multiple sensors or actuators at a single location.
19 Platform-Independent Guidelines
This section presents guidelines that are not specific to any particular AAL middleware
platform, but are general suggestions for future development in this domain. Compared to the
platform-specific guidelines, these sit at a higher level of abstraction.
19.1 Add-ons for Different AAL Application Areas, Computational Techniques, & Technologies
Since both UniversAAL and HOMER operate on top of the OSGi framework, their system
architecture is easily expandable through the addition of Java bundles. As a reminder, bundles are
Java JAR files with a manifest file that specifies their dependencies and version. These bundles can
be dynamically installed, started, stopped, updated, and uninstalled during runtime. Future
development should incorporate UniversAAL as the base AAL middleware platform with add-on
bundles for the various AAL application areas. HOMER can be included as an add-on to
UniversAAL for application areas such as location identification and activity recognition. The
goal of this type of add-on bundle would be to convert HOMER-specific implementations to the
bus architecture of UniversAAL, allowing it to communicate with applications created using
different add-on bundles. UniversAAL is more suitable than HOMER as the base AAL middleware
platform primarily because of the loosely coupled nature of its publish-subscribe
mechanism. Additional bundles that deploy a GUI should be implemented to cover the
spectrum of AAL-domain application areas, computational techniques, and
technologies. For instance, an AAL application such as COACH [27], a cognitive
orthotic that uses computer vision and advanced planning techniques to help dementia patients
carry out specific activities of daily living, could potentially use a UniversAAL add-on to
generate its extensive states simply from a GUI built around a simple activity diagram. Such
add-ons can broaden the scope of UniversAAL and render it relevant to a diverse range of
application developers.
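As a concrete illustration of such a bundle, a minimal manifest using standard OSGi headers might look as follows; the symbolic name, version, and package names are hypothetical:

```
Bundle-ManifestVersion: 2
Bundle-SymbolicName: org.example.aal.activityrecognition
Bundle-Version: 1.0.0
Bundle-Activator: org.example.aal.activityrecognition.Activator
Import-Package: org.osgi.framework;version="[1.8,2.0)"
```

The OSGi framework reads these headers to resolve the bundle's dependencies before it can be installed, started, stopped, updated, or uninstalled at runtime.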
19.2 Increasing Abstraction Level: Graphical User-Interfaces over Text-based Interfaces
The comparison between the two platforms in the scenario-based evaluation, together with Q19-22 of the
developer questionnaire (c.f. Appendix A), emphasized that maintainability requires attention
so that users can perform system or application modifications with ease. One issue in this
regard is that the ideal users of these AAL middleware platforms are not clearly defined.
UniversAAL’s text-based interface points to the software developer as the primary creator of
AAL applications, as they are responsible for code creation. Moreover, “AAL deployers”
are recognized as secondary users, as they are the stakeholders who install the AAL application
in AAL spaces. For HOMER, the end-user is not explicitly stated; however, it is implied
that the deployers of the AAL application interact with the GUI and that software developers
make modifications to the platform itself. As of now, creating an AAL application requires
close interaction between the software/hardware developers and an expert within the AAL
domain or the application deployer. Based on these observations, it can be suggested that current
AAL middleware platforms should aim for higher levels of abstraction in the creation of
AAL applications through the use of graphical user interfaces as opposed to text-based interfaces
(i.e. text-based IDEs, command-line interfaces, etc.). GUI-based designs can potentially transition
the role of application developer from the traditional hardware/software developer towards
medical staff, domain experts, and deployers. This can be achieved by deriving add-ons, as
mentioned previously, that rely on a GUI for the provision of their functionality. The benefits include
expanding the AAL middleware platform user base to individuals who work more closely with
the ultimate end-users of AAL applications, the elderly and disabled. Additionally, these
individuals can potentially reduce the involvement of core hardware/software developers, thereby
reducing cost and effort, while creating applications that better match the business goals of this
domain. Lastly, the requirement for customizability and repeatability in application creation
is higher than in other domains because the end-users are diverse in many aspects. From the
perspective of repeatability, the goal should be to increase the abstraction level of the platform tools
to a point where they balance standardization of AAL solutions to cater to the
masses with allowing individuals with less technical expertise to create these solutions. From the
perspective of customizability, the level of alteration allowed by the platform should be just
enough to cover the diversity of the end-users.
Chapter 8 Conclusion
Previous work recognized scenario-based evaluations and quantitative analysis as next stages
towards comparing and evaluating AAL middleware platforms. Based on this statement, this
thesis project aimed to:
1. Identify the quality attributes and areas of difficulty that needed to be addressed in
AAL middleware platforms from the perspective of domain-related novice developers
and researchers.
2. Conduct a comparison between the current prominent AAL middleware platforms using
a scenario-based evaluation technique and a quantitative approach.
3. Generate a set of guidelines that suggest novel features and functions in future AAL
middleware platforms to ultimately achieve wider adoption rates by the AAL community
and tackle a wider scope of challenges within the AAL field.
The findings of this thesis in relation to the objectives listed above were the following:
1. A questionnaire targeting developers and researchers within the AAL community was
conducted, which yielded the following conclusions:
a. Maintainability, the ability of users to perform system modifications, was the
quality attribute of intermediate importance but greatest deficiency in the current
AAL middleware platforms.
b. The users of these AAL middleware platforms had a somewhat uneven
distribution of AAL-specific application areas, technologies and computational
techniques.
2. Simple meal preparation, in the form of making a cup of tea, was selected as the test-case
scenario for the scenario-based evaluation and other quantitative measures. Working with
this test-case resulted in the following observations:
a. UniversAAL adopted a publish-subscribe mechanism and a bus-based
architecture that allowed components to be loosely coupled. HOMER proved to
be tightly coupled, but with a simpler interface that presented many constraints for the
use-case scenario.
b. From the perspective of quantitative measures for maintainability, UniversAAL
demonstrated a steep learning curve for the initial setup of the test-case scenario, yet
considerably less modification effort in the long run. Conversely,
HOMER had a rapid initial setup time for the test-case scenario but proved error-prone
when managing the use-case in the long run.
3. A set of guidelines specific to the features of both platforms was compiled. These
guidelines suggested improvements for specific features of each platform. An
additional set of platform-independent guidelines was generated to guide future work
on AAL middleware platforms, summarized as follows:
a. A base middleware platform that is loosely coupled, such as UniversAAL, with
add-ons for AAL-specific application areas, technologies, and computational
techniques would be ideal.
b. An increase in the overall abstraction level, chiefly a transition from text-based interfaces to
graphical user interfaces, is needed so that a wider and more relevant user base is
realized. This also has the potential to improve the efficiency of AAL
middleware platforms in meeting the needs of a wider scope of the AAL field.
Based on the findings from this project, the next steps should include the implementation of the
platform-specific guidelines in their respective platforms over upcoming versions. To
realize the platform-independent guidelines, developers of middleware platforms and
AAL-specific applications should collaborate to salvage existing AAL applications and construct
generic modules that deal with a variety of potential use-cases and are usable by a wider
group of stakeholders.
References
[1] N. M. Ries, “Canadian Institutes of Health Research–Institute of Aging: Profile: Ethics, Health
Research, and Canada’s Aging Population,” Can. J. Aging / La Rev. Can. du Vieil., vol. 29, no. 4,
pp. 577–580.
[2] P. Rashidi and A. Mihailidis, “A Survey on Ambient-Assisted Living Tools for Older Adults,”
IEEE J. Biomed. Heal. Informatics, vol. 17, no. 3, pp. 579–590, May 2013.
[3] F. Sadri, “Ambient intelligence,” ACM Comput. Surv., vol. 43, no. 4, pp. 1–66, Oct. 2011.
[4] M.-R. Tazari, F. Furfari, Á. Fides Valero, S. Hanke, O. Höftberger, D. Kehagias, M. Mosmondor,
R. Wichert, and P. Wolf, “The universAAL Reference Model for AAL,” Handb. Ambient Assist. Living, vol. 1,
pp. 610–625, 2012.
[5] F. Palumbo, P. Barsocchi, F. Furfari, and E. Ferro, “AAL Middleware Infrastructure for Green
Bed Activity Monitoring,” J. Sensors, vol. 2013, pp. 1–15, 2013.
[6] N. Georgantas, S. Ben Mokhtar, Y. Bromberg, V. Issarny, J. Kantarovitch, A. Gérodolle, and R.
Mevissen, “The Amigo Service Architecture for the Open Networked Home Environment,” pp. 5–
6, 2005.
[7] P. Wolf, A. Schmidt, and M. Klein, “Applying Semantic Technologies for Context-Aware AAL
Services: What we can learn from SOPRANO,” GI Jahrestagung, vol. 154, pp. 3077–3090, 2009.
[8] P. Wolf, A. Schmidt, J. P. Otte, M. Klein, S. Rollwage, B. König-Ries, T. Dettborn, and A.
Gabdulkhakova, “living (AAL),” no. March, pp. 1–5, 2010.
[9] M. Tazari, F. Furfari, and J. L. Ramos, “The PERSONA Service Platform for AAL,” pp. 1171–
1199, 2010.
[10] E. Stav, S. Walderhaug, M. Mikalsen, S. Hanke, and I. Benc, “Development and evaluation of
SOA-based AAL services in real-life environments: a case study and lessons learned.,” Int. J.
Med. Inform., vol. 82, no. 11, pp. e269–93, Nov. 2013.
[11] S. Hanke, C. Mayer, O. Hoeftberger, H. Boos, R. Wichert, M.-R. Tazari, P. Wolf, and F. Furfari,
“universAAL - An Open and Consolidated AAL Platform,” Ambient Assisted Living: 4. Dtsch.
AAL-Kongress, pp. 127–140, 2011.
[12] M. Memon, S. R. Wagner, C. F. Pedersen, F. H. A. Beevi, and F. O. Hansen, “Ambient assisted
living healthcare frameworks, platforms, standards, and quality attributes.,” Sensors (Basel)., vol.
14, no. 3, pp. 4312–41, Jan. 2014.
[13] S. Wagner and C. Nielsen, “OpenCare project: An open, flexible and easily extendible
infrastructure for pervasive healthcare assisted living solutions,” 2009 3rd Int. Conf. Pervasive
Comput. Technol. Healthc., 2009.
[14] P. Abril-Jiménez, “Design Framework for Ambient Assisted Living Platforms,” Univers. Access
…, pp. 139–142, 2009.
[15] T. Fuxreiter, C. Mayer, S. Hanke, M. Gira, M. Sili, and J. Kropf, “A modular platform for event
recognition in smart homes,” 12th IEEE Int. Conf. e-Health Networking, Appl. Serv. Heal. 2010,
2010.
[16] F. Palumbo, J. Ullberg, A. Stimec, F. Furfari, L. Karlsson, and S. Coradeschi, “Sensor network
infrastructure for a home care monitoring system.,” Sensors (Basel)., vol. 14, no. 3, pp. 3833–60,
Jan. 2014.
[17] P. O. Antonino, D. Schneider, C. Hofmann, and E. Y. Nakagawa, “Evaluation of AAL
Platforms According to Architecture-Based Quality Attributes,” pp. 264–274.
[18] M. Eisenhauer, P. Rosengren, and P. Antolin, “A development platform for integrating wireless
devices and sensors into Ambient Intelligence systems,” 2009 6th IEEE Annu. Commun. Soc.
Conf. Sensor, Mesh Ad Hoc Commun. Networks Work. SECON Work. 2009, vol. 00, no. c, pp. 1–
3, 2009.
[19] D. Salvi, M. Mikalsen, and S. Walderhaug, “universAAL Technical Validation Results,” no.
247950, 2013.
[20] S. Hanke, C. Mayer, O. Hoeftberger, H. Boos, R. Wichert, M. Tazari, P. Wolf, and F. Furfari,
“universAAL – An Open and Consolidated AAL Platform.”
[21] S. Czarnuch and A. Mihailidis, “The design of intelligent in-home assistive technologies:
Assessing the needs of older adults with dementia and their caregivers,” Gerontechnology, vol. 10,
no. 3, 2011.
[22] M. Begum, R. Wang, R. Huq, and A. Mihailidis, “Performance of daily activities by older adults
with dementia: The role of an assistive robot,” IEEE Int. Conf. Rehabil. Robot., 2013.
[23] “HOMER Manual.”
[24] J. Sauro and E. Kindlund, “A method to standardize usability metrics into a single score,” Proc.
SIGCHI Conf. Hum. factors Comput. Syst. (CHI ’05), pp. 401–409, 2005.
[25] J. Brooke, “SUS: A ‘quick and dirty’ usability scale,” in Usability Evaluation in Industry,
P. W. Jordan, B. Thomas, B. A. Weerdmeester, and I. L. McClelland, Eds., 1996, pp. 189–194.
[26] J. Sauro and E. Kindlund, “Making Sense of Usability Metrics: Usability and Six Sigma,” Proc.
14th Annu. Conf. Usability Prof. Assoc., pp. 1–10, 2005.
[27] A. Mihailidis, J. N. Boger, T. Craig, and J. Hoey, “The COACH prompting system to assist older
adults with dementia through handwashing: An efficacy study,” vol. 18, pp. 1–18, 2008.
Appendices
20 Appendix A: Developer’s Questionnaire
Q1) Which sector are you currently working in?
Answer Options | Response Percent | Response Count
Academia 70.4% 38
Industry 7.4% 4
Academia and industry 20.4% 11
Other (please specify) 1.9% 1
answered question 54
skipped question 0
Number | Response Date | Other (please specify)
1 | Oct 16, 2014 3:09 PM | non-university research
Q2) On which continent do you currently work?
Answer Options | Response Percent | Response Count
North America 11.1% 6
South America 0.0% 0
Europe 79.6% 43
Asia 7.4% 4
Africa 0.0% 0
Australia 1.9% 1
answered question 54
skipped question 0
Q3) Education: What is the highest degree or level of school you have completed? If currently enrolled, highest degree received.
Answer Options | Response Percent | Response Count
Some college credit, no degree 0.0% 0
Trade/technical/vocational training 0.0% 0
Associate degree 0.0% 0
Bachelor’s degree 14.8% 8
Master’s degree 44.4% 24
Doctorate degree 38.9% 21
Other (please specify) 1.9% 1
answered question 54
skipped question 0
Number | Response Date | Other (please specify)
1 | Oct 23, 2014 11:57 AM | Habilitation
Q4) You have developed an application(s) that can relate to the concepts of ambient assisted living (AAL). Definition: AAL are technologies that promote independence and social wellbeing for the elderly or disabled within their preferred environment.
Answer Options: Strongly Disagree | Disagree | Neither Disagree Nor Agree | Agree | Strongly Agree | Rating Average | Response Count
1 1 2 18 32 4.46 54
answered question 54
skipped question 0
Q5) Check the AAL application areas that you have experienced working with.
Answer Options | Response Percent | Response Count
Continuous health monitoring 69.4% 34
Emergency detection 55.1% 27
Cognitive orthotics 16.3% 8
Therapy 20.4% 10
Emotional wellbeing 22.4% 11
Pervasive application 57.1% 28
Other (please specify) 8.2% 4
answered question 49
skipped question 5
Number | Response Date | Other (please specify)
1 | Oct 23, 2014 12:50 PM | prevention of frailty
2 | Oct 23, 2014 12:41 PM | We designed the communication middleware for a AAL platform
3 | Oct 23, 2014 12:35 PM | design and development of middlware for AAL application
4 | Oct 17, 2014 5:31 AM | User activity classification and anomaly behaviour detection at home
Q6) Check the categories of techniques that you have applied in your applications.
Answer Options | Response Percent | Response Count
Activity recognition 79.2% 38
Context modeling 56.3% 27
Location identification 58.3% 28
Planning 22.9% 11
Anomaly detection 60.4% 29
Other (please specify) 8.3% 4
answered question 48
skipped question 6
Number | Response Date | Other (please specify)
1 | Oct 23, 2014 12:50 PM | pattern recognition and automtic feedback
2 | Oct 23, 2014 12:41 PM | We designed the communication middleware for a AAL platform
3 | Oct 17, 2014 5:31 AM | social activity analysis
4 | Oct 16, 2014 4:08 PM | Interaction
Q7) Please check the tools and technologies that relate to your application(s).
Answer Options | Response Percent | Response Count
Smart homes 89.6% 43
Assistive robots 16.7% 8
E-textiles 6.3% 3
Mobile/wearable sensors 60.4% 29
Other (please specify) 6.3% 3
answered question 48
skipped question 6
Number | Response Date | Other (please specify)
1 | Oct 24, 2014 9:07 AM | Cameras
2 | Oct 17, 2014 5:31 AM | Bluetooth health wearables, SOA Architecture, social networks API, SOAP/REST Protocols
3 | Oct 16, 2014 3:10 PM | smart TV, agent technology, argumentation technology
Q8) How many years have you been developing your application(s)?
Answer Options | Response Percent | Response Count
Not applicable 8.3% 4
0-1 4.2% 2
2-3 31.3% 15
4-6 39.6% 19
7+ 16.7% 8
answered question 48
skipped question 6
Q9) You have implemented or attempted to implement an application(s) on a middleware platform.
Answer Options: Strongly Disagree | Disagree | Neither Disagree Nor Agree | Agree | Strongly Agree | Rating Average | Response Count
2 6 5 17 19 3.92 49
answered question 49
skipped question 5
Q10) Please indicate all the AAL-related middleware platform(s) that you have worked with.
Answer Options | Response Percent | Response Count
AMIGO 6.7% 2
GiraffPlus 13.3% 4
HOMER 13.3% 4
Hydra 6.7% 2
MPOWER 6.7% 2
OASIS 6.7% 2
OpenAAL 6.7% 2
PERSONA 23.3% 7
SOPRANO 10.0% 3
UniversAAL 70.0% 21
Other (please specify) 33.3% 10
answered question 30
skipped question 24
Number | Response Date | Other (please specify)
1 | Oct 25, 2014 7:33 AM | indigeneously developed system
2 | Oct 23, 2014 12:19 PM | sense-os.nl
3 | Oct 23, 2014 12:02 PM | Proprietary
4 | Oct 18, 2014 10:59 AM | OSAMI
5 | Oct 17, 2014 8:26 AM | my own, it's not really AAL, but similar, see http://oa.upm.es/25600/
6 | Oct 17, 2014 5:38 AM | SAAPHO Middleware
7 | Oct 16, 2014 3:17 PM | Java Agent Development Framework (JADE), Apache Tomcat web server, MySQL database
8 | Oct 16, 2014 3:10 PM | NetCarity
9 | Oct 16, 2014 1:20 PM | VISAGE, SONOPA, MEDIATE, LILY
10 | Oct 10, 2014 4:50 AM | built in-house
Q11) Rank the quality attributes of middleware platform from most important (1) to the least important (5).
- Reliability: the ability to deliver an acceptable level of performance after partial system failure.
- Security: the ability to safeguard the platform and protect the user's sensitive and confidential data.
- Maintainability: the level of difficulty needed to perform installation or changes to the platform's components by non-technical and technical personnel.
- Efficiency: the platform's ability to minimize resource consumption and maximize performance simultaneously.
- Safety: mechanisms that enable the system for safety-critical applications and safety patterns that ensure the system is always in a safe state (fault tolerance).
Answer Options | 1 | 2 | 3 | 4 | 5 | Rating Average | Response Count
Reliability 16 6 4 3 1 1.90 30
Security 0 8 7 9 6 3.43 30
Maintainability 6 4 7 5 8 3.17 30
Efficiency 2 7 4 6 11 3.57 30
Safety 6 5 8 7 4 2.93 30
answered question 30
skipped question 24
Q12) The middleware platform provided a unique recoverability feature that allowed the system to recover from failures of its components or nodes (reliability).
Answer Options: Strongly Disagree | Disagree | Neither Disagree Nor Agree | Agree | Strongly Agree | Rating Average | Response Count
1 4 12 9 3 3.31 29
answered question 29
skipped question 25
Q13) How would you rate the feature(s) relating to reliability that are specifically provided by the middleware platform(s)?
Answer Options: Poor | Inadequate | Satisfactory | Acceptable | Excellent | N/A | Rating Average | Response Count
0 6 5 14 3 1 3.50 29
answered question 29
skipped question 25
Q14) (Optional) Provide a description of ways the middleware platform(s) met or not met your expectations regarding reliability.
Response Count: 3
answered question 3
skipped question 51
Number | Response Date | Response Text
1 | Oct 18, 2014 10:59 AM | During field-studies, the software operated continuously 24/7 for about 6 months, without crash, memory fragmentation or other need to re-boot the machine.
2 | Oct 16, 2014 3:17 PM | Through persisting the personal assistant agent state with the JADE Persistence add-on.
3 | Oct 16, 2014 3:10 PM | fall Detection detected correctly
Q15) The middleware platform(s) provided an encryption mechanism for end-to-end message security for Web services based environments and/or used digital certificates for simplified access control (security).
Answer Options: Strongly Disagree | Disagree | Neither Disagree Nor Agree | Agree | Strongly Agree | Rating Average | Response Count
3 1 4 19 2 3.55 29
answered question 29
skipped question 25
Q16) The middleware platform(s) included user roles and security profile definition that consist of interfaces to the core management services (security).
Answer Options: Strongly Disagree | Disagree | Neither Disagree Nor Agree | Agree | Strongly Agree | Rating Average | Response Count
2 4 2 18 3 3.55 29
answered question 29
skipped question 25
Q17) How would you rate the security feature(s) specifically provided by the middleware platform(s)?
Answer Options: Poor | Inadequate | Satisfactory | Acceptable | Excellent | N/A | Rating Average | Response Count
2 7 4 12 3 1 3.25 29
answered question 29
skipped question 25
Q18) (Optional) Provide a description of ways the middleware platform(s) met or not met your expectations regarding security.
Response Count: 1
answered question 1
skipped question 53
Number | Response Date | Response Text
1 | Oct 18, 2014 10:59 AM | Secure functionality for remote maintenance and backup; RBAC for access to patient data; however, in our field studies not a single one of our more then 30 test patients had an internet access at home anyway, so the systems were operating standalone.
Q19) How would you rate the overall difficulty in performing changes to the system components? (Changeability)
Answer Options: Easy | Average | Challenging | Difficult | Extreme | N/A | Rating Average | Response Count
2 11 6 7 3 0 2.93 29
answered question 29
skipped question 25
Q20) How would you rate the difficulty for the person with technical knowledge of the platform to add new services, devices, or components? (Installibility)
Answer Options: Easy | Average | Challenging | Difficult | Extreme | N/A | Rating Average | Response Count
5 9 10 4 1 0 2.55 29
answered question 29
skipped question 25
Q21) How would you rate the difficulty for the person without technical knowledge of the platform to add new services, devices, or components? (Installibility)
Answer Options: Easy | Average | Challenging | Difficult | Extreme | N/A | Rating Average | Response Count
1 3 4 6 15 0 4.07 29
answered question 29
skipped question 25
Q22) How would you rate the middleware platform features or functions that specifically relate to maintainability? Maintainability: the level of difficulty needed to perform installation or changes to the platform's components by non-technical and technical personnel.
Answer Options: Poor | Inadequate | Satisfactory | Acceptable | Excellent | N/A | Rating Average | Response Count
1 11 10 4 3 0 2.90 29
answered question 29
skipped question 25
Q23) (Optional) Provide a description of ways the middleware platform(s) met or not met your expectations regarding changeability and installibility (maintainability).
Response Count: 3
answered question 3
skipped question 51
Number | Response Date | Response Text
1 | Oct 23, 2014 12:19 PM | Most platforms focus too much on explicit ontological type safety. This doesn't adhere to the robustness principle and limits maintenance flexibility.
2 | Oct 20, 2014 11:02 AM | universAAL is based on (but not bounded to) OSGi, Knowing OSGi and how to manupulate components is usefeull in universAA. universAAL provided a deploy Manager, that made easy deployment of services even from a remote operations center or from the web (uStore).
3 | Oct 18, 2014 10:59 AM | OSGi based platform, allows for an installation or upgrade of individual services at runtime, however, no high-level tools for this, such as a GUI, were available
Q24) How difficult was the integration of devices with limited resources on the middleware platform? (efficiency)
Answer Options: Easy | Average | Challenging | Difficult | Extreme | N/A | Rating Average | Response Count
3 7 9 4 2 4 2.80 29
answered question 29
skipped question 25
Q25) How would you rate the resource efficiency (e.g. CPU, memory usage) of the middleware platform? (efficiency)
Answer Options: Poor | Inadequate | Satisfactory | Acceptable | Excellent | N/A | Rating Average | Response Count
1 2 9 13 3 1 3.54 29
answered question 29
skipped question 25
Q26) How would you rate the efficiency of the middleware platform in handling the communication overhead? (efficiency)
Answer Options: Poor | Inadequate | Satisfactory | Acceptable | Excellent | N/A | Rating Average | Response Count
2 3 7 13 3 1 3.43 29
answered question 29
skipped question 25
Q27) How would you rate the overall efficiency of the middleware platform(s)?
Answer Options: Poor | Inadequate | Satisfactory | Acceptable | Excellent | N/A | Rating Average | Response Count
1 3 9 13 2 1 3.43 29
answered question 29
skipped question 25
Q28) (Optional) Provide a description of ways the middleware platform(s) met or not met your expectations regarding efficiency.
Response Count: 1
answered question 1
skipped question 53
Number | Response Date | Response Text
1 | Oct 18, 2014 10:59 AM | Platform was designed for, and operated on a small PC. In theory, OSGi should run on embedded devices with much less resources, but we never seriously needed or tried that.
Q29) How would you rate the mechanisms of the middleware platform that deal with single points of failure? Note: A single point of failure refers to a weak point in a system that can hinder its entire functionality (safety).
Answer Options: Poor | Inadequate | Satisfactory | Acceptable | Excellent | N/A | Rating Average | Response Count
1 6 12 5 1 3 2.96 28
answered question 28
skipped question 26
Q30) How would you rate the safety pattern usage within the middleware platform? Note: Safety patterns are mechanisms in the system that always ensure its safe state (e.g. redundancy, fault tolerance).
Answer Options: Poor | Inadequate | Satisfactory | Acceptable | Excellent | N/A | Rating Average | Response Count
0 9 9 6 1 2 2.96 27
answered question 27
skipped question 27
Q31) How would you rate the overall safety mechanisms of the middleware platform(s)?
Answer Options: Poor | Inadequate | Satisfactory | Acceptable | Excellent | N/A | Rating Average | Response Count
0 9 9 7 1 2 3.00 28
answered question 28
skipped question 26
Q32) (Optional) Provide a description of ways the middleware platform(s) met or not met your expectations regarding safety.
Response Count: 2
answered question 2
skipped question 52
Number | Response Date | Response Text
1 | Oct 20, 2014 11:02 AM | the distribution of the middleware allowed for replication and redundancy of critical components. Other specific features like reliability monitor are implemented.
2 | Oct 18, 2014 10:59 AM | The platform itself was a single point of failure. Redundant hardware would have been too expensive.
Q33) How would you rate the mechanisms responsible for logic and reasoning (artificial intelligence) of the middleware platform?
Answer Options: Poor | Inadequate | Satisfactory | Acceptable | Excellent | N/A | Rating Average | Response Count
Responses: 4 | 5 | 4 | 9 | 4 | 2 | 3.15 | 28
answered question 28
skipped question 26
Q34) (Optional) Provide a description of ways the middleware platform(s) met or did not meet your expectations regarding logic and reasoning.
Response Count: 4
answered question 4
skipped question 50
Number | Response Date | Response Text
1 | Oct 23, 2014 10:05 PM | Since openAAL and universAAL are ontology-based middleware I would like to have a reasoner connected to the built-in Sesame triple store - but there isn't one at the moment...
2 | Oct 20, 2014 11:02 AM | universAAL is based on semantical services and events (RDF-OWL form). On top, other reasoners (even 3rd party) can be developed. It already provides 2 reasoners (Drools and SPARQL-like rules); a service orchestration is being developed to enable easy scripting and reactions.
3 | Oct 18, 2014 10:59 AM | Implemented on application level.
4 | Oct 17, 2014 8:26 AM | In my platform I used Drools (rules) and jBPM (workflows)
Q35) How would you rate the features responsible for storage or database services of the middleware platform?
Answer Options: Poor | Inadequate | Satisfactory | Acceptable | Excellent | N/A | Rating Average | Response Count
Responses: 0 | 0 | 8 | 9 | 6 | 5 | 3.91 | 28
answered question 28
skipped question 26
Q36) (Optional) Provide a description of ways the middleware platform(s) met or did not meet your expectations regarding storage/database services.
Response Count: 3
answered question 3
skipped question 51
Number | Response Date | Response Text
1 | Oct 20, 2014 11:02 AM | The main storing component in universAAL is called the Context History Entrepot; it stores contextual data (events) and it is available to store other things like profiles. It uses a semantical database (Sesame) so it is easy for the semantic-based system to find relations and particular objects.
2 | Oct 18, 2014 10:59 AM | SQL services provided as part of the platform.
3 | Oct 17, 2014 8:26 AM | In my platform I use HSQL and a custom ORM
Q37) How would you rate the hardware communication or connection protocol of the middleware platform?
Answer Options: Poor | Inadequate | Satisfactory | Acceptable | Excellent | N/A | Rating Average | Response Count
Responses: 1 | 3 | 8 | 9 | 5 | 2 | 3.54 | 28
answered question 28
skipped question 26
Q38) (Optional) Provide a description of ways the middleware platform(s) met or did not meet your expectations regarding hardware communication or connection protocol.
Response Count: 2
answered question 2
skipped question 52
Number | Response Date | Response Text
1 | Oct 20, 2014 11:02 AM | There are several layers of communication in universAAL: first is the middleware layer, which enables discovery and distribution within the AALSpace. Then there is Local Device Detection and Interaction, which implements specific device protocols to communicate with. Lastly, the platform offers Remote Interoperability features to enable communication outside the AALSpace in several ways (importing/exporting web services, AALSpace2AALSpace, multi-tenancy, remote API...).
2 | Oct 18, 2014 10:59 AM | Generic API for connecting medical sensors. API for connecting to smart-home devices less powerful; other solutions (such as openHAB or FHEM) are much better here.
Q39) Which of the following aspects of the middleware platform made it difficult to use?
Answer Options | Response Percent | Response Count
Hardware communication or connection layer | 45.5% | 10
Database/storage system | 9.1% | 2
Graphical user interfaces | 27.3% | 6
Logic and reasoning (e.g. finite state machine, ontology) | 31.8% | 7
Runtime environment (e.g. prog. languages, OS) | 50.0% | 11
Compliance with standards (e.g. ISO) | 18.2% | 4
Hardware/software compatibility | 40.9% | 9
Other (please specify) | 9.1% | 2
answered question 22
skipped question 32
Number | Response Date | Other (please specify)
1 | Oct 23, 2014 10:05 PM | High learning curve because of lots of concepts and technologies
2 | Oct 16, 2014 3:10 PM | Defining thresholds in events
Q40) The documentation and developer support for the middleware platform were provided sufficiently.
Answer Options: Strongly Disagree | Disagree | Neither Disagree Nor Agree | Agree | Strongly Agree | Rating Average | Response Count
Responses: 2 | 6 | 10 | 8 | 3 | 3.14 | 29
answered question 29
skipped question 25
Q41) (Optional) Can you recommend or suggest features and functions for future middleware platforms that were not covered within this survey?
Response Count: 0
answered question 0
skipped question 54
Q42) You have not yet implemented an application on a middleware platform but may consider doing so in the future if the platform's functions and features align with your application requirements.
Answer Options: Strongly Disagree | Disagree | Neither Disagree Nor Agree | Agree | Strongly Agree | Rating Average | Response Count
Responses: 0 | 0 | 2 | 5 | 6 | 4.31 | 13
answered question 13
skipped question 41
Q43) You have general knowledge of middleware concepts and of the few middleware platforms that are currently available within the field.
Answer Options: Strongly Disagree | Disagree | Neither Disagree Nor Agree | Agree | Strongly Agree | Rating Average | Response Count
Responses: 1 | 5 | 2 | 3 | 1 | 2.83 | 12
answered question 12
skipped question 42
Q44) Please indicate the AAL-related middleware platform(s) that you are familiar with.
Answer Options | Response Percent | Response Count
AMIGO | 0.0% | 0
GiraffPlus | 16.7% | 1
HOMER | 16.7% | 1
Hydra | 0.0% | 0
MPOWER | 16.7% | 1
OASIS | 16.7% | 1
OpenAAL | 16.7% | 1
PERSONA | 33.3% | 2
SOPRANO | 16.7% | 1
universAAL | 83.3% | 5
Other (please specify) | 0.0% | 0
answered question 6
skipped question 48
Q45) You are interested in reusing existing software components of developed application(s) for novel application(s).
Answer Options: Strongly Disagree | Disagree | Neither Disagree Nor Agree | Agree | Strongly Agree | Rating Average | Response Count
Responses: 0 | 0 | 2 | 5 | 6 | 4.31 | 13
answered question 13
skipped question 41
Q46) You are interested in modifying existing software components of developed application(s) for novel application(s).
Answer Options: Strongly Disagree | Disagree | Neither Disagree Nor Agree | Agree | Strongly Agree | Rating Average | Response Count
Responses: 0 | 0 | 1 | 4 | 2 | 4.14 | 7
answered question 7
skipped question 47
99
Q47) Developing an application from a shared repository of services (or modules), each consisting of a specific functionality (i.e. sensor, algorithm, software service), would be more desirable than developing a stand-alone application.
Answer Options: Strongly Disagree | Disagree | Neither Disagree Nor Agree | Agree | Strongly Agree | Rating Average | Response Count
Responses: 0 | 0 | 2 | 5 | 6 | 4.31 | 13
answered question 13
skipped question 41
Q48) You have envisioned your application(s) working in conjunction with other AAL-related applications as part of a larger, integrated AAL network/environment.
Answer Options: Strongly Disagree | Disagree | Neither Disagree Nor Agree | Agree | Strongly Agree | Rating Average | Response Count
Responses: 0 | 0 | 2 | 5 | 6 | 4.31 | 13
answered question 13
skipped question 41
Q49) The lack of documentation or developer support was a limiting factor when considering a middleware platform.
Answer Options: Strongly Disagree | Disagree | Neither Disagree Nor Agree | Agree | Strongly Agree | Rating Average | Response Count
Responses: 1 | 1 | 5 | 6 | 0 | 3.23 | 13
answered question 13
skipped question 41
Q50) Suppose you were implementing an application on a middleware platform; rank the listed quality attributes of the platform's software architecture from MOST important (1) to LEAST important (5).
- Reliability: the ability to deliver an acceptable level of performance after partial system failure.
- Security: the ability to safeguard the platform and protect the user's sensitive and confidential data.
- Maintainability: the level of difficulty needed to perform installation or changes to the platform's components by non-technical and technical personnel.
- Efficiency: the platform's ability to minimize resource consumption and maximize performance simultaneously.
- Safety: mechanisms that enable the system for safety-critical applications and safety patterns that ensure the system is always in a safe state (fault tolerance, redundancy).
Answer Options | 1 | 2 | 3 | 4 | 5 | N/A | Rating Average | Response Count
Reliability | 6 | 1 | 5 | 0 | 0 | 1 | 1.92 | 13
Security | 2 | 4 | 3 | 2 | 1 | 1 | 2.67 | 13
Maintainability | 0 | 2 | 3 | 6 | 1 | 1 | 3.50 | 13
Efficiency | 1 | 2 | 0 | 2 | 7 | 1 | 4.00 | 13
Safety | 3 | 3 | 1 | 2 | 3 | 1 | 2.92 | 13
answered question 13
skipped question 41
Q51) How important would the functions and features of a middleware platform that relate to reliability be to your application(s)? (Reliability: the ability to deliver an acceptable level of performance after partial system failure)
Answer Options | Response Percent | Response Count
Very Important | 66.7% | 8
Important | 25.0% | 3
Moderately Important | 8.3% | 1
Of Little Importance | 0.0% | 0
Unimportant | 0.0% | 0
answered question 12
skipped question 42
Q52) (Optional) Why is reliability important or unimportant to your particular application(s)?
Response Count: 4
answered question 4
skipped question 50
Number | Response Date | Response Text
1 | Oct 23, 2014 12:00 PM | Because my application aims at almost eliminating external interventions, from operators and users, thus being almost totally autonomous.
2 | Oct 14, 2014 6:33 PM | Because the device I develop is supposed to measure vital signs of the user in a relatively short period of time (10 minutes), the system needs to function reliably to make correct vital sign measurements.
3 | Oct 14, 2014 2:31 PM | In mobility applications, if there is system failure, the user will lose their capacity to move around.
4 | Oct 10, 2014 12:51 PM | I think the use of the word reliability should be extended beyond system failure. Reliability, in the sense that the performance is consistent and repeatable, is very important to me. System failure is one relatively small possibility of a need for reliability.
Q53) How important would the functions and features of a middleware platform that relate to security be to your application(s)? (Security: the ability to safeguard the platform and protect the user's sensitive and confidential data)
Answer Options | Response Percent | Response Count
Very Important | 41.7% | 5
Important | 41.7% | 5
Moderately Important | 16.7% | 2
Of Little Importance | 0.0% | 0
Unimportant | 0.0% | 0
answered question 12
skipped question 42
Q54) (Optional) Why is security important or unimportant to your particular application(s)?
Response Count: 4
answered question 4
skipped question 50
Number | Response Date | Response Text
1 | Oct 23, 2014 12:00 PM | It's a basic key factor if you want to bring your application to the real market.
2 | Oct 14, 2014 6:33 PM | Because biometric parameters are being measured, the information must be kept confidential.
3 | Oct 14, 2014 2:31 PM | Patient data should be kept confidential, if a system is shared between multiple users.
4 | Oct 10, 2014 12:51 PM | In general AAL applications security is a hot topic. However, for my work security is not overly relevant.
Q55) How important would the functions and features of a middleware platform that relate to maintainability be to your application(s)? (Maintainability: the level of difficulty needed to perform installation or changes to the platform's components by non-technical and technical personnel)
Answer Options | Response Percent | Response Count
Very Important | 16.7% | 2
Important | 66.7% | 8
Moderately Important | 8.3% | 1
Of Little Importance | 8.3% | 1
Unimportant | 0.0% | 0
answered question 12
skipped question 42
Q56) (Optional) Why is maintainability important or unimportant to your particular application(s)?
Response Count: 4
answered question 4
skipped question 50
Number | Response Date | Response Text
1 | Oct 23, 2014 12:00 PM | Because of the need to simplify as much as possible the problems related to system and software updates.
2 | Oct 14, 2014 6:33 PM | Connecting many devices together can cause a lot of trouble when one is debugging the integrated system. It is very important that there is an effective yet simple way to maintain the system.
3 | Oct 14, 2014 2:31 PM | This is helpful for the burden of the technology. Ideally these technologies should not be difficult to maintain, otherwise there is risk of abandonment or frustration.
4 | Oct 10, 2014 12:51 PM | At this point in the life-cycle of AAL, maintainability is very important, largely because new applications for the technology generally create substantial requirements for change in the underlying technology.
Q57) How important would the functions and features of a middleware platform that relate to efficiency be to your application(s)? (Efficiency: the platform's ability to minimize resource consumption and maximize performance simultaneously)
Answer Options | Response Percent | Response Count
Very Important | 25.0% | 3
Important | 16.7% | 2
Moderately Important | 41.7% | 5
Of Little Importance | 16.7% | 2
Unimportant | 0.0% | 0
answered question 12
skipped question 42
Q58) (Optional) Why is efficiency important or unimportant to your particular application(s)?
Response Count: 2
answered question 2
skipped question 52
Number | Response Date | Response Text
1 | Oct 23, 2014 12:00 PM | It's not the first target of the application being developed.
2 | Oct 10, 2014 12:51 PM | Resource consumption is minimal for my applications, but maximized performance is very important. See discussion on reliability above for a better idea of my interpretation of the word efficient (or effective).
Q59) How important would the functions and features of a middleware platform that relate to safety be to your application(s)? (Safety: mechanisms that enable the system for safety-critical applications and safety patterns that ensure the system is always in a safe state)
Answer Options | Response Percent | Response Count
Very Important | 50.0% | 6
Important | 25.0% | 3
Moderately Important | 25.0% | 3
Of Little Importance | 0.0% | 0
Unimportant | 0.0% | 0
answered question 12
skipped question 42
Q60) (Optional) Why is safety important or unimportant to your particular application(s)?
Response Count: 3
answered question 3
skipped question 51
Number | Response Date | Response Text
1 | Oct 14, 2014 6:33 PM | Electronic equipment are used to measure the vital signs; therefore, maintaining safety of the device is very important.
2 | Oct 14, 2014 2:31 PM | Patients will be operating the devices. The middleware should be able to regulate the interaction of devices safely.
3 | Oct 10, 2014 12:51 PM | Application specific, but not overly important to my work.
Q61) Which of the following aspects of the middleware platform may prevent you from adopting an available platform for your application(s)?
Answer Options | Response Percent | Response Count
Hardware connection or communication protocol | 41.7% | 5
Database/storage system | 16.7% | 2
Graphical user interfaces | 16.7% | 2
Logic and reasoning (e.g. finite state machine, ontology) | 33.3% | 4
Runtime environment (e.g. prog. languages, OS) | 41.7% | 5
Hardware/software compatibility | 58.3% | 7
Compliance with standards | 58.3% | 7
Other (please specify) | 25.0% | 3
answered question 12
skipped question 42
Number | Response Date | Other (please specify)
1 | Oct 14, 2014 6:05 PM | Don't know - not familiar with the options
2 | Oct 14, 2014 2:31 PM | Time to learn how to use middleware platform
3 | Oct 10, 2014 12:51 PM | Technical capabilities, efficacy/effectiveness
Q62) (Optional) Can you recommend or suggest features and functions for future middleware platforms that were not covered within this survey?
Response Count: 1
answered question 1
skipped question 53
Number | Response Date | Response Text
1 | Oct 14, 2014 6:33 PM | In the study that is currently happening, it has emerged that data synchronization of all the devices collecting data is extremely important yet challenging to achieve. Even if each data is recorded with a timestamp, each device will have its own processing time and the data collected by each device (e.g. bed, chair, tile, sofa, etc.) contain small offsets from one another. We are correcting this issue by connecting a wire from one device to another, but obviously this is very limiting in a real-life situation. A method to synchronize each device with respect to one another would be a good feature to have.
Q63) Which of the following choices best describes your reasoning for not considering a middleware platform for your application(s)?
Answer Options | Response Percent | Response Count
Lack of general or technical knowledge of middleware concepts or unawareness of the currently available middleware platforms | 0.0% | 0
Insufficient resources (i.e. time, money, and/or effort) for the migration or adoption process to a middleware platform | 0.0% | 0
Limiting developer support or documentation provisions from middleware platform developers | 0.0% | 0
Inability of the currently available middleware platforms to meet your demands with respect to specific functions or features (e.g. logic and reasoning, runtime environment, compatibility issues, etc.) | 0.0% | 0
Deficiency in the quality attributes of the middleware platforms' software architectures (i.e. reliability, security, maintainability, efficiency, & safety) | 0.0% | 0
Other (please specify) | 0.0% | 0
answered question 0
skipped question 54
Q64) (Optional) Can you recommend or suggest features and functions for future middleware platforms that were not covered within this survey?
Response Count: 0
answered question 0
skipped question 54
21 Appendix B: Test-case Scenario State Machine
22 Appendix C: HOMER Usability Test Pre-Assessment & Evaluation Criteria
HOMER Usability Test Pre-assessment
(Scale: Strongly Disagree - Not sure - Strongly Agree)
1. You have working knowledge of state machines or state timing diagrams.
2. Your educational background involves knowledge of technical concepts (e.g. computer science, engineering).
3. You have been directly involved in design, development, or testing of assistive technologies in the past.
4. You have had experience working with middleware platforms in the past.
Task 1 - Analyzing the HOMER application for detecting errors. Answer
How would you describe how difficult or easy it was to complete this task? 1 = Very Difficult, 5 = Very Easy
How satisfied are you with using this application to complete this task? 1 = Very Unsatisfied, 5 = Very Satisfied
How would you rate the amount of time it took to complete this task? 1 = Too Much Time, 5 = Very little time
Task 2 - Performing the modification on HOMER application. Answer
How would you describe how difficult or easy it was to complete this task? 1 = Very Difficult, 5 = Very Easy
How satisfied are you with using this application to complete this task? 1 = Very Unsatisfied, 5 = Very Satisfied
How would you rate the amount of time it took to complete this task? 1 = Too Much Time, 5 = Very little time
Task 3 - Testing the modification made to the HOMER application Answer
How would you describe how difficult or easy it was to complete this task? 1 = Very Difficult, 5 = Very Easy
How satisfied are you with using this application to complete this task? 1 = Very Unsatisfied, 5 = Very Satisfied
How would you rate the amount of time it took to complete this task? 1 = Too Much Time, 5 = Very little time
Task 4 - Installing new components to the HOMER applications. Answer
How would you describe how difficult or easy it was to complete this task? 1 = Very Difficult, 5 = Very Easy
How satisfied are you with using this application to complete this task? 1 = Very Unsatisfied, 5 = Very Satisfied
How would you rate the amount of time it took to complete this task? 1 = Too Much Time, 5 = Very little time
System Usability Scale
(Scale: Strongly Disagree - Not sure - Strongly Agree)
1. I think that I would like to use this system frequently.
2. I found the system unnecessarily complex.
3. I thought the system was easy to use.
4. I think that I would need the support of a technical person to be able to use this system.
5. I found the various functions in this system well integrated.
6. I thought there was too much inconsistency in this system.
7. I would imagine that most people would learn to use this system very quickly.
8. I found the system very cumbersome to use.
9. I felt confident using the system.
10. I need to learn a lot of things before I could get going with this system.