
Project co-funded by the European Commission within the ICT Policy Support Programme

Key Information from the DoW

Due Date 15-February-2015

Type Report

Security Public

Description:

The development of ReAAL Multidimensional Evaluation Framework will build on the requirements defined in WP1, and will start in M5. The development takes an inclusive approach, in which a wide range of partners contributes with their specific expertise. These contributions will be merged into a consolidated framework to be released the first time in M12. The framework will be shared with the consortium through the Knowledge Portal.

Lead Editor: Marleen de Mul (EUR) Internal Reviewer: Ma Pilar Sala Soriano (UPV)


Versioning and contribution history

Version | Date | Author | Partner | Description
0.1 | 22-March-2014 | Marleen de Mul | EUR | First changes after Rotterdam meeting
0.2 | 28-July-2014 | Marleen de Mul | EUR | Showcases first version
0.3 | 12-Nov-2014 | Joris van de Klundert | EUR | Chapter value network and networking externalities
0.4 | 2-Jan-2015 | Marleen de Mul | EUR | Improvements to conceptual model and indicator tables
0.5 | 19-Jan-2015 | Marleen de Mul | EUR | Improvements to conceptual model and indicator tables
0.6 | 31-Jan-2015 | Marleen de Mul, Alejandro Medrano | EUR | Showcases second version, revised structure, revision of evaluation design; ready for interim review
0.7 | 19-Feb-2015 | Marleen de Mul, Alejandro Medrano, Alessio Fioravanti | EUR, UPM | Improvements to uAAL and showcase description, improvements to indicator tables
0.8 | 23-Feb-2015 | Marleen de Mul, Alessio Fioravanti | EUR, UPM | Improvements to indicator tables
0.9 | 11-March-2015 | Marleen de Mul, Alejandro Medrano, Alessio Fioravanti | EUR, UPM | Version for internal review
1.0 | 30-March-2015 | Marleen de Mul | EUR | Reviewed by Ma Pilar Sala Soriano

Contributions

The following people contributed to the discussions, writing and reviewing of this report:

Name Organisation

Marleen de Mul (task leader) EUR

Joris van de Klundert EUR

Marc Koopmanschap EUR

Terese Otte-Trojel EUR

Lonneke Weierink EUR

Kirsten van der Kuijp EUR

Dario Salvi UPM

Juan Bautista Montalvá UPM

Alejandro Medrano UPM

Alessio Fioravanti UPM

Henk Herman Nap SmH


Andrea Caroppo CNR

Giovanni Diraco CNR

Angeles Mata TEA

Florian Visser RNT

Álvaro Fides Valero UPV

Juan Carlos Martinez UPV

Ma Pilar Sala Soriano UPV

Jordi Piera Jiménez (reviewer version D5.2a) BSA

Helmi ben Hmida Fh-IGD

Saied Tazari Fh-IGD

Statement of originality:

This deliverable contains original unpublished work except where clearly indicated otherwise. Acknowledgement of previously published material and of the work of others has been made through appropriate citation, quotation or both.


Executive Summary

The goal of the evaluation is to answer the following research question in the most objective and reliable way: "Will open platforms be able to generate the AAL market breakthrough?"

This deliverable presents the final release of the evaluation framework for ReAAL that has been designed for this purpose: the ReAAL Open Platform Ecosystem Assessment (OPEA) framework. The framework consists of the OPEA conceptual model, the OPEA indicator model and the OPEA evaluation strategy. For this final release, the first release of the deliverable (March 2014) was extended and thoroughly revised by the evaluation team of ReAAL. Originally, three versions of this deliverable were foreseen; however, due to delays in the project there was no benefit in an intermediate release of the framework (version b).

To construct the OPEA-framework, we used previous deliverables of ReAAL, complemented with a literature search, expert interviews and validation sessions.

The OPEA conceptual model (see paragraph 4.4) is the theoretical base for the evaluation. It originates from a general model for telemedicine assessment (the MAST model of Kidholm et al.), but it was thoroughly revised to make it applicable to the AAL domain and the context of open platforms. Compared to the MAST model, the technical and economic domains are elaborated in more detail. The model covers the following domains: Assistance problem & characteristics of the application and platform, Technical aspects, User perceptions, Outcomes, Economic aspects, Organisational aspects, Contextual aspects, and Showcases. The conceptual model was further operationalized using DeLone and McLean's Information Systems Success (ISS) model, which posits causal relationships between the integral quality of an information system, its use, and its experienced benefits. The ISS model provided us with a set of concepts that we translated into relevant indicators for the evaluation of the universAAL platform.

The OPEA indicator model (see paragraph 5.1) is a three-dimensional model that assisted us in the construction of relevant indicators for the project. The first axis depicts the value network: the AAL platform provider, the AAL application provider, the health or social service provider, the informal carers, assisted persons and society. All stakeholders are relevant for the evaluation of the universAAL (uAAL¹) ecosystem-in-use, and they have their own value objectives for deploying AAL services on an open platform. The second axis marks the assessment domains of the evaluation: assistance problem and characteristics of the open platform & applications, technical aspects, user perceptions, outcomes, economic aspects, organizational aspects, and contextual aspects. The third axis relates to the three levels of assessing the AAL ecosystem: the platform, application, and service level.

The value objectives of axis 1, the domains of axis 2 and the levels of axis 3 are joined in a list of indicators, which are described in paragraph 5.4 and, in more depth, in 'Appendix A. Detailed indicator list'.

The indicators were developed through a process of literature review (e.g. the MAST model itself, the ISS model and models for technology acceptance), analysis of the pilot concepts, and discussions within the evaluation team and between the ReAAL partners.

¹ universAAL and uAAL are both used in this report.


The OPEA evaluation design translates the conceptual model and indicator model into the actual evaluation strategy; it can be found in Chapter 6. The evaluation strategy takes a double approach: the evaluation of the pilots and the evaluation of the showcases that demonstrate the value of open platforms. Each pilot is involved in one or more showcases. The pilot evaluation consists of two phases: phase 1 evaluates the universAALisation and testing; phase 2 evaluates deployment and operation.

The data collected in the pilot and showcase evaluations is combined and complemented in the final step of the evaluation: the ReAAL impact evaluation. This final step is meant to validate ReAAL's results (our findings about the value of open platforms) and to draw scenarios about the impact on the AAL market beyond the scope of the project. In this step we specifically look at network externalities.

The pilot, showcase and impact evaluations all use multimethod designs with qualitative and quantitative data. The pilot evaluation has a before-after design. The following data collection tools are used: questionnaires, focus group interviews, individual interviews, templates, and blogs. In addition, materials (reports) from other work packages are used as a source, for example the test reports of WP3 and the operation reports of WP4. The showcase evaluation is designed as a technical demonstration and a value assessment from the perspective of different stakeholders. The ReAAL impact evaluation uses all previously collected data, and adds to this dedicated focus groups of the ReAAL consortium, questionnaires for stakeholders outside the consortium (e.g. the associated vendors) and discussions with experts.

This deliverable reports on the conceptual work, the indicators that were developed, the overall design of evaluation activities and the responsibilities of evaluation partners and pilots.


Table of Contents

1. About This Document .................................................................................... 11

1.1. Deliverable context .................................................................................... 11

1.2. Version-specific notes ............................................................................... 13

2. Objectives and methodology ......................................................................... 14

2.1. Methodology to construct the framework ................................................ 14

2.1.1. Pilot requirements 15

2.1.2. Literature search 15

2.1.3. Expert input & validation of indicator framework 16

2.1.4. Development of showcases 17

2.1.5. Evaluation strategy 17

2.1.6. Relationship to other evaluation activities in Europe 17

2.2. General approach to the evaluation ......................................................... 18

2.2.1. Mixed method design 18

2.2.2. Two stages of evaluation 19

2.2.3. Summary of the OPEA evaluation framework approach 20

3. The value of AAL and open platforms for AAL ............................................ 21

3.1. The AAL domain ........................................................................................ 21

3.2. The value of AAL ....................................................................................... 22

3.3. Open platforms for AAL ............................................................................ 23

3.4. The open platform universAAL ................................................................. 24

3.5. Showcases for the value of the universAAL platform ............................. 29

3.5.1. Resource Sharing 30

3.5.2. Plug and Play 30

3.5.3. Services Integration 32

3.5.4. Evolution 34

3.5.5. Advanced Distribution 35

3.5.6. Advanced User Interaction 36

3.5.7. Personalized Content Push 37

3.5.8. Ambient Intelligence 38

3.5.9. Scalability 39

3.5.10. Service Transferability 40

3.5.11. Integration with legacy systems 41

3.5.12. Security & Privacy 42

3.5.13. Administration 43

3.5.14. Enhanced Market communication and distribution 43

4. OPEA conceptual model ................................................................................ 46

4.1. The Ecosystem as a Value Network ......................................................... 46

4.1.1. Value 46


4.1.2. The Network of Stakeholders involved in Creating and Consuming Value enabled by ReAAL 47

4.1.3. Viewing the Ecosystem as three Partial Value Networks 49

4.1.4. Financial Value and Network externalities of open platforms 50

4.2. Application of a general assessment model to an AAL ecosystem ....... 52

4.2.1. Background for the assessment 54

4.2.2. Assessment domains 54

4.3. Showcase assessment .............................................................................. 57

4.4. OPEA conceptual model ........................................................................... 58

5. OPEA indicator model.................................................................................... 59

5.1. OPEA indicator model ............................................................................... 59

5.2. Theoretical models applied ....................................................................... 60

5.2.1. Information System Success 60

5.2.2. Telecare Acceptance and Use Model 61

5.2.3. Role of the models 63

5.3. Indicators for pilot and showcase evaluation .......................................... 64

5.3.1. Indicator groups per assessment domain 65

5.3.2. Indicator groups per stakeholder 66

5.4. Indicators description per assessment domain ...................................... 67

5.4.1. Background of the assessment 67

5.4.2. Technical aspects 68

5.4.3. User perceptions 68

5.4.4. Outcomes 69

5.4.5. Economic aspects 69

5.4.6. Organizational aspects 70

5.4.7. Contextual aspects 70

5.4.8. Showcases 71

5.5. ReAAL impact indicators .......................................................................... 72

5.6. Mapping of OPEA indicators and MAFEIP set ......................................... 72

6. Evaluation design ........................................................................................... 74

6.1. Stage I: Pilot evaluation ............................................................................ 74

6.1.1. Phase 1: Preparation, adaptation and test phase 75

6.1.2. Phase 2: Deployment and operation phase 76

6.1.3. Pilot evaluation in the associated pilots 78

6.2. Stage I: Showcase evaluation ................................................................... 78

6.2.1. Description of the showcase 78

6.2.2. Demonstration of the showcase 78

6.2.3. Value assessment 79

6.2.4. Showcase evaluation in the associated pilots 79

6.3. Stage II: ReAAL impact validation ............................................................ 79

6.3.1. ReAAL impact indicators 80

6.3.2. Scenario analysis 80


6.4. Overall planning of evaluation activities and deliverables ..................... 81

7. Evaluation guidelines and instruments ........................................................ 82

7.1. User definition ........................................................................................... 82

7.2. Sampling and Pilot Pearl ........................................................................... 83

7.3. Data collection ........................................................................................... 84

7.4. Data analysis .............................................................................................. 92

7.5. Knowledge portal ....................................................................................... 93

8. Responsibilities and quality assurance ........................................................ 95

8.1. The evaluation team .................................................................................. 95

8.2. Responsibilities in execution of pilot evaluation .................................... 96

8.3. Quality assurance ...................................................................................... 97

References ........................................................................................................................ 99

Appendix A. Detailed indicator list ................................................................................ 103

1. Overall structure of the indicator descriptions .................................................... 103

2. Background of the assessment ............................................................................ 105

3. Technical aspects .................................................................................................. 107

4. User perceptions .................................................................................................... 108

5. Outcomes ............................................................................................................... 117

6. Economic aspects ................................................................................................. 119

7. Organizational aspects .......................................................................................... 121

8. Contextual aspects ................................................................................................ 126

9. Showcases ............................................................................................................. 127

10. ReAAL impact indicators ..................................................................................... 134

Appendix B. Evaluation activities per pilot ................................................................... 135

BRM (Baerum municipality) ...................................................................................... 135

BSA pilot (Badalona services) .................................................................................. 136

IBR (Ibermatica pilot) ................................................................................................. 137

ODE (Odense municipality) ....................................................................................... 138

Puglia (Puglia region) ................................................................................................ 139

RNT (Rijnmond region) .............................................................................................. 140

SL (Smart Living pilot) ............................................................................................... 142

TEA (Madrid region) ................................................................................................... 142

WQZ (German construction sites pilot) .................................................................... 143

Appendix C. Data collection tools ................................................................................. 145

Application Developer T1 Survey ............................................................................. 146

Assisted Person T1 Survey ....................................................................................... 163

Cost template ............................................................................................................. 172

Script for evaluation of Advanced Distribution showcase ...................................... 174


List of tables

Table 1. Recommendations from expert interviews regarding evaluation of an AAL open platform ... 16
Table 2. BRAID scenario framework ... 21
Table 3. Key selling points of AAL applications ... 22
Table 4. Key selling points for open platforms ... 24
Table 5. Classification of stakeholders in the OPEA domain ... 48
Table 6. Network externalities ... 51
Table 7. Comparison of MAST and proposed assessment domains for AAL ecosystem assessment ... 54
Table 8. Indicator codes ... 64
Table 9. Indicator groups per assessment domain ... 65
Table 10. Indicator groups per stakeholder ... 66
Table 11. Indicators Background for the assessment ... 67
Table 12. Indicators Technical aspects ... 68
Table 13. Indicators User perceptions ... 68
Table 14. Indicators Outcomes ... 69
Table 15. Indicators Economic aspects ... 69
Table 16. Indicators Organizational aspects ... 70
Table 17. Indicators Contextual aspects ... 70
Table 18. Indicators Showcases ... 71
Table 19. ReAAL impact indicators ... 72
Table 20. EIP-AHA indicators ... 73
Table 21. Overall rough planning ... 81
Table 22. Who counts as a user? ... 82
Table 23. Evaluation timing ... 85
Table 24. Instruments per stakeholder ... 85
Table 25. Minimal Data Set ... 86
Table 26. Overview questionnaires ... 87
Table 27. Overview focus group interviews ... 90
Table 28. Overview templates ... 91
Table 29. Overview pilot level interviews ... 91
Table 30. Responsibilities per activity ... 96
Table 31. Codes ... 103
Table 32. Timing ... 104
Table 33. Subindicators Background of the assessment ... 105
Table 34. Subindicators Technical aspects ... 107
Table 35. Subindicators User perceptions ... 108
Table 36. Subindicators Outcomes ... 117
Table 37. Subindicators Economic aspects ... 119
Table 38. Subindicators Organizational aspects ... 121
Table 39. Subindicators Contextual aspects ... 126
Table 40. Subindicators Showcases ... 127
Table 41. ReAAL impact indicators ... 134


List of figures

Figure 1. Stages in OPEA framework development ... 15
Figure 2. Evaluation stages ... 19
Figure 3. ReAAL-OPEA framework overview ... 20
Figure 4. The three areas of universAAL platform ... 25
Figure 5. universAAL stack ... 25
Figure 6. The universAAL platform and the three flows of information ... 26
Figure 7. Showcases ... 29
Figure 8. Levels of the ecosystem ... 49
Figure 9. Ecosystem level versus Stakeholders ... 50
Figure 10. MAST model ... 52
Figure 11. OPEA Conceptual model ... 58
Figure 12. OPEA Indicator model ... 59
Figure 13. DeLone and McLean model for information system success (ISS) ... 60
Figure 14. Telemedicine Acceptance and Use Model (TAUM) ... 62
Figure 15. Evaluation plan ... 74


1. About This Document

This document provides the final release of the multidimensional evaluation framework for the ReAAL project. This project has a variety of goals (e.g. contributing to a market breakthrough in AAL, fostering knowledge sharing) and the framework is needed to assure that the achievement of all goals can be measured.

Because this is a public report, we also sketch the context in which the ReAAL project and its evaluation take place. Readers interested in evaluation issues can use this context to learn more about the evaluation of technology in healthcare and AAL and about the setup of the ReAAL project, and decide whether the approach sketched in this report could be relevant for them.

This final release replaces all previous versions.

The framework is used to perform the evaluation. The evaluation data will be collected in internal deliverables per pilot (ID5.1x), which will be sent to the evaluation team at M29 and M37. Based on these data and the data directly collected by the evaluation team, the evaluation reports will be delivered in M34 and M39.

This report contains many (underlined) cross references for the convenience of the reader.

Any questions and comments can be sent to the Lead Editor.

1.1. Deliverable context

Project item Relationship

Objectives This deliverable relates to Objectives O3, O4, O5, O6, O7.

O3 refers to the development of a monitoring and evaluation concept for the adaptation of products and services to the universAAL platform. The evaluation framework proposed in this deliverable contains indicators that are relevant for this evaluation work, e.g. from the platform, developer and technology provider perspectives.

O4 refers to the development of a multi-dimensional evaluation methodology to measure the impact of the deployment of the universAAL ecosystem, for example on return on investment. This deliverable is the final release of this methodology.

O5 refers to the collection and spread of best practices. These best practices are derived from the process and outcome indicators and themes listed in the evaluation framework.

O6 refers to disseminating the socioeconomic evidence collected using the evaluation methodology proposed in this deliverable.


O7 refers to the validation of the effectiveness of the value chain, resulting in replication guidelines. The evaluation results are one source for this validation, which will be reported in other WP5 deliverables.

Exploitable results

The exploitable results of WP5, of which this deliverable is part, are:

Res6: Guidelines for monitoring and evaluating the adaptation of products and services to the universAAL platform, because some of the indicators can be used for this purpose.

Res7: The evaluation framework will be part of the public knowledge portal donated to AALOA

Res8: this result is the multi-dimensional evaluation methodology and framework itself.

Work plan WP5 T5.2: in this task, the framework is developed

WP5 T5.3: here the framework is tested and implemented for use in the evaluation

WP5 T5.4: the part of the framework relevant for the replication guidelines is used in this task

WP4 T4.1: the setup of the evaluation is part of the individual deployment plans of the pilots

WP4 T4.3: in the deployment phase, evidence will be collected and reported in the periodic operation reports

WP6 T6.2: The outcomes of the evaluation will be important input for the replication plans of each pilot site.

Milestones MS2: the first release of the evaluation framework has to be ready by milestone 2 “ready for porting applications”

Deliverables This report builds on D1.3 and D2.1.

Other related deliverables are D5.3 and the ID’s in WP5 of each pilot.

Risks Rk12, Rk13, Rk14, Rk15, Rk16, Rk20. The risks relate to challenges in the pilots' deployment phases, which have a negative impact on the data collection for evaluation. E.g. if the number of users or their motivation for participating in the evaluation is low, there will be fewer data available. The framework needs a minimal amount of data to provide reliable results, but low use and low motivation are also results in themselves. For example, they might point to a gap between user needs and offered services, or to a suboptimal implementation process. However, to demonstrate the value of open platforms, these negative results of a service implementation interfere with the ReAAL objectives and should be corrected for. Because of these risks the framework has a balanced set of indicators (for all stakeholders and all domains). WP5 and WP4 leaders work together to assure that both deployment and evaluation objectives are met.

Rk20 refers to the protection of user information. The framework provides guidelines on the data collection, following the rules set out in the ethical and legal manual.

1.2. Version-specific notes

The evaluation framework was originally planned to have three releases. The first release presented the indicator framework conceptually, and gave a detailed description of the minimal dataset for the evaluation at pilot level, including the process evaluation of the pilots (Part I of the framework). This version was released in M14.

The second release of the evaluation framework (version b) was meant for adjustments based on the piloting experience. Because the project was delayed, the baseline measurements did not take place before the deadline for D5.2b. The Project Executive Board decided that releases b and c of the evaluation framework would be combined into one version, to be released before the start of the deployment phase.

Please note that this report is the final version and, as such, replaces the first release. The content has also been thoroughly revised. This final deliverable is called D5.2.


2. Objectives and methodology

In the ReAAL project the open platform universAAL (uAAL) will be adopted in six countries that have set up one or more pilots with AAL technology-supported services. The objective of ReAAL is to test whether the uAAL platform can indeed be used "outside the laboratory" and bring benefits to thousands of dependent persons in Europe who use technology in their homes, or whose (in)formal caregivers use technology, to support independent living. The majority of these persons are older adults.

The objectives of the ReAAL evaluation are:

1. to collect evidence about the impact of adopting an open platform (uAAL) in the development of AAL services on a large scale, specifically addressing the socioeconomic benefits;

2. to collect the lessons learned from the best practices;

3. to derive from the results forecast scenarios and replication guidelines.

2.1. Methodology to construct the framework

To be able to collect the evidence, the Open Platform Ecosystem Assessment (OPEA) framework was designed. The framework consists of three parts, which have been developed iteratively:

1. OPEA conceptual model

2. OPEA indicators

3. OPEA evaluation design

Several activities led to these parts of the framework, which will be briefly described in the next paragraphs. Figure 1 depicts the rough phases in the development process. The framework itself can be found in chapters 4 to 6.


Figure 1. Stages in OPEA framework development: the preparation phase (pilot requirements, expert interviews, first literature round, overview of HTA models, first validation with stakeholders), the first release D5.2a (assessment domains & stakeholders, second literature round, MAST model, second validation in ReAAL), and the final release D5.2 (third literature round, ISS model, showcase definition, input from the advisory board, MAFEIP framework)

2.1.1. Pilot requirements

The basis for the evaluation framework was the deliverable with pilot concepts (D1.1) and with interoperability and flexibility requirements (D1.2). These reports were analysed for D1.3: “success indicators and requirements assessment criteria”. This part of the work entailed an analysis of pilot concepts (value proposition) and stakeholder goals. The analysis resulted in a list of evaluation questions, and assessment criteria to measure the success of the pilot concept. For each assessment criterion one or more indicators were defined. The list of indicators from D1.3 has been incorporated in the ReAAL-OPEA framework.

2.1.2. Literature search

A literature study was performed to search for evaluation models used in eHealth and AAL. Existing evaluation models could be relevant to check whether our indicators covered all relevant domains. More importantly, existing models might already contain validated indicators.

Because one of the aims of the evaluation is to show socioeconomic benefit, we narrowed the search to evaluation models that take an HTA (health technology assessment) approach². The models were discussed in the evaluation team responsible for task T5.2, and a decision was made on the models that would best apply to the ReAAL case. This was checked against the Description of Work. Eventually we chose the MAST model. In addition, we performed several literature searches on value networks, key selling points of open platforms, barriers and facilitators, models explaining the success of open platforms, and the acceptance of AAL technology. This literature was used to further construct the evaluation framework.

² The list of reviewed literature can be obtained via the Evaluation Team.

2.1.3. Expert input & validation of indicator framework

In addition to the literature search, seven AAL experts were interviewed for their opinion on current challenges in AAL evaluation and the expected benefits of open platforms:

1. a member of the project management team of ReAAL (Smart Homes)

2. a member of the Advisory board of ReAAL (University of Edinburgh)

3. a coordinator of the Ambient Assisted Living Joint Programme in the Netherlands (ZonMw)

4. a project manager at a research institution in the Netherlands (TNO)

5. a marketing manager at a telemedicine company (Eurocom Group)

6. a consultant in eHealth architecture (Nictiz)

7. a researcher involved in the universAAL project (Austrian Institute of Technology)

The interviews were part of the thesis project of a student of Erasmus University, which explains the overrepresentation of Dutch interviewees. The experts made several recommendations for the ReAAL evaluation, which can be found in Table 1.

Table 1. Recommendations from expert interviews regarding evaluation of an AAL open platform.

Stage 1. Prior to implementation:
Decide on how to evaluate
Define what can be regarded as a positive impact
Decide how to measure this impact
Collect data, starting with baseline measures
Measure the effort to develop applications for the platform

Stage 2. After implementation:
Measure the cost of an open platform
Measure the difficulty of implementing an open platform
Measure the difficulty of integrating applications/services
Include multiple technologies/applications

Stage 3. After evaluation:
Make outcomes transparent and accessible
Run a clear and up-to-date project website
Clearly present the stage of the project
Show the current users of the platform

The Evaluation Team took these considerations into account. The Stage 1 recommendations are reflected in the OPEA framework itself: we selected an appropriate design, defined the dimensions of evaluation and defined measurable indicators. The recommendations of Stage 2 focus on a process evaluation of the course of the project; this is taken into account with the qualitative measures. The recommendations of Stage 3 are relevant for the project as a whole. The Evaluation Team contributes to these recommendations by publishing and disseminating the evaluation results.

The OPEA indicators are the result of a combination of bottom-up collection of indicators (by analysing the pilot concepts and technical requirements) and top-down collection of previously validated indicators and relevant evaluation domains. Partners involved in the task of writing the deliverable decided which indicators should be included, taking relevance and measurability into account. In a plenary meeting (Eersel, November 2013), a preliminary list of indicators from D1.3 was presented to the consortium and discussed with the pilots, and a first round of validation was organized at the end of 2013.

During the Rotterdam plenary meeting in March 2014, the evaluation approach was discussed with a member of the advisory board, who is an expert on health technology assessment. Her advice was to use more qualitative measures, and to focus more on the soft impact on the market than on the hard impact on the end user.

Between June and December 2014 the conceptual model was further refined, leading to rephrasing and restructuring of the list of indicators. Members of the Advisory Board also gave their input to the evaluation framework, during an Advisory Board meeting in November 2014.

2.1.4. Development of showcases

A specific activity in which the evaluation team and the technical partners of ReAAL participated was the development of showcases for demonstrating the key features of open platforms. It was agreed in the consortium that as many features of uAAL as possible should be tested, for example the flexibility when extending features of an AAL application, or the interoperability. Discussions about the showcases started at the plenary meeting in Rotterdam in March 2014, and continued during a technical workshop in Paris in April 2014. Because the adaptation and deployment plans of the pilots were not definite at that time, the showcases were updated in July, after the plenary meeting in Odense, and checked again with all pilots. Further discussion took place at the plenary workshop in Berlin (November 2014), after which the showcases were finalized in March 2015, after the Eindhoven workshop.

2.1.5. Evaluation strategy

The evaluation strategy was discussed several times at pilot workshops and plenary meetings. Much of the discussion in these workshops concerned the overall planning of the project, because delays with uAAL were encountered in the first year. In addition, during these meetings and by analysing the deployment plans of the pilots, it became clearer which applications would be deployed where, when, and with how many users. This new information had to be taken into account in the evaluation strategy (e.g. regarding the sample of users).

2.1.6. Relationship to other evaluation activities in Europe

In the domain of active and healthy ageing, and in the AAL domain, several evaluation framework and/or indicator initiatives are ongoing. During the first review the EC asked us to align, if possible, with the Monitoring and Assessment Framework for the European Innovation Partnership on Active and Healthy Ageing (MAFEIP) [1]. Following up on that advice, we had contact with one of the responsible officers, and received the documentation available at that time (Summer 2014).

The indicators selected for ReAAL have been compared to the EIP-AHA evaluation framework, specifically the subset for C2. Given the focus of the ReAAL project we needed more, and more specific, indicators, mainly on technical issues. This stems from the fact that the EIP-AHA evaluation framework was developed to show to what extent a funded project contributed to the triple aim of the programme. Many of the projects funded by the EC have an RCT design; their claims about effects on quality of life (e.g. mortality) can be much stronger than in ReAAL. Furthermore, it is important to stress that testing the effectiveness of AAL applications is not the primary aim of the ReAAL project, but a secondary one.

However, we do take into account all indicator groups in the OPEA model: quality of life, sustainability of health systems, and innovation & growth. In our final report, we will come back to the indicator list from EIP-AHA, and relate our findings to the framework. Paragraph 5.6 shows the mapping between the EIP-AHA indicators and the OPEA indicators.

The evaluation team is aware of other indicator projects in AAL but has so far not received any documentation from these initiatives.

2.2. General approach to the evaluation

2.2.1. Mixed method design

The evaluation relies on quantitative evidence as well as qualitative insight into the development, deployment and operational processes. Even in evaluation studies of AAL with a purely quantitative focus (e.g. Aanesen et al., who focused on the cost-effectiveness of smart home technology [2]), it is argued that qualitative data are equally important to support decision-making in future investments in innovation. Generally, the combined use of qualitative and quantitative data improves the validity of the results and contributes to rich, in-depth knowledge [3].

Building on this rationale, the ReAAL indicators will be collected using both quantitative and qualitative methods. For the quantitative part we will use questionnaires and (cost) data entry templates. The qualitative data is collected through focus groups, interviews and by analysing blogs written by pilot partners. Moreover, the indicators focus on both processes and outcomes [4]. In chapter 6 the design of the evaluation is presented.

The ReAAL project supports a large-scale implementation, which is not yet common in the AAL domain. The experiences from ReAAL are therefore relevant for the AAL community and should be disseminated optimally. The quantitative and qualitative data will be input for the construction of the replication guidelines. The evaluation and dissemination leaders will combine their expertise to bring the results of the ReAAL project to the European AAL community and beyond, in a variety of ways, including events, presentations at conferences, publications, participation in networks, and the project's knowledge portal. The dissemination strategies can be found in the upcoming deliverables in WP5 (D5.4) and WP6.

D5.2 – Evaluation Framework

Page 19 of 175

2.2.2. Two stages of evaluation

The project aims to demonstrate that the potential key selling points of open platforms live up to their expectations. Although this aim ideally includes achieving a first market breakthrough, such a breakthrough goes beyond the scope of this three-year implementation project. Furthermore, not all the evidence can be collected within the time frame of the ReAAL project. Therefore, we chose a staged approach in which the evaluation has a retrospective and a prospective part. Retrospectively, we collect data at the level of the individual pilots and the showcases.

The results from Stage I will provide the basis for the prospective part, where we will forecast in which roll-out scenarios open platforms will bring most benefit to society. Furthermore, we reflect on the impact the ReAAL project has had. This is Stage II of the evaluation, called "ReAAL impact validation".

Stage I follows the life cycle of the project, starting with the preparation, adaptation and testing phase, followed by the deployment and operation phase (see Figure 2).

Stage II takes place in the final months of the project, around the time deployment ends. Because of delays in the project, deployment for all pilots will end about three months before the end of the project.

Figure 2. Evaluation stages

D5.2 – Evaluation Framework

Page 20 of 175

2.2.3. Summary of the OPEA evaluation framework approach

Figure 3 below shows how the different stages and phases of the evaluation framework relate to each other. The conceptual model will be described in chapter 4, the indicator model in chapter 5 and the evaluation design in chapter 6.

Figure 3. ReAAL-OPEA framework overview

(Figure 3 shows the ReAAL-OPEA framework: the OPEA conceptual model and the OPEA indicator model, each built around the value network, the assessment domains and the AAL ecosystem levels, feed into the OPEA evaluation design. The design comprises Stage I, the pilot evaluation in two phases (phase 1: preparation & adaptation; phase 2: deployment & operation) together with the showcase evaluation, and Stage II, the ReAAL impact validation, covering impact indicators, validation of scenarios and an open platform comparison.)


3. The value of AAL and open platforms for AAL

This chapter provides background on the AAL domain and the development of open platforms. It serves as an introduction for those not very familiar with this field, and also describes the universAAL platform, the open platform under study in this project.

3.1. The AAL domain

The ageing population constitutes one of the most critical societal challenges faced by Europe. The European population of 80 years and older is estimated to grow from 4 per cent today to more than 12 per cent in 2060 [5]. Ambient Assisted Living (AAL) technologies and services are being developed to meet the needs of the ageing population. According to Sun et al. [6, p. 109]: "AAL aims at extending the time older people can live in their home environment by increasing their autonomy and assisting them in carrying out daily living activities with the help of intelligent products and the provision of remote services including care services". AAL is an umbrella term for a broad variety of products and services, including information and communication technology (ICT) aimed at enhancing the delivery of health and social care services [7]. The AAL domain is not strictly limited to older persons. Younger people with a handicap or a chronic disease might also benefit from the technologies and services. The ultimate goal of AAL is to increase the ability of people to live independently and longer in their own home.

In the BRAID project, the scope of the AAL domain has been explored and translated to scenarios [8]. Four life domains are distinguished: independent living, health and care in life, occupation in life, and recreation in life. Each domain is divided in subdomains (see Table 2).

Table 2. BRAID scenario framework

BRAID domain Sub domains

Independent living Daily life assistance (home safety and care; personal activity management)

Supporting physical mobility (localisation/position assistance; mobility and transportation)

Health and care in life Monitoring (chronic diseases, sensorial supervision)

Rehabilitation and disabilities compensation (physical compensation; neuro-cognitive compensation; rehabilitation)

Caring and intervention (health & care management; healthy lifestyle intervention; medical assistance)

Occupation in life Ageing at work

Extending professional life

Recreation in life Socialization (social event management; virtual communities)

Learning

Entertainment


By postponing the point where people require help with daily living activities, Ambient Assisted Living technologies and services may have significant economic and social impact on assisted persons, their families and caregivers, health or social service providers, and society as a whole. Still, the distribution of AAL in Europe is limited. Most of the European AAL projects have failed to accomplish real progress in adoption by third-party developers and to create a sustainable AAL market [9]. Limited adoption may partly be attributable to the considerable resources needed for implementation [10]. Further, evidence is lacking about how AAL technologies impact users and other stakeholders, as well as about best practices for such assessments [11]. Therefore, to support adoption of AAL, evidence is needed about its impact on users and other involved stakeholders. Especially in the care for older persons and the chronically ill, informal care is a considerable part of the total care provided [12]. Providing informal care often constitutes a large social, psychological or physical burden on the caregivers [13], emphasizing the importance of capturing their perspectives when introducing AAL technologies.

3.2. The value of AAL

In sum, the implementation of AAL technology, either in the homes of assisted persons (older persons, handicapped persons) or their (in)formal caregivers, aims at sustaining and possibly even prolonging independent living and improving wellbeing. Independent living can reduce societal costs of hospital or nursing home admissions. As such, the European Commission speaks of the triple win of AAL: (1) quality of life improvement for target users; (2) efficiency gains in care systems (sustainable care); and (3) an effect on economic growth and jobs (innovation-based competitiveness) [14]. As a summary, the key selling points of AAL applications are displayed in Table 3.

Table 3. Key selling points of AAL applications

Key selling points of AAL applications

Empower people to live independently

Improve quality of life of the users of these applications and their informal carers

Support informal carers in their care of an assisted person

Improve efficiency and quality of formal health and social services

Reduce costs of providing health and social care services

Postpone or prevent institutionalization

Reduce opportunity costs (when e.g. a caregiver can work instead of caring for a relative)

Although the development of AAL is ever growing, there is still a lot of discussion about the evidence for these key selling points. So far, as in the eHealth domain, the results of AAL projects are not always conclusive on the socioeconomic benefits for the system. Many AAL projects are small and lack the scientific robustness to be able to show these benefits. And after the project phase ends, many new AAL technologies are not scaled up. In addition, European countries differ greatly in their financing structures: costs incurred by one stakeholder in the value network might lead to savings for someone else. A lot of research is still needed to validate these key selling points, and the conclusions might also differ between various types of AAL. The ReAAL project makes only a modest contribution to this discussion, because its primary aim is not to demonstrate the effectiveness of each of the piloted applications.

3.3. Open platforms for AAL

Although multiple AAL services are being developed, these are often expensive and cover only a small part of the needs of the elderly and other assisted persons [6]. A considerable problem is that most of these services lack mutual integration and the ability to share information between different stakeholders and between services. One of the greatest challenges for AAL is the closed design of current products. The closed design requires a person in need of specific assistance in the social, functional or medical domains to use multiple, non-integrated technologies to cover all needs [15]. This results in a lack of efficiency, integration and interaction between demand and supply in the current AAL services.

A solution to these challenges of "closed" AAL technologies is the use of an "open middleware platform" on which all AAL technologies can be made compatible. A middleware platform is a reusable software framework for the development of assistive systems [16]. In this report we refer to "open middleware platforms for assistive technology" simply as "open platforms". In general, a platform can be described as follows:

[A] platform consists of those elements that are used in common or reused across implementations. A platform may include physical elements, tools and rules to facilitate development, a collection of technical standards to support interoperability, or any combination of these things (...) a platform can organize the technical development of interchangeable, complementary components and permit them to interact with one another. [17].

When a platform is completely open it does not belong to a specific body and it has no controlling authority. Therefore, it is accessible to everyone and it provides free access to the technology supply [18]. In some cases this openness of a platform even goes beyond free access to also promise the freedom to "modify, transform, and build upon previous development" [17, p. 1851]. This results in easier and less costly ways for small and medium-sized enterprises (SMEs) to bring innovative technology to the market. Whether or not to open a new technology is the most crucial decision an innovator has to make. Opening might give the technology additional impulses, but might also leave the innovator with little control to secure profits [14]. However, openness might also stimulate wider adoption of the technology by lowering consumers' concerns about being locked in to a single vendor [18].

An open platform for AAL technologies and services aims to support the development of a unified market that prevents vendor lock-ins. It increases the interoperability of applications, i.e. it improves the technical and semantic interaction between different technologies and electronic systems [19]. This will result in increased ability to share patient/user information and knowledge to support innovations. As such, open AAL platforms can provide “the needed support for hardware, communication, distributed networks, user interfaces, and dependability to allow for the effective development of applications that are context-aware, personalized, and anticipatory” [20]. Consequently, open platforms can contribute to more sensible use of the health care budget by increasing the affordability and usability of AAL services [10].


In the literature, there is discussion about the consequences of open platforms for the market. While some authors claim that open platforms will induce monopolies (e.g. [21]), others argue that the open platform itself can lead to a monopolistic position. As a rule, monopolies do not lead to lower prices for the end user. It is also said that (dominant) open standards might hinder innovation, whereas others expect that an open platform results in decreased development and production costs, which might increase innovation [22]. So far, there is not much empirical evidence either way.

To sum up, the implementation of an open platform for AAL aims at reducing the costs of developing AAL technology and providing AAL services, while at the same time allowing more flexibility to change devices and build interoperable solutions. The key selling points of open platforms, resulting from this analysis, are shown in Table 4.

Table 4. Key selling points for open platforms

- Lower costs than proprietary platforms
- Reduced costs for implementing common functionalities that are shared among several services (e.g. communication protocols)
- Lower production costs for software
- Lower operational costs, because several devices can run on the same platform
- Adapted applications can be implemented elsewhere at lower cost (transferability)
- Data from different sensors can be combined (interoperability), which might result in fewer devices needed to provide the same service, or different services sharing the same device
- Services can exploit information produced by other services and build on top of them
- More flexibility to change the vendor of hardware or software
- Interoperability can enable the creation of data repositories for evidence-based improvements
- Sharing of knowledge among developers and practitioners can advance the field (innovation)
- Support for a competitive and open marketplace for technologies and services

These key selling points are relevant for the developer community, the AAL application providers, and the (health) service providers who decide to buy and implement AAL for their clients, and thus for the AAL market as a whole. The key selling points will be further specified in paragraph 3.5.

3.4. The open platform universAAL

The universAAL project started with the mission of studying all previous AAL projects and integrating them into a single, consolidated platform that represents what any AAL platform should be. Not only is the architecture defined, but a reference implementation has also been developed to demonstrate its soundness. In order to overcome the lack of an ecosystem, the project also addresses the establishment of a developer community with source code and documentation and, most importantly, a distribution channel to make AAL technologies available to customers, in the fashion of the now widely used app stores.

The platform is organized around three areas, as illustrated in Figure 4, which target the whole value chain of stakeholders in AAL service provision.

Figure 4. The three areas of universAAL platform

1. Operation support

The platform provides a runtime environment enabling a virtual ecosystem for AAL Spaces (physical spaces, such as the home of an assisted person, in which independent-living services are provided to people who need any sort of assistance). In such a virtual ecosystem, hardware as well as software components can “live” while being able to share their capabilities. Additionally, operation support includes tools for the management of AAL Spaces and their components (e.g. installation, configuration and personalization tools).

Operation support is the bundle of software that runs in the deployed home. It can be thought of as a stack (see Figure 5). As with any piece of software, it needs a container, whether an operating system or some sort of platform; currently universAAL supports OSGi and Android.

Figure 5. universAAL stack (layers, bottom to top: Container, Middleware, Managers, Application)
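To make the container layer concrete, the following minimal sketch shows an OSGi bundle activator, the typical entry point through which a component deployed in an OSGi container starts and stops. Only the standard OSGi API is used; the component class itself and what it would register are hypothetical placeholders.

    import org.osgi.framework.BundleActivator;
    import org.osgi.framework.BundleContext;

    // Minimal OSGi bundle activator: the container calls start()/stop()
    // when the bundle is installed into or removed from the platform.
    public class MyAALComponentActivator implements BundleActivator {

        public void start(BundleContext context) {
            // Here a real component would connect to the universAAL
            // middleware, e.g. register context subscribers or service
            // callees, using the container-provided context.
            System.out.println("AAL component started");
        }

        public void stop(BundleContext context) {
            // Release middleware connections and other resources.
            System.out.println("AAL component stopped");
        }
    }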


Execution Environment

The execution environment, or middleware, facilitates the sharing of three types of capabilities: Context (data, based on shared models), Service (control) and User Interaction (view). These capabilities are used to connect the different components in the system, which may be distributed over different processing nodes in the AAL Space (see Figure 6). Connecting a component to the universAAL middleware is therefore equivalent to using the brokerage mechanisms of the universAAL platform in these three areas to interact with the other components in the system.
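The brokerage model can be summarized with a simplified sketch. The interfaces below are illustrative only and do not reproduce the actual universAAL API; they show how a component sees the three flows, context (data), service (control) and user interaction (view), without knowing on which node its counterparts reside.

    // Hypothetical, simplified model of the three universAAL buses;
    // real class names and signatures in the platform differ.
    interface ContextBus {                 // data flow (shared models)
        void publish(String subjectUri, String property, Object value);
        void subscribe(String pattern, ContextListener listener);
    }

    interface ServiceBus {                 // control flow
        Object call(String serviceUri, Object... params);
    }

    interface UIBus {                      // view flow
        void requestInteraction(String userUri, Object dialog);
    }

    interface ContextListener {
        void onEvent(String subjectUri, String property, Object value);
    }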

Figure 6. The universAAL platform and the three flows of information

Platform Services

The execution environment is used to connect the different components of the system. Some components are provided out of the box: these are the platform services, or Managers.

These components offer common AAL operations; examples include (but are not limited to):

History tracking and querying: all context events are stored in a database that can then be queried to, for example, obtain the last known state of a particular device.

UI handlers can present information to the user in a situation-aware way, using the appropriate modality in the appropriate location where the user finds her/himself. In return, UI handlers deliver the collected user input back to the application. The role of UI handlers in AAL Spaces is therefore much like the role of browsers on the Web. The major difference is that different UI handlers might utilize different modalities (graphics, speech, gesture, etc.) and use different devices in the environment to perform the delegated task of interacting with users.

Gateways can be used to interface between different AAL Spaces, enabling a hierarchy of AAL Spaces. This hierarchy makes it possible to split an AAL Space so that a fixed AAL Space at home and an itinerant AAL Space, centred on the user’s mobile phone, can interact and offer services regardless of whether the user is at home. Another important use is connecting a personal AAL Space to a cloud AAL Space, where more complex services can be provided to the user.


Rule engines allow easy configuration of the responsiveness of the AAL Space. Rules usually take the simple form “if this, then do that”: the rule engine monitors the context of the AAL Space and, when a state matches the premise of a rule, executes the rule, typically by calling a service (a minimal sketch of this pattern follows this list).

Service orchestration and composition: this component allows complex services to be built through the connection and coordinated execution of simpler services.

Exporters are components that connect external or third-party hardware to the universAAL middleware. universAAL offers exporters for KNX, ZigBee, Bluetooth, and X73 technologies, and this list is continuously expanding.
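As announced above, here is a minimal, self-contained sketch of the “if this, then do that” pattern behind a rule engine. It is illustrative only and does not correspond to a specific universAAL component; the presence/light example in the comment is hypothetical.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Map;
    import java.util.function.Consumer;
    import java.util.function.Predicate;

    // Minimal "if this, then do that" engine over context states.
    class SimpleRuleEngine {
        private final List<Predicate<Map<String, Object>>> premises = new ArrayList<>();
        private final List<Consumer<Map<String, Object>>> actions = new ArrayList<>();

        void addRule(Predicate<Map<String, Object>> premise,
                     Consumer<Map<String, Object>> action) {
            premises.add(premise);
            actions.add(action);
        }

        // Called whenever the context of the AAL Space changes.
        void onContextChange(Map<String, Object> state) {
            for (int i = 0; i < premises.size(); i++)
                if (premises.get(i).test(state))
                    actions.get(i).accept(state);  // typically calls a service
        }
    }

    // Hypothetical usage: if presence is detected and it is dark,
    // call a lighting service:
    //   engine.addRule(s -> Boolean.TRUE.equals(s.get("presence"))
    //                       && ((Integer) s.get("lux")) < 10,
    //                  s -> lightService.call("turnOn"));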

Installed Services

Installed services are the actual applications: any piece of software that can run on the container and that makes use of the uAAL buses or Managers, whether by consuming capabilities or contributing them, in order to provide an AAL service or part of one.

2. Development support

Development support covers everything needed to help developers produce AAL solutions, i.e. installed services. It includes tools, software repositories, guidelines, training materials, etc. The main results already available are:

Developer depot: the repository where any developer can obtain the open source code of every universAAL component.

AAL Studio: a set of easy-to-use development tools aimed specifically at external developers, equivalent to an SDK for AAL applications.

Developer handbook and tool tutorials: a set of wiki pages and other documentation resources providing clear instructions and guidelines so that external developers can find their way into the framework.

Personal support: through forums, mailing lists, and issue trackers, universAAL developers offer personalized support to developers who require it.

These resources can be found at http://depot.universaal.org/

3. Market Support

Market support provides facilities for linking demand and supply. The major result in this area is the uStore, where demand and supply meet. The uStore can be thought of as the digital application store for universAAL-based AAL services, but there are key differences from current digital stores.

Roles within the uStore are distinct. Technology providers/AAL application providers3 are producers of software and hardware; they publish their products on the uStore and receive feedback from end users or service providers in the form of issue reports and/or feature requests.

3 In this report the terms Technology provider and AAL Application provider are used interchangeably. Both refer to all technical stakeholders in the AAL domain, except for the provider of the (open) platform.


Service providers can browse the available products (provided by technology providers) and bundle them into a full-fledged AAL service by adding the necessary external resources, such as cloud resources or human resources (for example, personnel for a housekeeping service). Several sets of resources (software, hardware, and external) are bundled into packaged AAL services that can then be offered as purchasable products to end users. Technology providers that develop “stand-alone products”, i.e. software able to work on its own, may register as service providers to bundle and offer the application by itself. Offering an application on its own does not prevent other service providers from bundling it (through a legal agreement) with other applications to offer more complex AAL solutions.

End users browse the packaged AAL services, and can buy and automatically install the desired AAL service. Additionally, end users can rate the solutions, report bugs and request new features, and let the system notify them when matching offers become available for a previously unsuccessful search.

Market support, and the uStore, also include a set of original AAL applications that showcase the capabilities and possibilities of the universAAL platform. These AAL applications may be freely bundled into any AAL service but, most importantly, can be used to demonstrate the capabilities of the universAAL platform to developers and service providers.

NOTE: the uStore is technically not ready to be used in the ReAAL project. However, we will include opinions on the value of such a marketplace in the evaluation of universAAL.


3.5. Showcases for the value of the universAAL platform

The value of the open platform universAAL can be demonstrated with fourteen showcases (Figure 7). A showcase demonstrates key features of an open platform, compared to other solutions.

Figure 7. Showcases

In the next paragraphs these fourteen showcases are defined and their potential value is argued from different stakeholder perspectives.

A more elaborate description of the showcases, with example use cases, can be found in the Knowledge Portal. The showcases will also be central to the marketing materials of ReAAL.

[Figure 7 lists the fourteen showcases: 1. Resource Sharing; 2. Plug and Play; 3. Evolution; 4. Services Integration; 5. Advanced Distribution; 6. Advanced User Interaction; 7. Personalized Content Push; 8. Ambient Intelligence; 9. Scalability; 10. Service Transferability; 11. Integration with legacy systems; 12. Security & Privacy; 13. Administration; 14. Enhanced Market communication and distribution.]


3.5.1. Resource Sharing

Resource sharing refers to different AAL applications or services being able to share hardware, data and network resources, while operating as if each application had its own resources. In the field of Ambient Assisted Living, the most commonly shared resources are hardware and data. The main advantage is that resource sharing reduces the amount of hardware needed to implement AAL services; without resource sharing, each AAL application and service has to be built on a separate platform. This showcase is implemented when different applications share the same resources (e.g. two applications using the same presence sensor).

Technology provider's view

From a technology provider perspective, this feature offers the opportunity to have their hardware exploited by several services in parallel. This motivates them to break the hardware-software chain and provide their technology in a more standard and generic way, to be used later by several applications and services.

Service provider's view

From the hardware resource sharing point of view, this avoids vendor lock-in for the service provider: the provided application no longer depends on a specific device type or manufacturer. From a data sharing perspective, this capability allows applications to be personalized by sharing relevant data and information with other applications regarding the environment, the user profile, preferences, etc. It also allows service providers to interact directly with the other available services.

Assisted person's and Caregiver's view

The showcase offers an assisted person the opportunity to benefit from more AAL services without being obliged to pay for extra hardware; a simpler hardware environment can be realized in the end user’s home. Sharing data between applications and services allows the end user to benefit from a more adequate and friendly user environment in which applications can automatically adapt to his/her needs.

Resource sharing gives the caregiver a more flexible working environment, in which software other than that owned by the user can be used to obtain the same kind of information (remote blood pressure readings, for example).

Government's view

From the government perspective, the resource sharing features of an open platform are the main strategy to avoid vendor lock-in. In parallel, they increase the flexibility of installing and exploiting AAL technology, a step forward in scaling up the AAL culture in Europe.

3.5.2. Plug and Play

The “Plug and Play” showcase expresses a fundamental characteristic of the platform: with Plug and Play there is no vendor lock-in, so the end user can be independent of any single vendor for hardware products or services. The most important aspect is that resources are interchangeable, so it is possible to buy similar services from different vendors without incurring significant costs and risks. Another aspect of Plug and Play is the detection and configuration of hardware devices with little or no user involvement. The platform in fact provides Plug and Play of AAL applications and services, so the concept applies in an extended sense. The main advantage relates to interoperability and adaptability: the platform allows applications and services to adapt with little effort even when hardware changes occur. Note that this definition should be contextualized against the complexity of the whole system. For example, for a smartphone application the Plug and Play showcase should be read as “no user involvement”, whereas for a complex integrated home automation system it may mean “as little involvement of technicians as necessary”.
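The following hypothetical sketch (not the actual platform API) illustrates what Plug and Play means for application code: the application binds to an abstract device category rather than a concrete vendor product, so a newly plugged-in sensor of any brand can be used without code changes.

    // Hypothetical sketch: the application depends on an abstract
    // device category, never on a vendor-specific product.
    interface PresenceSensor {                 // abstract device category
        boolean presenceDetected();
    }

    interface PresenceSensorListener {
        void sensorAdded(PresenceSensor sensor);    // plugged in
        void sensorRemoved(PresenceSensor sensor);  // unplugged
    }

    class PresenceApp implements PresenceSensorListener {
        public void sensorAdded(PresenceSensor sensor) {
            // Works identically for brand A or brand B devices.
            System.out.println("New sensor, presence: "
                    + sensor.presenceDetected());
        }
        public void sensorRemoved(PresenceSensor sensor) {
            // Re-bind to another matching sensor, if available.
        }
    }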

Technology provider's view

The ability of the platform to provide Plug and Play both for hardware and for services/applications increases the number of technology providers that may take an interest in the platform, even if they are not officially involved in the project. The vision of a “universAAL” Plug and Play facilitates the participation of providers that have developed applications on other open platforms. Another advantage is that providers will not incur high training costs for staff devoted to new AAL platform technologies.

Service provider's view

From a service provider (SP) point of view, the Plug and Play showcase increases, for example, the ability to choose from different “default” hardware sensors for service provisioning, which allows for cost optimization. Another interesting aspect is that an SP may offer different packaged solutions for a specific service, for example different software/hardware sets, which may help the SP build a tiered service business model (the more the client pays, the more he gets).

Assisted person's and Caregiver's view

In this context, end users may benefit indirectly; costs are likely to be reduced for the following two reasons:

installation work can be carried out by personnel who are not highly qualified;

the ability to switch between different hardware or software solutions (e.g. an AAL service or application that can work with different brands of a specific product/sensor).

Government's view

From the government perspective, the Plug and Play showcase plays an important role: increased interoperability and adaptability of solutions will facilitate the spread of a new concept of AAL technology use, rapidly increasing knowledge about the use of “open” solutions.


3.5.3. Services Integration

Services Integration refers to the capability of different applications (even those developed by vendors who do not know each other) to cooperate and exchange data and logic with little effort.

One platform model to achieve this kind of interaction is Semantic Interoperability, the basis of which is the modelling of service logic and data through ontologies. Each application views its own tasks (and data) as the domain restricted by the ontologies it uses; this allows for both very concrete and very general services. This paradigm also enables smooth integration between different services, even when such integration is not planned or even foreseen. By means of extension or mapping of the underlying ontology, each application works on the same data resources with its own view, leaving the mapping of the operations to the semantic engine.

Logic can also be modelled through ontological concepts; one example of this is the OWL-S standard4. A Semantic Service Oriented Architecture is the extension of traditional Service Oriented Architectures to use semantic definitions of services instead of constrained APIs or interfaces. APIs thus become ontological definitions that can be extended or mapped. The benefit is, on the one hand, ease of integration and composition of services, as ontological models offer very low-coupling APIs with the same properties as the ontology itself; on the other hand, it offers the capability of using virtual services, i.e. the logic may include calls to services that may not yet be defined. Semantic Service Oriented Architectures of course enable different vendors to implement the same service (see also the Plug and Play showcase).
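The following hypothetical sketch illustrates the idea of a semantic service request: instead of invoking a fixed, vendor-specific API, the application describes the effect it wants in terms of ontology concepts, and a semantic broker matches the request to whatever implementation is available, possibly one that did not exist when the application was written. All names are illustrative; they are not the universAAL or OWL-S API.

    // Hypothetical semantic service request: expressed against
    // ontology concepts, not against a vendor-specific interface.
    class SemanticRequest {
        String ontologyClass;      // e.g. "Lighting#LightSource"
        String property;           // e.g. "Lighting#brightness"
        Object requestedValue;     // e.g. 100
    }

    interface SemanticServiceBroker {
        // Matches the request against all registered service profiles,
        // applying ontology extension/mapping where needed, and invokes
        // the best match. Returns null when no provider exists yet:
        // a "virtual service" that may be satisfied by components
        // installed later.
        Object resolveAndCall(SemanticRequest request);
    }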

The service integration showcase aims to stress the platform's ability to host new services, adding functionality to the current ones in a very smooth and simple way.

Technology provider's view

Services integration offers technology providers the capability to modularize each application to the extreme, making high-quality products at a fraction of the price thanks to the re-use of components already developed.

The capability of composing services enables technology providers to easily create new components for an application, enhancing their ability to react to changing requirements.

Programming against virtual services enables technology providers to divide the development team and lessen the inconvenience when one team fails to meet deadlines; in a sense, it supports the parallel development of dependent components.

Semantic Service Oriented Architectures enable technology providers to specialize in particular areas of knowledge. This will create specialized markets, for example for artificial intelligence algorithms, each with its own domain view (i.e. ontology); each niche can easily be re-used by other domains by means of ontological mapping, without affecting either side’s view.

In general, services integration offers technology providers, especially small businesses, a very low entry barrier. Whether to a highly specialized market or a general one, offering products such as components is easier than offering full solutions developed in-house. In fact, for more general markets, full solutions can be offered easily by taking advantage of the open component market.

4 See: http://www.w3.org/Submission/OWL-S/

Service provider's view

A service provider will almost certainly profit from the specialized competition in the technology provider market. The service provider may choose from different technology providers, or even several technology providers simultaneously, for cost optimization without necessarily affecting final quality.

For those service providers who also act as technology providers, the specialized market will ease their own development: an open market for components, rather than applications, makes it easier to provide services by composing different components from that market.

Services integration offers flexibility, especially the option to choose from different implementations. This flexibility may also be offered to clients as an asset of the service itself. Deployment flexibility and frequency can be an asset for clients, keeping the service highly personalized and up to date.

The most important result of services integration is the interaction of different services: services may be exploited in new, generally more efficient, ways. Examples include the Apple Health and Google Fit5 frameworks; both are centred on the client’s health and allow third-party applications to cooperate towards the final objective of improving the client’s health. For a service provider in the health domain, providing different applications to clients to track different aspects of their health helps gather more information and a more precise picture of the client’s health, which will undoubtedly improve the service provided. Health is of course one domain where these new models fit very well, but the same kind of interaction can be exploited within other domains, and even between domains.

Assisted person's and Caregiver's view

From the end user’s perspective, the expectation is a radical change in the market, following in the steps of the mobile app revolution, where a single, top-down, closed solution was opened to different vendors, creating a new market in which users choose the platform and the services on top of it, without impositions or exclusions.

The most important impact of this flexibility for the end users will be the freedom to customize their own solution at a fraction of the price.

Government's view

The prospect of a revolution similar to the mobile revolution, whose sector is now one of the most important in the global economy, is indeed appealing.

Lowering the entry barrier to the AAL market will certainly foster small businesses in the sector. These small businesses do not have to compete directly with the big corporations already monopolizing the market; in fact, small businesses might overtake big companies if their solutions best fit the population’s interests. There have been precedents for this in the internet sector, where the principal factors were the openness of the platform, the low entry barrier for small businesses and free access to consumers.

5 https://www.apple.com/ios/whats-new/health/ & https://fit.google.com


Fostering small businesses not only helps keep a healthy inflow of new and innovative ideas into the sector (making it more robust), but also helps reduce unemployment rates. Demand for highly qualified personnel will also have an impact on higher education indicators.

In general, open, interoperable, flexible markets will benefit all.

3.5.4. Evolution

The “Evolution” showcase concerns change over time in the particulars of a single deployment, whether hardware, software, features (installed services) or context. From the application/services point of view, it is possible to develop new functionalities or services as “extensions” of previously provided functionality, adding new tools or instruments that are useful, for example, to simplify installation or automatic updates; new functionalities may also be useful for troubleshooting and fixing malfunctions. From the architectural and hardware point of view, “evolution” can be interpreted as the addition of sensors or actuators without changing the core functionality of the platform, with the goal of improving general performance. Last but not least, “evolution” of the context is also possible, for example going from a single-user deployment to a multi-user deployment.

Technology provider's view

Evolution gives technology providers the ability to integrate new functionalities or increase the number of services with minimal effort. Moreover, the addition of new features and services stimulates competition, because a technology provider can increase the potential of its products and services relative to a competitor who has already developed additional services on top of the same service/application.

Service provider's view

From a service provider point of view, this showcase provides instruments for testing applications and checking whether a service that is not currently integrated will work. In this way, service providers can offer customizations of specific services that, for example, have been demonstrated to perform well at other sites. Another aspect is the capability of the service provider to upgrade users from one service category to a superior one, evolving the single deployment; in this way, service providers can evaluate the opportunity for new business models. Moreover, the “evolution” of a deployment (in terms of upgrading) can be a valuable asset for service providers, for example making it possible to solve issues in existing deployments.

Assisted person's and Caregiver's view

For the end user, the extension of the functionality of a platform component has considerable advantages; for example, through “evolution” it is possible to customize an application according to the specific needs of the assisted person. From the caregiver’s point of view, the same application can be specialized based on monitoring needs; in this way, different evolutions of the same application are available, increasing familiarity with the tools and the level of satisfaction.


Government's view

The Evolution showcase is important for government agencies, as it allows better and more effective management of financing instruments. For institutions, it is fundamental to invest in the improvement of existing applications or services that are already popular among end users; this improvement can be obtained by extending the functionality of applications and services, contributing to the provision of new solutions required by the market but not yet realized. Moreover, “evolution” helps keep the market active and always looking for innovative solutions that can be used to evolve current deployments.

3.5.5. Advanced Distribution

The platform allows applications developed on top of it to communicate with each other and share resources regardless of the node on which they are deployed. Of course, for there to be any degree of distribution there must be more than a single node in the platform. Distribution is truly “advanced” when communication and resource sharing are effortless and transparent (applications are oblivious to the actual node where they or the resources are located), and when the nodes are separated beyond the usual reach of common platforms.

Nodes in the platform can be distributed not only within the same network of an end user environment, but can also be located in server backends (owned by service providers or third parties) or even be mobile, worn by the user while outside his/her house. This represents advanced distribution that is not only within the user’s home but also mobile and cloud-based. Applications and resources can communicate across all these nodes, no matter where they are.

This adds another level of distribution, where nodes from different users and spaces can virtually co-exist in other, shared nodes. This is most visible in cloud-based scenarios, where the server of a provider can host virtual nodes for each of its users, while itself being a node for them.

Technology provider's view

Developers can choose from all the different possibilities of where to deploy their applications, and can be sure that they can access all the features of the platform regardless of the node they choose. They do not need to worry about communication protocols or endpoint addresses, since these are hidden by the platform’s distribution features. As far as they are concerned, the resources or applications they require are virtually in the same place where their application resides. If the applications they develop are designed to be cloud-based, the server side of the application can discriminate which user node should be addressed in each interaction. Distribution capabilities can also be used to set up load-balancing solutions.

Service provider's view

All the different possibilities for deploying applications remove constraints on service providers in deciding how and where they want to deploy the platform and their applications. Whether on a computing device at the user’s home, a smartphone, a tablet, a web application or a combination of any or all of the above, new and existing applications can be made compatible with the platform.


Assisted person's and Caregiver's view

Advanced distribution is transparent to end users. They interact with their applications without having to worry about where exactly their data is stored or which device they should use to access it. A kind of “computational load-balancing” can be set up as well, moving resource-consuming applications to the more powerful nodes while, for instance, the mobile phone node of the assisted person carries only light-weight apps.

Government's view

The possibility of having different types, sizes and distributions of nodes manufactured by different companies boosts competition and opens up more purchasing choices.

3.5.6. Advanced User Interaction

When an application interacts with the user, it can do so in many ways. For example, it can interact through different channels (i.e. multimodality), such as a screen or spoken text (text to speech, TTS); it may also combine different modalities in a given interaction (i.e. output fusion) and collect input from the user through different channels (i.e. input fission). Interaction may also be personalized to user needs: for example, users with low vision may require larger script, or no lettering at all in favour of a TTS channel. Interaction may also take place in new paradigms such as Human-Environment Interaction (HEI), as opposed to traditional Human-Computer Interaction (HCI); the difference is that in HCI all possible channels of interaction are restricted to a single computer’s peripherals, whereas in HEI all available peripherals in the environment (and sometimes the environment itself) may be used to interact with the users. HEI allows for features such as “follow me” interaction: when the user moves, the interaction remains accessible, because the system may decide to switch the display to the nearest screen, or change the modality altogether; of course this requires extra processing to maintain privacy (context information on whether other users are present in the environment and whether the information shown is privacy-sensitive). A consistent look and feel is important in AAL because users learn to use the different applications faster and more easily, avoiding the distress of facing new systems.
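A small hypothetical sketch of declarative presentation: the application states what must be communicated and to whom, while the UI framework (the UI handlers) decides how, with which modality, on which device and where. The names below are illustrative, not the universAAL UI API.

    // Hypothetical declarative UI request: the application supplies
    // the "what", UI handlers decide the "how".
    class UIRequest {
        String addressedUser;   // whom to reach
        String content;         // what to communicate
        int priority;           // urgency; may influence modality choice
    }

    interface UIBroker {
        // Forwards the request to the UI handler best suited to the
        // user's current situation: the nearest screen ("follow me"),
        // TTS for a user with low vision, a mobile device when the
        // user is away from home, etc.
        void present(UIRequest request);
    }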

Technology provider's view

There are many platforms and frameworks specific to User Interaction (UI) tasks; most of them allow for a consistent “look and feel” within an application, and even between applications using the same framework. Some platforms allow for a consistent look and feel system-wide, across all applications as well as the system itself. These types of platforms not only allow applications to be natively multimodal and manage output fusion and input fission; they also allow for system-wide personalization, opening the gates to adaptability (allowing change in order to meet user requirements and preferences prior to the start of the interaction) and adaptivity (allowing the system to alter its interaction at run-time). These platforms generally work through the independence between the application and presentation layers. UI-specific tasks are generally among the most tedious for software developers; using a platform that offers all these features (multimodality, input fission, output fusion, adaptability, adaptivity, HEI) may simplify developing the UI by enforcing best-practice models such as the independence between application and presentation layers, and declarative presentation.


Learning a new system is always time consuming and may produce errors at first. Also, the independence between application and presentation layers, and the consistent look and feel, constrain the developer in certain cases, for example when developing custom graphics such as in a video game.

Service provider's view

User interaction is one of the most important aspects of the system; after all, the UI is the only part of the software system the user will actually see.

Advanced UI offers clients a sense of a whole package: the system solves many of their problems, even though in reality there are many applications devoted to each subtask. It may be branded (especially via the consistent look and feel), so that this sense of problem solving is associated with the service provider’s brand, nourishing the bond between clients and the service provider and helping to retain the service.

On the other hand, in a scenario with multiple service providers (each offering a set of applications), the consistent look and feel will make it difficult to differentiate.

Assisted person's and Caregiver's view

User interaction is where end users get most of their feel of the system, but also where they need the most help. Advanced user interaction may reduce, and even eliminate, the need for help: uniformity helps users learn the system quicker, HEI allows a more natural and efficient interaction with the system, and personalization in particular helps users feel the system is specifically designed for them, increasing their engagement. HEI is an innovative and futuristic way of interaction that end users will be proud to “own”.

3.5.7. Personalized Content Push

Content is information and experiences that provide value for an end user; personalized content adds even more value for each individual. Personalized content may be information related to the user profile, or knowledge the user might be interested in based on the user’s context. Personalized Content Push can then be thought of as suggesting new content to the user, such as possible new services the user might be interested in. The suggested content is selected from within the user’s workflow (i.e. no personal information is processed by third parties).

Technology provider's view

Personalized applications are more difficult to design and implement. However, the proactivity of the system enhances the user experience and, in some cases, simplifies the workflow.

Service provider's view

When the content consists of services, especially personalized services that the user is most likely to be interested in, there is a higher probability of the user requesting such services. Service providers benefit from this just as they do from targeted advertising.

Other types of personalized content are interesting too. Personalized content makes users feel better cared for, increasing their satisfaction with their service provider.


Assisted person's and Caregiver's view

Personalization makes assisted persons feel more engaged with the service, which in most cases is what is required for the service to deliver better quality of life.

For example, a medication intake monitoring service offering personalized information and education about the medication the assisted person is taking (especially when the user is not complying with the treatment) will help the end user understand the importance of complying with the treatment, instead of being annoyed by constant reminders and ultimately ignoring the service altogether.

Government's view

Personalized care systems are expensive. But when a system is able to adapt to each user’s needs, it not only gains better acceptance; its chances of being a successful measure increase drastically.

3.5.8. Ambient Intelligence

Ambient intelligence is the ability of a system, based on distributed computing devices, sensors and actuators, to act or react to the conditions of the environment and the status of its users. It can thus provide users with solutions and services specifically tailored to them, covering their needs at that moment in that place. Ambient intelligence can take full advantage of environments with pervasive technology, having access to context information, user inputs and computational power from “everywhere”. It must be noted, though, that ambient intelligence is a capability provided by the system; in order to leverage it, an application must make proper use of it.

Technology provider's view

Ambient intelligence is at the core of the platform in ReAAL, as it allows application developers to take advantage of all the contextual information gathered by sensors, act on the environment, and obtain information about users, all in a unified way. It is their responsibility, though, to ensure that this “ambient information” is properly used and presented to the user in a convenient way, so that the desired features of the application emerge from the system for the user when needed.

Service provider's view

The customized and context-aware response of applications that benefit from ambient intelligence can be seen as an advantage over competing applications that confine themselves to their own data and capabilities. In applications focused on remote monitoring and the like, the information gathered from the environment can provide an overview of the status of both the environment and the user (provided that the right sensors are in place).

Assisted person's and Caregiver's view

Interactions with an ambient intelligence application are more personal and familiar, since they are already tailored to the user’s needs and environment. These applications can also reduce user interaction effort, since they can react to the conditions of the user’s environment on their own. In summary, applications that make proper use of ambient intelligence should be easier to use and require less user intervention.


Government's view

It is the goal of Ambient Assisted Living to reduce the long-term costs of healthcare assistance thanks to continuous monitoring of health conditions and to prevention. Information and responsiveness are maximized in a full ambient-intelligence environment.

3.5.9. Scalability

Scalability concerns increasing the number of deployments or users. Typically, deployments are homes where the system is installed; increasing the number of deployments means that the system is installed in more homes. For this, the system has to be easily installable and configurable. The system must also be resource-optimized: many deployments will increase the need for, for example, more capacity in cloud services.

Technology provider's view

Developers usually do not take mass deployment into consideration during the implementation phase, so a platform that helps personalize the system for each deployment, as well as optimize and, in some sense, hide the complexity of massive deployment, will be an advantage when the system is actually mass deployed.

Massive deployment implies that the real-life usage of the system is high, and thus the probability of failure is increased.

Service provider's view

Scaling the number of users is the cornerstone of any service provider’s business plan; understandably, the more users the system is capable of handling, the more income those users will produce. Optimizing the cost of increasing the number of deployments is essential: these costs must not overwhelm the overall profit margin of the operation.

Assisted person's and Caregiver's view

The main value end users get from scalable services is twofold: cost and guarantee. The more users, the cheaper it gets to run such systems, so the price per user can be reduced. At the same time, the more users, the higher the guarantee that the most important features will be used and that the greatest individual benefit is realized.

Massification of service use will have to be accompanied by the possibility of personalization, so that end users are not left feeling they are just one more. Quality of service must not be compromised either: end users expect the same quality at a lower price when the system is scaled.

Government's view

Optimizing the costs and complexity of increasing the number of users is essential for making AAL services affordable for everyone. Lowering the effort barrier to mass deployment will also increase competition, another factor pressing down the operational cost per user.

Competitive prices per user not only directly help introduce AAL services to the population but, depending on the nature of the AAL services, will also greatly reduce other indirect costs to the state budget. For example, the general introduction of AAL services will reduce admissions to hospitals or nursing homes.

3.5.10. Service Transferability

Transferability is the ability to deploy a service in other set-ups or even other countries. Transferability has been widely achieved in the software industry, where the same product is slightly adapted (typically, the language used in the user interface) to operate in other countries. In the AAL domain, transferability is more delicate: services may not be 100% software (they may include human resources), general health/assistance workflows may differ, culture plays an important role in many AAL services, and many regions have slightly different laws that affect the service.

Transferability is a game changer in the AAL domain, finally freeing the market and helping this technology reach the people who need it.

Technology provider's view

A transferable service has to comply with certain technical, and otherwise logical, design rules (a sketch of the first rule follows this list):

Language independency, or internationalization, is the first and most important rule for enabling transfer, especially to other countries or regions where the population speaks another language.

Application design must be modular, to enable the customization of deployment-sensitive components. These include devices, services that involve human resources, and/or legacy systems not provided with the application.

Data may also require adaptation to other cultures; take a nutritional manager as an example: due to cultural differences, ingredients and dishes common in one region may not be available in another.

Legal issues, especially with regard to data protection laws, have to be taken into account.
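As referenced in the first rule above, language independency is well supported by standard tooling. The sketch below uses the standard Java ResourceBundle API: all user-visible strings are externalized to per-locale property files, so transferring the service to another country only requires adding a translation file. The bundle name Messages and the key medication.reminder are hypothetical.

    import java.util.Locale;
    import java.util.ResourceBundle;

    public class ReminderMessages {
        public static void main(String[] args) {
            // Loads Messages_nl.properties, Messages_es.properties, ...
            // depending on the locale of the deployment site.
            ResourceBundle texts =
                    ResourceBundle.getBundle("Messages", new Locale("nl"));
            // Application code never hard-codes the sentence itself.
            System.out.println(texts.getString("medication.reminder"));
        }
    }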

Transferability opens the market and enables expansion; at the same time, it brings competition, which requires technology providers to be innovative and offer better quality/price than competitors. Competition is not the only result: collaboration is also possible, since specialization in different aspects of the AAL problem enables technology providers to offer transferable sub-solutions, which other technology providers can use to integrate higher intelligence into their developments.

This new market view opens the gate to new business models, offering the possibility of moving from consultancy/custom-made solutions for service providers to white-label products, where requirements are driven by the technology providers and small customizations are then developed for each client.

Service provider's view

The ability to import transferable services from other technology providers, even from other countries, broadens the choice for service providers. This increase in supply will help keep costs down while increasing the general quality of the solution, without giving up on customization of the services. Service providers will be able to choose solutions that have been demonstrated to be successful at other deployment sites, obtaining a greater guarantee on the final product compared to contracting a technology provider to develop the solution from scratch or to create an imitation product.

At the same time, service providers may invest in new services to offer to their clients thanks to transferable AAL products.

Assisted person's and Caregiver's view

Truly open markets always benefit consumers: they get services of better quality at competitive prices. An open AAL market will help people with special needs obtain services that would otherwise be far too expensive for service providers to offer, thus increasing the general well-being of assisted persons and the satisfaction of caregivers.

Government's view

Open markets promote healthy competition, which will help improve the overall quality of the AAL industry. Successful technology providers will also increase national exports.

3.5.11. Integration with legacy systems

The “Integration with Legacy Systems” showcase underlines the platform’s capability to manage hardware, applications and services that predate current technology. Typically, the challenge is to keep the legacy application/service running while converting it to newer and more efficient code that makes use of new technology and programmer skills. A legacy system is not necessarily defined by age: legacy may refer to a lack of vendor support or to a system’s inability to meet organizational requirements; legacy conditions refer to a system’s difficulty (or inability) to be maintained, supported or improved. On an open platform this problem is mitigated, because the open platform is usually compatible with newly purchased systems. The introduction of new ICT systems in the health sector is often complex and requires a lot of time and resources, because this public sector domain is highly complex and has many legacy systems. The introduction of ICT innovation in healthcare therefore always requires the integration and coexistence of new components with legacy systems, and the integration should be kept at a high level, i.e. avoiding complex integrations and keeping it simple for the end user to exchange information between the new system and the legacy system. It is important to stress that the universAAL platform provides ICT instruments to make this integration easy.
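The typical technical pattern for this integration is an adapter (an exporter, in universAAL terms; see paragraph 3.4) that wraps the legacy interface and republishes its data in the platform's shared model, leaving the legacy code untouched. The sketch below is hypothetical: LegacyBloodPressureDevice, ContextBus and the property names are illustrative only.

    // Hypothetical adapter that exports a legacy device into the open
    // platform without modifying the legacy code itself.
    class LegacyBloodPressureDevice {          // existing, closed API
        int readSystolic()  { return 120; }
        int readDiastolic() { return 80; }
    }

    interface ContextBus {                     // platform-side broker
        void publish(String subjectUri, String property, Object value);
    }

    class BloodPressureExporter {
        private final LegacyBloodPressureDevice legacy;
        private final ContextBus bus;

        BloodPressureExporter(LegacyBloodPressureDevice legacy,
                              ContextBus bus) {
            this.legacy = legacy;
            this.bus = bus;
        }

        // Polls the legacy device and republishes its readings as
        // shared context events any platform application can consume.
        void export(String patientUri) {
            bus.publish(patientUri, "bp:systolic",  legacy.readSystolic());
            bus.publish(patientUri, "bp:diastolic", legacy.readDiastolic());
        }
    }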

Technology provider's view

Technology providers may be able to rapidly extend their legacy systems to new technologies (mobile, web and cloud applications). Open platforms (and open-source protocols) enable technology providers to easily, quickly and freely extend their portfolio of capabilities and services offered, while maintaining robust and reliable legacy systems.

Service provider's view

From the service provider’s perspective, this showcase is the most beneficial in terms of resource optimization. Every universAALized system already used by the service provider (e.g. billing system, health records, user management) can be integrated with little effort.

Assisted person's and Caregiver's view

The showcase is interesting for the assisted person, since it allows the reuse of older technologies and systems that may already be present in the homes of elderly users.

Government's view

From the government perspective, many benefits can be derived from the involvement of a greater number of users: the reuse of legacy systems well integrated with the platform has advantages both from the economic perspective and in terms of the number of end users brought into contact with new “open” technologies. Financial resources are likely to be saved.

3.5.12. Security & Privacy

The Security and Privacy showcase aims to demonstrate the platform’s capability to provide confidentiality, integrity and availability of data/resources to authorized stakeholders of the system only. It also covers the system’s ability to provide privacy mechanisms, allowing a task, transaction or transfer of private data only if authorized by the owner of the data.

Technology provider's view

Adding security and privacy characteristics has traditionally been a burden for technology providers. This is usually due to the difficulty they cause while testing and debugging the application; security is therefore generally added at the end of development, which makes it prone to errors in the deployment phase. A good platform helps incorporate security and privacy throughout the whole development lifecycle, especially from the beginning of development, without interfering with testing and debugging tasks.

Service provider's view

For service providers, it is important to have a robust security mechanism, so that clients who have paid for a service receive it, while clients (or malicious users) who have not paid are blocked from receiving it.

Privacy is one of the most important quality indicators for applications; providing security and privacy will raise the demand of clients and end users for the service provider’s products. In a platform that incorporates privacy and security by default, optimizing costs does not affect the performance of the security and privacy features.

Assisted person's and Caregiver's view

As an end user, the assisted person will benefit from the security and privacy features, since his/her private data will be secured through encryption protocols. Likewise, in order to protect her/his privacy, the end user will be the only one able to access certain data and applications, via the authorization features.

Government's view

From a government “legal” perspective, the showcase aligns with ethical rules and the enforcement of security and data privacy. Providing such features will definitely encourage governments to recommend the platform for future use.


3.5.13. Administration

It is possible to connect remotely (and of course locally) to platform nodes (or sets of nodes) and manage the status of the platform itself. This allows technology providers, service providers or even caregivers to execute maintenance operations such as updates, reboots, log inspection and the like. It provides a way to check on the status of the system and, if necessary, provide help and support when problems arise. With remote administration facilities, providers can manage many users’ platforms in a unified and accountable manner.

Technology provider's view

Remote administration, and on-site administration where necessary, gives technology providers the ability to make the system recover from certain errors and to inspect their cause, as well as to carry out many other technical maintenance tasks such as updates or statistics collection.

Service provider's view

Costs are reduced by being able to manage nodes remotely without sending personnel on site. Service provider administrators can check on the status of their applications, although in general this is the same as what technology provider administrators can do, and these tasks may simply be delegated to them.

Assisted person's and Caregiver's view

Given the technical nature of system administration, it is unlikely that any end user would directly benefit from access to these features; but in certain, less technical cases it could be useful to allow some caregivers remote access to basic administration features such as reboots. This could be helpful in deadlock situations or when service providers give limited support.

Government's view

Remote administration allows government stakeholders to participate in deployments as administrative peers, without technical intervention. In this case, as with the service provider’s advantages, costs are reduced thanks to remote and unified administrative capabilities.

Administrative subcontracting might create an interesting new market that nurtures new business models.

3.5.14. Enhanced Market communication and distribution

An important aspect of producing AAL solutions is communication with the market: knowing what is needed and what users want. Equally important are good and direct distribution channels, since AAL applications need to reach the appropriate people.

An open platform needs a powerful tool for market support, a tool that puts all stakeholders involved in AAL in direct contact. Such a tool can be thought of as a project management tool, where different teams are responsible for different aspects of the final product and are able to communicate and track all ongoing tasks such as feature requests, new milestones/releases, malfunctions, etc. Another view of this tool is as a digital store, concretely an application market, where users are able to buy, download and install AAL services, as well as search, browse and rate the AAL services most interesting to them.


Technology provider's view

This kind of market tool enables technology providers to learn the true needs of the market promptly, both from the users’ end and from service providers. It promotes the ability of technology providers to adapt and provide technologies that fit those needs, and this turns into profit, as early and/or high-quality producers gain an advantage in the market.

The differentiation of the roles of technology provider and service provider enables more independence and creativity for the technology provider, as well as distinct business models.

Service provider's view

Knowing what clients need is crucial for any business model. A market tool that enhances communication and distribution in the market is the perfect instrument to stay ahead of demand and always meet what is expected of service providers.

The ability to communicate with the market, rather than with particular technology providers, promotes competition and gives the service provider the option to choose the best-fitting solution from different providers.

For many service providers, distribution of the service is more complex than just installing the application at the deployment site. In many cases, the service requires the allocation of human resources (for example, doctors), who in some cases will need to go to the deployment site (for example, nurses). Traditional digital stores do not account for this; the market tool must therefore accommodate it.

The solution is to bundle software, hardware and human resources as a service offered to the final user. This enables the creation of personalized service bundles: for example, the same service can be offered with or without hardware; clients who require hardware will opt for the first, while users who already own the required hardware will opt for the second. Bundling of AAL services may go a level beyond this and offer different package options (for example, a package that contains services A and B versus another that contains A, B and C), helping service providers manage their offer by keeping it in line with their billing and organizational structures.

Assisted person's and Caregiver's view

Access to all available services is an essential part of what an end user needs; without this access, end users have no chance to receive, or even know about, AAL services that could make their lives easier. A digital store is a mechanism that fits this objective cleanly, but it must be easy to use and to browse for the needed services, and it must allow feedback to providers about users’ real needs.

The ability to get services in bundles simplifies things for end users; in addition, bundled services usually come with better deals.

A digital store enables users to get services from different providers. Not being restricted to a single provider gives more freedom and flexibility to choose not only between different providers, but also several simultaneously. For example, provider A offers a service that provider B does not, and vice versa; the user has the freedom to hire both services.

Some services are restricted by location (for example, housekeeping services) or depend on external services (like communication with local authorities), which means that not all service providers are able to expand to where the demand physically is. The market tool promotes the offering of successful services of this kind elsewhere: service providers will emerge where demand rises, since the hardware and software resources can be shared and transferred.

Government's view

An open marketplace for hiring AAL services is the best way to get the population assisted by AAL services.

Marketplaces also stimulate competition and offer grounds for new businesses to flourish.

A "central" repository of AAL services, is an advantage; as for regulations and required specifications can be monitored on AAL services so they comply with health standards.

4. OPEA conceptual model

This chapter provides the theoretical background for the OPEA framework, resulting in the OPEA conceptual model.

We take several steps in this chapter, and build on the value perspective and showcases already introduced in Chapter 3.

The first step is to present the ecosystem for AAL in the ReAAL project as a value network (paragraph 4.1). We relate this to Porter's Care Delivery Value Chain model [23]. As the stakeholders are of different natures, ranging from elderly persons to health or social service provider organisations, IT companies and software developers, the value proposition universAAL offers needs to be specified for each stakeholder using stakeholder-specific parameters. We further argue that a close look at the ecosystem in fact reveals three partial networks.

The second step is to assess the different dimensions on which the ecosystem can, and should be evaluated (paragraph 4.2). For this purpose, we use Kidholm's Model for the Assessment of Telemedicine (MAST) [24] as a basis. The model is extended with the showcase evaluation (paragraph 4.3).

These two steps result in the conceptual model for the ReAAL project: OPEA conceptual model (paragraph 4.4).

4.1. The Ecosystem as a Value Network

As is standard scientific practice, the assessment of the open platform technology universAAL, which aims to improve (health-related) quality of life for the elderly, is ultimately based on the impact on the (health) utility it provides to the end users. There is, however, a long and complex (causal) chain between the platform and the health impact, which involves many stakeholders in the ecosystem. As a first and necessary structure, the assessment therefore needs a model of the ecosystem and the stakeholders therein.

4.1.1. Value

The notion of value is a central concept, as the evaluation aims to assess whether the open platform ecosystem is a valuable contribution to Ambient Assisted Living. Hence, we first consider the definition of value. Echoing Plato, we might ask which properties, features, actions or accomplishments of any kind the ecosystem must have to be correctly classified as being of value. ReAAL takes a multidimensional approach to answering this question that goes beyond the economic definition of value in healthcare in terms of ‘health outcomes relative to costs’ (e.g. Porter [23]). And since AAL covers the quality-of-life domain more than the narrow “health” domain, it is even better to say that the value of AAL goes beyond ‘quality of life outcomes relative to costs’.

Following a common model from marketing psychology, we define the AAL service to be of value to the end user if the end user's perceived benefits outweigh the sacrifices [25]. When appropriate, these benefits and sacrifices are measured in monetary terms. The ecosystem assessment considers the value of the open platform in a similar manner for the other stakeholders in the ecosystem, and defines for each of them their participation in the ecosystem to be of value if their perceived benefits outweigh the perceived sacrifices. Again, these benefits and sacrifices can be partly expressed in monetary terms, but other criteria may come into play as well. Hence, value is by definition multidimensional for each of the stakeholders, and the dimensions may vary among the stakeholders. As a result, an AAL application which is assessed to be valuable by each of the stakeholders in one pilot context may fail to be so in another.
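
This utility approach admits a minimal formalisation (our own sketch, not part of MAST or the DoW; the symbols are introduced only to fix ideas): for a stakeholder $s$ with relevant value dimensions $D_s$, perceived benefits $B_{s,d}$ and sacrifices $S_{s,d}$ on dimension $d$, and stakeholder-specific weights $w_{s,d}$,

$$ V_s = \sum_{d \in D_s} w_{s,d}\,\bigl(B_{s,d} - S_{s,d}\bigr), \qquad \text{participation is of value when } V_s > 0. $$

Only some dimensions (and hence some terms) are monetary; the weights and the dimension set differ per stakeholder, which is exactly why the same application can score positively in one pilot context and negatively in another.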

We elaborate the value dimensions in more depth below.

4.1.2. The Network of Stakeholders involved in Creating and Consuming Value enabled by ReAAL

The AAL domain has been considered to define a network of stakeholders (see e.g. the proceedings of the AAL Forum in 2012, and the presentations held at the AAL Forum 2013).

First, we can organise the stakeholders by organisation. The AAL platform provider is, in the ReAAL project, the ‘organisation’ that developed and currently maintains the universAAL platform. We place organisation between inverted commas because, at this point in time, the business model for universAAL is still under negotiation between its founders.

The AAL application provider / AAL Technology provider is usually an SME that builds a new application, or new services around existing applications, using the open platform. An application can have hardware and software components. Looking deeper at this stakeholder group, we see that the individuals in the organisation can have the role of developer or owner of the application. The value of universAAL for “the AAL technology provider” is the sum of the values for these different subgroups in the organisation, and for the different types of technology provider (providing hardware, software, or both).

The applications are bought by (health or social) service providers, who integrate the application into their service. In those organisations we can distinguish the perspectives of management, IT professionals and caregivers. Informal carers form the fourth stakeholder group. We explicitly place them in the value network, either as end users of the applications or as the ones benefiting from their loved ones' use of an AAL application. The assisted persons themselves form the fifth stakeholder group; they are also end users of the value network or ecosystem. In some cases the informal carer is also an end user. Finally, for the purpose of assessment it is helpful to consider Societal stakeholders, which are not part of the ecosystem but relevant nevertheless. Under society we can place stakeholders such as (local) government, insurance companies, etcetera.

Table 5 below presents a first classification of stakeholders.

Table 5. Classification of stakeholders in the OPEA domain

AAL Platform provider
  Individuals: Platform developer; Service support staff; Management
  Roles: Producer (of platform); Deployer (of platform)

AAL Technology provider
  Individuals: AAL Developer; Service support staff; Management
  Roles: Producer (of application, device, etc.); Deployer (of application); User (of platform)

Health/Social Service Provider
  Individuals: Health service professional; IT professional; Management
  Roles: Producer (of service); Deployer (of service); User (of application); Payer (of application)

Informal carers
  Individuals: Informal carer; Volunteer
  Roles: User (of application & service); Payer (of service)

Assisted persons
  Individuals: Patient; Client
  Roles: User (of application & service); Payer (of service)

Society
  Individuals: Policy maker (government); Commissioner (insurance); Citizen
  Roles: Payer (of service)

At the same time, the AAL domain can also be seen as a network of non-human actors. Technologies, infrastructures, procedures, arrangements, etc. play an equally important role: universAAL itself, the devices, the software, a contract between a vendor and a service provider. These tangibles and intangibles are used and produced by the stakeholders to create value. In studying value networks it is common to focus on the stakeholders, but it is through the technologies that we see them adding value to the network.

We now formally define the open platform ecosystem as a value network consisting of a set of organisations, referred to as stakeholders, involved in delivering AAL services and hence experiencing benefits and sacrifices from this involvement. Consequently, health and social service provider organisations and technology provider organisations alike form part of the value network. In general, each of these organisations adds value to the AAL service, until it is ultimately delivered and the value is consumed. By definition, the person or persons consuming the value are the end users of the service. In general, there is a sole end user, the ‘patient’ or ‘client’, whose health and/or quality of life is affected. Occasionally, however, an informal carer can (also) be an end user. Typically, though, informal carers add value rather than consume it.

Users of AAL services are mostly chronically ill and/or elderly people, for whom the AAL services become part of their everyday life. This motivates viewing the value of an AAL service not only in terms of health outcomes (or quality of life, for that matter), but also in terms of the user experience (e.g. perceived ease of use). ReAAL therefore combines the health technology assessment discipline with that of service management. Both disciplines accommodate the ‘utility approach’, in which value is defined in terms of benefits and sacrifices (costs). From the health technology assessment perspective, for instance, benefits are health outcomes (effectiveness), and sacrifices are often expressed in monetary terms, costs [26]. The financial perspective also applies to each of the stakeholders as such. The stakeholders involved in delivering the AAL service typically enjoy financial benefits (revenues) from selling to other stakeholders and incur financial sacrifices (costs) from purchasing from other stakeholders. We will return to this financial perspective when discussing the value network externalities in the ecosystem.

4.1.3. Viewing the Ecosystem as three Partial Value Networks

It is important to note that the open platform itself is only one part of what universAAL aims to offer the AAL domain. As we described in the previous chapter, universAAL is an ecosystem in itself, with a marketplace (the uStore) where producers and buyers of AAL applications meet, a Developer Depot where developers can find all the materials they need, and an ‘organisation’ behind it that maintains the code and gives support. During the ReAAL project, not all parts of this ecosystem have been fully developed. This has consequences for the ability of the ReAAL project to evaluate ‘the universAAL ecosystem’ and to draw conclusions on the results of ReAAL.

To structure the assessment, the ReAAL ecosystem, or value network, is partitioned into three levels of value addition, which vary by purpose and nature (see also Figure 8):

- Platform Level: the level of stakeholders that increase the value of the platform through development and delivery.

- Application Level: the level of stakeholders that add value by developing, enhancing, and/or implementing applications which use the platform.

- Health/Social Service Level: the level of stakeholders who add value in the health service delivery process, which is supported by the AAL applications, which in turn run on the open platform.

Figure 8. Levels of the ecosystem

The Platform Level serves as the base level on which especially the Application Level builds. The Application Level is where the AAL applications are created; in principle these are delivered to the health or social care service organisations in the Health/Social Service Level, and potentially directly to end users. Normally, however, the stakeholders in the Health/Social Service Level facilitate the deployment of the AAL applications in the living environments of the elderly, integrated with their health services.

The three layers of the ecosystem align with the stakeholder groups in the previous section, as is demonstrated in Figure 9.

Figure 9. Ecosystem level versus Stakeholders

4.1.4. Financial Value and Network externalities of open platforms

The key selling points of open platforms have been described in paragraph 3.3. How do they relate to the value network? In other words, what is the (financial) value of an open platform at the three levels of the ecosystem? At first sight, the functionality an ‘open-platform-based application’ provides to end users can also be provided by other AAL applications, and hence the value for the end user need not be different. For the health service provider, a similar argument may apply: the integration of the technology in the health service processes need not be different because of the underlying open platform. At the Health/Social Service Level, the value effects may therefore be limited. We note already, however, that both for the end user and for the health or social service provider, a common platform may yield some benefits, as explained below under network externalities. These values also became more visible in the description of the ReAAL showcases in paragraph 3.5.

At the Application Level, cost reductions may occur, firstly, because repeatedly using a common platform, and integrating with other applications that use the same platform, may entail efficiency gains. Moreover, repeatedly using the same open platform avoids paying for licences and property rights of proprietary technology platforms. Some of these efficiency gains cannot, however, be observed in a first application development, or without application integration. We will reconsider this issue when discussing the design of the assessment of ReAAL.

Some effects on the benefits and costs at the Platform Level are beyond the scope of the assessment, as the platform development was completed prior to the start of ReAAL. We will not look back at the cost of developing universAAL, but the costs of exploiting and maintaining it are taken into account.

The open platform ecosystem defines a network which can be regarded as a value network, yet also as a technology network. As is well known, financial benefits and sacrifices in such networks are affected by so-called network externalities [27]. In terms of the open platform ecosystem, network externalities come into play when new AAL applications, new application development stakeholders, new health services, new health or social service provider organisations, and/or new end users increase already realised values.

At the Health/Social Service Level, end users and health or social service providers may enjoy network externalities when installing additional applications that can interact with already implemented ones: the value of the already implemented applications may then increase. The integrated applications may even impact the health service processes and yield further benefits (or costs). At the Application Level, externalities arise when an increasing number of stakeholders develop platform-based applications. Firstly, because this creates additional value-increasing opportunities for already existing applications. Secondly, because it makes universAAL platform development skills more attractive for individual software developers to acquire and utilise. Moreover, this should result in growth of the uStore in number of applications, number of vendors, and consequently the number of health or social service providers visiting the uStore in search of new applications. Hence, the benefits, costs, value proposition, business case, et cetera, cannot be assessed solely per AAL application or per organisation, but also need to be considered as a function of the network externalities. The showcase analysis is a good way to identify the values and network externalities. The uStore, an instrument for market communication and distribution that is unfortunately not yet operational, will be one of these showcases.

Table 6 below gives examples of network externalities per stakeholder.

Table 6. Network externalities

- Assisted person, informal carer, formal caregiver: network externalities enjoyed by using multiple open platform AAL applications.
- Health/Social service providers: network externalities enjoyed by increasing the end-user base and/or the collection of implemented open platform AAL applications.
- AAL application providers: network externalities resulting from developing multiple applications, in development speed, joint functionality, and service synergies.
- Platform provider: the more the open platform is used, the more valuable it becomes for developers to contribute or for commercial partners to provide services in relation to the platform.
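
To make the quadratic flavour of these externalities concrete, the sketch below models a purely hypothetical case for an assisted person: each installed open-platform application has a stand-alone value plus a fixed bonus for every other installed application it can interact with. All figures and the linear interaction bonus are illustrative assumptions, not ReAAL measurements.

```python
# Hypothetical illustration of network externalities: each open-platform
# application has a stand-alone value, plus a bonus for every other installed
# application it can interact with. All figures are invented for illustration.

def total_value(standalone_values, interaction_bonus):
    """Stand-alone value and externality of a set of interoperable applications."""
    n = len(standalone_values)
    base = sum(standalone_values)
    externality = interaction_bonus * n * (n - 1)  # each app interacts with n - 1 others
    return base, externality

for n in range(1, 5):
    base, ext = total_value([10.0] * n, interaction_bonus=1.0)
    print(f"{n} app(s): stand-alone {base:.0f}, externality {ext:.0f}, total {base + ext:.0f}")
```

With these toy numbers, a fourth application adds 10 in stand-alone value but also 6 in extra externality, illustrating why the value of the already implemented applications increases as the network grows.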

4.2. Application of a general assessment model to an AAL ecosystem

Considering the open platform ecosystem as a value network helps to identify relevant levels of stakeholders and to frame value in terms of benefits and sacrifices. However, the benefits and sacrifices for each of the stakeholder organisations and individuals involved have not yet been structured or classified. In this section and the next, we present an assessment framework which is general enough to apply to each of the pilot case studies and to allow for analysis across the cases. This section deals with developing the assessment dimensions based on the Model for the Assessment of Telemedicine (MAST), introduced below.

A literature search revealed that no AAL-specific evaluation model currently exists. Evaluation in AAL relies on evaluation models from telemedicine or ICT. The Model for the Assessment of Telemedicine (MAST), developed by Kidholm et al. [24], has been selected as a reference framework, as it is the most suitable basis for ReAAL among the existing models. The model was developed in Europe and is currently proposed as a methodology for evaluation in the Active & Healthy Ageing domain [13].

Figure 10. MAST model

MAST (see Figure 10) consists of a base in which the health problem and characteristics of the application are described, followed by six domains of assessment: (1) safety; (2) clinical effectiveness; (3) patient perspectives; (4) economic aspects; (5) organisational aspects; and (6) socio-cultural, ethical and legal aspects. After assessment of these domains, a transferability assessment should take place.

A closer look at MAST reveals that it is best suited to situations where the telemedicine technology under assessment is mature and ready for implementation without further technology development. This assumption is not uncommon in HTA studies. For ReAAL, however, the applications are the first applications developed on the novel open universAAL platform. The benefits and sacrifices resulting from the open platform for developers, health or social service provider organisations and end users all need to be included in this early development stage. The technical challenges occurring in this immature stage need to be identified and taken into account in the assessment. In contrast to the MAST assumptions and model, the ReAAL assessment needs to explicitly include the Platform Level and Application Level stakeholders, while the network externalities are at best in an early stage of development.

Another important difference stems from the fact that telemedicine is a clinical domain, aiming at improving or sustaining the health status of patients at home. AAL technologies are not necessarily of a medical nature, nor are the end users necessarily patients. The safety-first adage is hence less appropriate.

The MAST model stresses the importance of assessing the transferability (generalisability) of the results for each domain: would the outcomes be different if the application were used in another country? Would the costs be different if the number of users increased? These assessments should also be made in the ReAAL project. It is important to stress that the conclusions that need generalisation are about deploying AAL solutions on an open platform. In fact, demonstrating the costs and effects of deploying universAALised applications in another context is part of the ReAAL project itself: all pilots have to choose at least one application from another pilot, adapt it to their local needs and implement it with a limited number of users. This transferability is very important for the ReAAL project, because it demonstrates key selling points of open platforms. As a consequence, a transferability assessment is not a separate step in the project, but part of its design. The ReAAL project, being a technical project on open platforms, needs to show in the end that it is technically feasible, with relatively little effort, to implement the AAL solutions on a large scale or in other contexts and countries. More specifically, this is part of the showcase assessment. Showcases are descriptions of the key selling points of open platforms, from the perspective of all stakeholders. The showcases deemed relevant for the ReAAL project were already described in paragraph 3.5. These showcases will be implemented and validated, demonstrating value for different stakeholders. Whether or not the same user outcomes will be achieved is of secondary importance for ReAAL, due to its technical focus.

We decided to rename the MAST topic ‘transferability’ to ‘Showcases’ because the original name would be confusing: in technical vocabulary it has a different meaning than in research methodology, and, more importantly, one of the showcases is itself called transferability.

Table 7 below shows the original MAST domains and the adaptations to AAL technology evaluation proposed in OPEA, where the categories are re-ordered to reflect the OPEA logic.

Table 7. Comparison of MAST and proposed assessment domains for AAL ecosystem assessment

Background for the assessment
- MAST: Health problem and characteristics of the application
- AAL ecosystem: Assistance problem and characteristics of the application and platform

Assessment domains (MAST model / AAL ecosystem)
- X / Technical aspects
- Patient perspectives / User perceptions
- Clinical effectiveness / Outcomes
- Economic aspects / Economic aspects
- Organisational aspects / Organisational aspects
- Socio-cultural, ethical and legal aspects / Contextual aspects
- Safety / X (incorporated in the Technical and Outcomes domains)
- Transferability (cross-border, scalability, generalizability) / Showcases

4.2.1. Background for the assessment

Assistance problem and characteristics of the platform & application

This domain merely presents a description of the users, a short description of the (functional) problem that the AAL application is designed to address, and the functionality provided by the application. The description includes clarifying how the use of the applications is integrated with the health service provisioning by professionals, informal carers, and/or the end-user. It may also address interfaces with other universAAL applications and corresponding joint functionalities. Further, it describes the way the application makes use of the universAAL platform, and which showcase it demonstrates.

4.2.2. Assessment domains

Technical aspects

Since universAAL has never been tested in real life and on a large scale, there are many technical issues to include in the evaluation. Therefore, this assessment domain has been added to the MAST model. All universAALised applications will need thorough testing, and the results of these tests should be included in the evaluation. These tests, as well as evaluation during deployment, are needed to assess the quality of the platform and the quality of the universAALised applications. This includes, for example, technical reliability (which is part of the Safety domain in MAST). The technical aspects need, where possible, quantifiable and objective measures. Complemented with the experience of the technical stakeholders (see the domain User perceptions), these define the quality of the platform and applications.

The technical aspects can be measured with a set of indicators.

User Perceptions

This domain gives insight into users’ subjective experiences. In the MAST model only one user group is relevant: the patient. For ReAAL, we consider the perceptions of the stakeholders that use the universAAL platform equally important. We argue that technology developers are also users of the uAAL platform, because through the Developer Depot they have access to the code and standards and use them to develop and adapt services. They are the ones to judge the quality of universAAL. To include all these different users and keep the naming consistent, we label the domain “User perceptions” rather than ‘patient perspectives’.

As in MAST, important themes are user acceptance, actual use and satisfaction, but we are also interested in users' expectations at the start. Stratified by the different stakeholders, this ReAAL-OPEA domain addresses descriptive operational measurements regarding, for instance, use of the application in terms of frequency, purpose and duration (assisted persons, caregivers, professionals), or use of the open platform environment by the developers (frequency, productivity, etc.). Information on these topics will provide knowledge about how the users appreciate the uAAL platform, the AAL technologies and services, and on what aspects improvements are needed.

We complement this with the experienced value for each stakeholder, because that fits the value network approach. For example, users experience value of a service if it fits their needs. Value is further elaborated in the Economic aspects domain.

The user perceptions can be measured with a set of indicators.

Outcomes

This domain focuses on assessing the effectiveness of AAL/uAAL via a set of quantifiable measures, measured at the level of the user. Because AAL has an impact on the self-management of independently living people, it is relevant to look at the health and care consumption of the assisted persons. A notable difference between MAST and ReAAL, however, is that while MAST focuses on patients, ReAAL includes assisted persons who are not necessarily patients. For this reason, ReAAL will collect information on the assisted persons’ health problems as well as on factors hindering independent living. This means that, in contrast to MAST’s focus on clinical effectiveness (impact on mortality and morbidity), ReAAL focuses on independent living, quality of life, wellbeing and comfort. It also means that while MAST proposes specific effectiveness indicators for each chronic disease [28, pp. 26-29], ReAAL will use more generic measures to ensure comparability between pilots and applications.

Negative outcomes should also be considered. In the MAST model, (clinical) safety is a separate assessment domain, but we argue that it makes sense to assess safety as part of the outcomes. Therefore the occurrence of adverse events and side effects should be included in the evaluation.

The outcomes can be measured with a set of indicators.

Economic aspects

This domain relates to assessing the impact of uAAL on the costs and benefits of AAL for society, the participating organisations and the users. Such analyses take into account that the pilots run in different countries with different financial schemes. Further, many of the costs are typically upfront, while network externalities and revenues typically accrue in the medium to long term, which places restrictions on the assessment methods and results. When measuring the costs, activity-based costing should be used, to ensure that comparisons between pilots can be made. An interview with each stakeholder group will be used to select the types of cost that are relevant.
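
As an illustration of what activity-based costing means here, the toy calculation below builds up the cost of a hypothetical AAL service from its activities; all activities, volumes and rates are invented for illustration, not pilot data:

```python
# Toy activity-based costing example: the cost of delivering an AAL service is
# built up from the activities performed, each valued at volume x unit cost.
# Activities, volumes and rates below are invented for illustration only.
activities = [
    # (activity, volume, unit, cost per unit in EUR)
    ("install application at home", 100, "installations", 80.0),
    ("helpdesk support",            250, "calls",         12.0),
    ("nurse follow-up visits",       60, "visits",        45.0),
]

total = 0.0
for name, volume, unit, rate in activities:
    cost = volume * rate
    total += cost
    print(f"{name}: {volume} {unit} x {rate:.2f} EUR = {cost:.2f} EUR")

print(f"total service cost: {total:.2f} EUR")
```

Because every pilot records the same activity-level building blocks, totals computed this way remain comparable across pilots even when national financial schemes differ.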

In general, two types of economic evaluation are distinguished in MAST: the socioeconomic benefit (from a societal perspective) and the business case (for the service provider). The MAST model does not provide a specific methodology for performing these evaluations, and the MAST user is free to choose an approach that matches their goals [28, p. 32].

In line with ReAAL’s value network approach, we apply different methods for different stakeholders. For example, to capture the costs for the platform providers and technology developers, we use a specific methodology developed to assess the costs of providing open technology [29]. Given the goal of the ReAAL project, which is to demonstrate the socioeconomic benefit of open platforms, it is also relevant to consider the societal financial benefits and sacrifices. Thus, the focus of the ReAAL evaluation is mostly on the socioeconomic benefit. However, the pilots can calculate their own business case. This work is mainly relevant for continuation after the ReAAL project, and is as such part of WP6 (Dissemination & Networking).

The economic aspects can be measured with a set of indicators.

Organisational aspects

Whereas the “outcomes” and “user perceptions” domains capture the experiences and actions of the individual user, this domain has a similar focus, albeit at the organisational level.

This MAST domain is thus appropriate for the organisations in the ReAAL value network, including service and technology providers. Important themes are organizational fit, implementation and strategic position.

We also include in this domain the impact that the ReAAL project as a whole has on the AAL field. This cannot be measured at the level of an individual organisation or stakeholder group, but only at project level. Put differently, we consider ReAAL as an organisation in itself, with certain goals.

The organizational aspects can be measured with a set of indicators.

Contextual aspects

The final assessment domain was originally called ‘Socio-cultural, ethical and legal aspects’ in MAST. In MAST, the socio-cultural domain refers to the impact of telemedicine on the life of patients, including social inclusion [28, pp. 39-42]. For AAL, the life of assisted persons already includes the broad domain of health, wellbeing and comfort. This domain may therefore overlap with the Outcomes domain, because some of the piloted applications aim at improving autonomy or social inclusion. However, we will take into account possible differences in adoption (and accessibility) between different groups of assisted persons.

The ethical part of MAST relates to the way the study is performed (e.g. informed consent of patients, exit strategy). In ReAAL this is covered by the work of the ethical board and the manual that was implemented. Whether or not these ethical requirements were met will be part of the evaluation. Furthermore, we also address in the evaluation any ethical concerns users or care providers might have with respect to AAL technology.

Legal aspects could relate to the legal considerations when offering telemedicine (e.g. accreditation of the formal caregiver or the way personal/medical data is protected). For AAL this is also relevant, since sensitive information may be shared and stored. All pilots have to perform a privacy assessment. Moreover, there is the issue of responsibility and accountability for the platform.

Another legal aspect relevant in the AAL domain is procurement. The different procurement policies of the countries participating in ReAAL might have an effect on the choices pilots make and the flexibility they have.

Related to this, another important contextual factor concerns financial arrangements and reimbursement policies. The success of a pilot’s implementation of AAL solutions on an open platform might depend on how it will be paid for in a situation without EU funding.

The contextual aspects are very important to assess, both to be able to explain local differences and to draw conclusions at the societal level. However, these aspects cannot easily be ‘measured’ with indicators; we have to rely more on descriptions and lessons learned.

4.3. Showcase assessment

In addition to the MAST model, ReAAL has an extra assessment: the assessment of the showcases. The showcases are based on the features of universAAL and of open platforms in general, and demonstrate the value open platforms bring to AAL applications. The fourteen showcases of universAAL have been described and analysed in paragraph 3.5.

The assessment comprises:
- a description of the showcase (what it is);
- an analysis of the potential values for stakeholders (technology provider, service provider, user, government);
- a (technical) demonstration (showing that an application has indeed implemented the showcase);
- a valuation of the demonstration (are the potential values really acknowledged by the stakeholders?).
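
These four elements can be thought of as one record per showcase. The sketch below shows such a record for illustration only; the showcase name and all field values are invented placeholders, not actual ReAAL assessment content.

```python
# Sketch of one showcase assessment record, mirroring the four elements above.
# The showcase name and all field values are invented placeholders.
showcase_assessment = {
    "showcase": "Plug and Play",
    "description": "what the showcase is",
    "potential_values": {
        "technology provider": "less integration effort per device",
        "service provider": "faster roll-out of new hardware",
        "user": "new devices work without a technician visit",
        "government": "lower cost of scaling up AAL provision",
    },
    "demonstration": "an application shows a sensor being added at runtime",
    "valuation": "do stakeholders acknowledge the potential values?",
}

for element, content in showcase_assessment.items():
    print(f"{element}: {content}")
```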

The showcase assessment thus takes both the micro and the macro context into account. The outcomes of the assessment are therefore also relevant for Stage II of the evaluation.

4.4. OPEA conceptual model

Combining the adjusted MAST domains with the aforementioned stakeholders in and around the value network, we derive the open platform ecosystem assessment framework presented in Figure 11.

Figure 11. OPEA Conceptual model

5. OPEA indicator model

The conceptual model was used to define the relevant indicators for the project; the OPEA indicator model served as a practical tool for this work.

5.1. OPEA indicator model

The indicator model is a three-axis model, shown in Figure 12. We analysed which assessment domains were relevant for which stakeholder in the value network, and at which level of the ecosystem (the platform level, application level or service level). As mentioned in Chapter 2, we used previous ReAAL deliverables, literature studies, analyses of readily available indicator sets, and discussions with experts to define and describe the indicators. Where metrics or validated instruments were already available, we chose those, provided they were applicable to the ReAAL aims.

Figure 12. OPEA Indicator model
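
A minimal sketch of how the three-axis model can serve as a checklist during indicator definition (the axis values are taken from this chapter; the single 'relevant' cell shown is an invented example, not the actual ReAAL selection):

```python
from itertools import product

# The three axes of the OPEA indicator model (axis values taken from this chapter).
domains = ["Technical aspects", "User perceptions", "Outcomes",
           "Economic aspects", "Organizational aspects", "Contextual aspects"]
stakeholders = ["Platform provider", "Application provider",
                "Health/Social Service provider", "Informal carer",
                "Assisted person", "Society"]
levels = ["Platform level", "Application level", "Service level"]

# Invented example entry, shown only to illustrate one cell of the cube;
# the real selection of relevant cells was made by the Evaluation Team.
relevant = {("Outcomes", "Assisted person", "Service level")}

for cell in product(domains, stakeholders, levels):
    if cell in relevant:
        print("define indicators for:", cell)
```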

5.2. Theoretical models applied

Two readily available models helped us define relevant indicators in the domains Technical aspects and User perceptions: the Information System Success model of DeLone and McLean [29] and the Telecare Acceptance and Use Model of Sponselee [33].

5.2.1. Information System Success

DeLone and McLean’s model of Information System Success (ISS) will be used in the ReAAL evaluation [28]. This is a framework and model for measuring the complex dependent variable in information system research. The model identifies the following constructs (dimensions) to be assessed (Figure 13): System Quality, Information Quality, Service Quality, Use, User Satisfaction and Net Benefits. These constructs are relevant for any information system, and therefore also applicable to the ReAAL case. We can, and will, use this model at the level of the platform (universAAL) and at the level of the (universAALised) applications.

Figure 13. DeLone and McLean model for information system success (ISS), updated version 2003

The constructs in the ISS model have causal relationships, as argued by its developers. System quality, information quality and service quality have an impact both on the intention to use (and actual use) and on user satisfaction. Use and user satisfaction lead to the experience of a net benefit. This net benefit, in turn, has an impact on the intention to use and on user satisfaction. In addition to this causality, the authors also argue that each construct is in itself a determinant of success.
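
As a reading aid, the causal paths just described can be listed explicitly (our own rendering of the 2003 model as summarised here; an edge list only, with no claim about effect sizes):

```python
# Causal paths in the updated (2003) DeLone & McLean ISS model, as described above.
# Each pair (a, b) reads "a influences b".
iss_paths = [
    ("System quality", "Intention to use / Use"),
    ("System quality", "User satisfaction"),
    ("Information quality", "Intention to use / Use"),
    ("Information quality", "User satisfaction"),
    ("Service quality", "Intention to use / Use"),
    ("Service quality", "User satisfaction"),
    ("Intention to use / Use", "Net benefits"),
    ("User satisfaction", "Net benefits"),
    ("Net benefits", "Intention to use / Use"),  # feedback loop
    ("Net benefits", "User satisfaction"),       # feedback loop
]

for cause, effect in iss_paths:
    print(f"{cause} -> {effect}")
```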

System quality

With System quality, DeLone and McLean refer to the desired characteristics of an information system, as valued by its users. In their paper, five metrics are presented as indicators for this construct: usability, availability, reliability, adaptability and response time.

These indicators should be measured through testing. The open platform is itself a system, and its technical characteristics have to be evaluated, particularly the extent to which the key selling points of open platforms are realised. In the ReAAL evaluation this is demonstrated through showcases, which show the usage of the advantageous features of the platform and their added value. In the case of AAL applications as systems, indicators related to reliability, maturity and efficiency are particularly important.

Information quality

This domain refers to the content of the system, that is, the information it provides. The authors refer particularly to the e-commerce field, where information quality is proposed to be assessed by the following characteristics: completeness, ease of understanding, personalisation, relevance, and security. When analysing development platforms, information mainly relates to the technical documentation that accompanies the code of the platform; when applications and services for final users are under analysis, information relates to the content managed by these applications/services.

Service quality

This construct was added in the updated ISS model because of the emerging e-commerce field, where not only the quality of the product but also the quality of the support given by the service provider is crucial. Assurance, empathy and responsiveness are the proposed metrics for this construct. In the case of open platforms, service quality refers to the support given by the platform's provider, or its open source community, to the developers using the platform. Support means answering questions, giving classes and tutorials, and fixing bugs. In the case of AAL applications and services, service quality is judged by their users.

Intention to use & Use

According to DeLone and McLean, intention and actual behaviour are closely related. Here the following metrics are proposed: nature of use, frequency of use, use patterns, and an outcome measure: the number of transactions executed. When analysing the use of open platforms, showcases are a good way to show to what extent the platform is being exploited. In addition, and for the case of final AAL applications, questionnaires are the preferred way to assess the (intention to) use.

User satisfaction

User satisfaction can be measured by looking at the behaviour of the user (return visits) and at the experience of the user, for which a survey is needed. The survey should cover the entire experience cycle. This is applicable to end users as well as to "intermediate" stakeholders such as developers and managers of a company.

Net benefits

Net benefits are the most important success measures, as they capture the balance of positive and negative impacts on all stakeholders. DeLone and McLean distinguish metrics on cost (savings), time savings, market expansion, and incremental additional sales. Benefits can be measured at all levels, from development efficiency to improvements in mortality rates or quality of life. Studying the relationship between these apparently distant dimensions is, however, a big challenge.

5.2.2. Telecare Acceptance and Use Model

In the domain of information technology implementation, several models exist that explain or predict acceptance and use of technology. The most classical model is TAM, the Technology Acceptance Model by Fred Davis [30]. TAM was influenced by Fishbein and Ajzen's theory of reasoned action [31]. TAM has evolved into another model, UTAUT: the Unified Theory of Acceptance and Use of Technology [32]. Being unified models developed for a labour context (people using information systems in their job), they have been criticised by researchers in the eHealth and AAL fields.

Sponselee (2013) developed an alternative acceptance and use model that takes the context of eHealth and smart homes into account: the Telecare Acceptance and Use Model (TAUM) [33]. She applied the model (see Figure 14) in her own PhD research in the Netherlands. Unfortunately, we could not find any publications about this model other than the thesis itself.

Figure 14. Telecare Acceptance and Use Model (TAUM), Sponselee 2013

The TAUM model proposes a ‘needs-based approach’. Personal needs are expected to determine the added value of the technology, its usefulness and its effectiveness ([33], p. 118). The needs-based approach is rooted in a value perspective on smart home technology (as is also central in the OPEA framework) rather than in a design-domain basis.

Needs and dependence

The needs include, for example, living independently, and the current level of dependency should be taken into account. Needs and dependence are the drivers of technology usage by elderly people. In addition to the level of dependence, the level of support is also relevant.

Benefits

Benefits refer to the beneficial outcome effects of technology usage (perceived usefulness), and the subjectively perceived benefits of the behaviour per se.

As every benefit comes with downsides, Sponselee argues, the variable ‘benefits’ should not only consist of objectively measurable effects, but also of the subjective appreciation and valuation of the technology and its usage. It is important to note that perceived benefits can change over time, and non-use might follow. That is why Sponselee draws two arrows between benefits and use.

Sponselee breaks Benefits down into the following indicators: perceived usefulness, extrinsic motivation, outcome expectations, performance expectancy, and long-term consequences.

Moderator variables and stakeholders

The process through which technology usage develops is expected to be influenced by multiple moderator variables, for which stakeholders can be held responsible.

Designers of technology are responsible for accessibility. People can be blocked from accessing new technology by physical, conceptual, economic, cultural, or social barriers ([33], p. 122). The most important accessibility items are investments, such as costs for the individual, and perceived ease of use (complexity, user-friendliness).

Caregivers are responsible for the facilitating conditions: training, documentation, user support. When the organization of support and technology use is inaccessible to the end-users, they may become insecure and demotivated to use the technology.

Last, the care receivers have different characteristics that influence acceptance and use. Previous research into technology acceptance concluded that gender, experience with technology, computer experience, and self-efficacy are the most important variables.

The TAUM model is an interesting model to use from the perspective of the assisted person. It will therefore be applied at the service level of the evaluation.

5.2.3. Role of the models

These two models differ from MAST in several ways. A first observation is that they do not cover everything, but have a specific focus, either on the technology (quality of the technology) or on the assisted person’s acceptance. If we used only these models, we could not answer our research questions, because we would cover neither all dimensions nor all stakeholders.

The ISS and TAUM models are based on assumptions of causality, which makes them interesting to add to the ‘neutral’ MAST model. We can use the models to test some hypotheses in the project; for example, that developers who are more positive about system quality also report the highest net benefits, or that younger assisted persons experience higher value than older ones.

The final reason for using them is that they propose ways to measure the constructs. The ISS model does not come with its own instruments, but the developers suggest some readily available instruments such as the UTAUT questionnaire and SERVQUAL. When selecting the indicators for ReAAL, we looked at these instruments, but some were too extensive to use (in their entirety).

In her research, Sponselee used a questionnaire covering all constructs of her model. For the indicators selected for the ReAAL project, we assessed whether we could use her questions or other available questions/metrics, or whether we needed to construct them ourselves.

In the end, the indicators and measuring tools (questionnaires) are inspired by constructs of the ISS and TAUM models, but they do not fully cover each construct.

5.3. Indicators for pilot and showcase evaluation

In the next paragraph the indicators are described in depth. For the purpose of clarity, the indicators are defined on two levels. The highest level is that of the indicator groups; these groups show the evaluation topics we chose to include in the evaluation. Each indicator group consists of one or more subindicators. In ‘Appendix A. Detailed indicator list’ these subindicators are operationalised, i.e. we describe how they should be measured (for example, with which instrument).

The remainder of this chapter is structured as follows: for each assessment domain, the main indicator groups and involved stakeholders are listed in an overview table.

We chose names for the indicator groups and subindicators that are easy to understand. The way an indicator is measured might be more complex and technical; this is only relevant for those interested in indicator construction.

Each subindicator has a code. The leading characters refer to the assessment domain, and the digit refers to the indicator group (see Table 8).

Table 8. Indicator codes

Code Assessment domain

BA Background

TA Technical aspects

UP User perceptions

OC Outcomes

EA Economic aspects

OA Organizational aspects

CA Contextual aspects

SHOW Showcases
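
To illustrate the coding scheme, a small helper (ours, purely for illustration; not part of the OPEA framework) can split a code such as UP_3a into its parts:

```python
import re

# Split an indicator code like "UP_3a" into its parts: assessment domain code,
# indicator group number, and sub-indicator letter. Sketch only; the pattern
# follows the scheme described above (domain code, underscore, digit(s), letter).
def parse_indicator_code(code):
    match = re.fullmatch(r"([A-Z]+)_(\d+)([a-z])", code)
    if not match:
        raise ValueError(f"not a valid indicator code: {code}")
    domain, group, sub = match.groups()
    return {"domain": domain, "group": int(group), "subindicator": sub}

print(parse_indicator_code("UP_3a"))
# {'domain': 'UP', 'group': 3, 'subindicator': 'a'}
```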

This is followed by a specification of the indicator groups into subindicators. A detailed description of the subindicators can be found in ‘Appendix A. Detailed indicator list’.

5.3.1. Indicator groups per assessment domain

In Table 9 an overview is provided of all indicator groups per assessment domain. Each assessment domain has at most four indicator groups.

Table 9. Indicator groups per assessment domain

Background for the assessment
- Assistance problem & characteristics of the platform and application: Description of user group; Description of application and service; Description of platform; Description of universAALisation

Assessment domains
- Technical aspects: Quality of the platform; Quality of the application
- User perceptions: User acceptance; Use; Satisfaction; Value
- Outcomes: Health & health consumption; Quality of life; Independent living; Adverse events & side effects
- Economic aspects: Cost; Revenues; Willingness to pay; Market value
- Organizational aspects: Organizational fit; Implementation; Impact on core process; Strategic position
- Contextual aspects: Ethical aspects; Legal aspects; Sociocultural aspects

Showcases
- Showcases: Description of showcase; Demonstration of showcase; Value of showcase

5.3.2. Indicator groups per stakeholder

Another way to show the indicators is to map them according to the stakeholders involved. The coloured cells in Table 10 indicate that one or more indicators within the indicator group are measured at the level of that specific stakeholder.

Table 10. Indicator groups per stakeholder

[Matrix: the indicator groups of Table 9 as rows, against the stakeholder columns Platform provider, Application provider, Health/Social Service provider, Informal Carer, Assisted person and Society; the cell shading marking which groups are measured for which stakeholder is not reproducible in text.]

5.4. Indicators description per assessment domain

5.4.1. Background of the assessment

Table 11. Indicators Background for the assessment

Domain: Assistance problem & characteristics of the platform and application
Involved stakeholders: Platform provider, application provider, Health/Social Service provider, Informal carer, Assisted person
Indicator groups: BA_1 Description of user group; BA_2 Description of application and service; BA_3 Description of platform; BA_4 Description of universAALisation

Indicators:
1. Description of user group
   BA_1a Assisted person
   BA_1b Informal carer
   BA_1c Formal caregiver
   BA_1d Application developer
2. Description of application and service
   BA_2a Description of application
   BA_2b Description of service
3. Description of platform
   BA_3a Components of universAAL
4. Description of universAALisation
   BA_4a Pilot set-up
   BA_4b Ontology

5.4.2. Technical aspects

Table 12. Indicators Technical aspects

Domain: Technical aspects
Involved stakeholders: Platform provider, application provider, Health/Social Service provider, Formal carer, Informal carer, Assisted person
Indicator groups: TA_1 Quality of the platform; TA_2 Quality of the application

Indicators:
1. Quality of the platform
   TA_1a Platform test results
   TA_1b Number of open bug reports
   TA_1c Platform feature requests
2. Quality of the application
   TA_2a universAALisation of the application(s)
   TA_2b Application test results

5.4.3. User perceptions

Table 13. Indicators User perceptions

Domain: User perceptions
Involved stakeholders: Health/care/social Service provider, informal carer, assisted person, application provider
Indicator groups: UP_1 User acceptance; UP_2 Use; UP_3 Satisfaction; UP_4 Value

Indicators:
1. User acceptance
   UP_1a Reliability of the application
   UP_1b Usability of uAAL
   UP_1c Usability of application
   UP_1d Usefulness of uAAL
   UP_1e Usefulness of application
   UP_1f Role of social environment
2. Use
   UP_2a Use of uAAL features
   UP_2b Use of uAAL components
   UP_2c Use of the application
3. Satisfaction
   UP_3a Satisfaction with the uAAL platform
   UP_3b Satisfaction with (formal/informal) care
   UP_3c Satisfaction with the application/service
   UP_3d Information quality of uAAL documentation
   UP_3e Information quality at application/service level
   UP_3f Service quality of uAAL
   UP_3g Service quality at application/service level
4. Value
   UP_4a Experienced value of uAAL
   UP_4b Fit with needs of assisted person
   UP_4c Experienced value of the service
   UP_4d Experienced value of interoperability for end user

5.4.4. Outcomes

Table 14. Indicators Outcomes

Domain: Outcomes
Involved stakeholders: Health/care/social Service provider, informal carer, assisted person
Indicator groups: OC_1 Health & health consumption; OC_2 Quality of life; OC_3 Independent living; OC_4 Adverse events & side effects; OC_5 Service provisioning; OC_6 AAL development

Indicators:
1. Health & health consumption
   OC_1a Health of the assisted person
   OC_1b Health consumption of the assisted person
2. Quality of life
   OC_2a Health-related quality of life of assisted person
   OC_2b Wellbeing-related quality of life of assisted person
   OC_2c Quality of life of informal carer
3. Independent living
   OC_3a Independent living of the assisted person
4. Adverse events and side effects
   OC_4a Adverse events using the uAALized application
   OC_4b Falls

5.4.5. Economic aspects

Table 15. Indicators Economic aspects

Domain: Economic aspects
Involved stakeholders: Platform provider, application provider, Health/Social Service provider, informal carer, assisted person
Indicator groups: EA_1 Costs; EA_2 Revenues; EA_3 Willingness to pay; EA_4 Market value

Indicators:
1. Cost
   EA_1a Cost of uAAL platform deployment
   EA_1b Cost of universAALisation
   EA_1c Cost of deployment and operation of AAL
   EA_1d Cost of service
   EA_1e Cost of importing an application from another pilot
2. Revenues
   EA_2a Revenues for platform provider
   EA_2b Revenues for application provider
   EA_2c Revenues for service provider
3. Willingness to pay
   EA_3a Willingness to pay for uAAL platform
   EA_3b Willingness to pay for universAALized applications
4. Market value
   EA_4a Market value of universAALized application

5.4.6. Organizational aspects

Table 16. Indicators Organizational aspects

Domain: Organizational aspects (OA)
Involved stakeholders: Health/care/social Service provider, application provider, platform provider
Indicator groups: OA_1 Organizational fit; OA_2 Implementation; OA_3 Impact on core process; OA_4 Strategic position

Indicators:
1. Organizational fit
   OA_1a Fit with work processes of service provider
   OA_1b Fit with legacy systems of application provider
   OA_1c Fit with legacy systems of service provider
   OA_1d Innovation climate of application provider
   OA_1e Innovation climate of service provider
2. Implementation
   OA_2a Implementation of universAALised applications and services
3. Impact on core process
   OA_3a Productivity in development process
   OA_3b Efficiency in deployment process
   OA_3c Quantity of care/service
   OA_3d Quality of care/service
4. Strategic position
   OA_4a Strategic position of platform provider
   OA_4b Strategic position of application provider
   OA_4c Strategic position of service provider

5.4.7. Contextual aspects

Table 17. Indicators Contextual aspects

Domain: Contextual aspects
Involved stakeholders: Health/care/social Service provider
Indicator groups: CA_1 Sociocultural aspects; CA_2 Legal aspects; CA_3 Ethical aspects

Indicators:
1. Sociocultural aspects
   CA_1a Accessibility
   CA_1b Policy for inclusion
2. Legal aspects
   CA_2a Procurement process
   CA_2b Data protection
3. Ethical aspects
   CA_3a Ethical concerns of users
   CA_3b Ethical approval of pilot

5.4.8. Showcases

Table 18. Indicators Showcases

Domain: Showcases
Involved stakeholders: Application developer, application provider, platform provider, formal caregiver, assisted person, informal carer
Indicator groups: SHOW_1 Description of showcase; SHOW_2 Demonstration of showcase; SHOW_3 Value of showcase

Indicators:
1. Description of showcase
   SHOW_1a Cross-application resource and capability sharing description
   SHOW_1b Plug and Play description
   SHOW_1c Advanced Distribution description
   SHOW_1d Scalability description
   SHOW_1e Evolution description
   SHOW_1f Integration with legacy systems description
   SHOW_1g Services Integration description
   SHOW_1h Security & Privacy description
   SHOW_1i Service Transferability description
   SHOW_1j Advanced User Interaction description
   SHOW_1k Personalized Content Push description
   SHOW_1l Ambient Intelligence description
   SHOW_1m Enhanced Market communication and distribution description
2. Demonstration of showcase
   SHOW_2a Cross-application resource and capability sharing demonstration
   SHOW_2b Plug and Play demonstration
   SHOW_2c Advanced Distribution demonstration
   SHOW_2d Scalability demonstration
   SHOW_2e Evolution demonstration
   SHOW_2f Integration with legacy systems demonstration
   SHOW_2g Services Integration demonstration
   SHOW_2h Security & Privacy demonstration
   SHOW_2i Service Transferability demonstration
   SHOW_2j Advanced User Interaction demonstration
   SHOW_2k Personalized Content Push demonstration
   SHOW_2l Ambient Intelligence demonstration
   SHOW_2m Enhanced Market communication and distribution demonstration
3. Value of showcase
   SHOW_3a Cross-application resource and capability sharing value
   SHOW_3b Plug and Play value
   SHOW_3c Advanced Distribution value
   SHOW_3d Scalability value
   SHOW_3e Evolution value
   SHOW_3f Integration with legacy systems value
   SHOW_3g Services Integration value
   SHOW_3h Security & Privacy value
   SHOW_3i Service Transferability value
   SHOW_3j Advanced User Interaction value
   SHOW_3k Personalized Content Push value
   SHOW_3l Ambient Intelligence value
   SHOW_3m Enhanced Market communication and distribution value

5.5. ReAAL impact indicators

Input for the validation comes from the results of the pilot and showcase evaluations, as well as from a set of indicators collected at project level. These indicators are depicted in Table 19; more might be developed during the deployment phase of the (associated) pilots.

Table 19. ReAAL impact indicators

Involved stakeholders: All (measured at ReAAL project level)
Indicator groups: IMPACT_1 universAAL success; IMPACT_2 Pilot's success; IMPACT_3 Upscaling success; IMPACT_4 Dissemination success

Indicators:
1. universAAL success
   IMPACT_1a Number of successfully demonstrated showcases
   IMPACT_1b Number of supported operating systems
   IMPACT_1c Number of supported device types
2. Pilot's success
   IMPACT_2a Number of pilots with successful universAALisation
   IMPACT_2b Number of pilots reaching their target number of users
   IMPACT_2c Number of successfully implemented imported applications
3. Upscaling success
   IMPACT_3a Number of associated pilots
   IMPACT_3b Number of associated vendors
4. Dissemination success
   IMPACT_4a Number of visits to the website (measured quarterly over the whole project time)
   IMPACT_4b Number of accounts in the developer depot of uAAL
   IMPACT_4c Number of interested pilots
   IMPACT_4d Number of visitors to ReAAL events
   IMPACT_4e Number of H2020 proposals that use uAAL

5.6. Mapping of OPEA indicators and MAFEIP set

In 2012 a set of indicators was launched to assess the evolution and impact of the European Innovation Partnership on Active and Healthy Ageing: the MAFEIP [1]. The indicators refer to the triple win of the programme: more healthy life years, a sustainable health system, and innovation & growth.

In Table 20 below the main EIP-AHA indicators are listed. The indicators in bold are also part of ReAAL's evaluation. As the previous paragraphs make clear, the ReAAL project measures many more indicators.


Table 20. EIP-AHA indicators

Primary indicators
  QoL: HrQoL; Mortality
  Sustainability of health systems: Health and care resource use; Health and care cost/expenditure
  Innovation & growth: SMEs and industry involved; Implemented technology and devices; Employment rate

Secondary indicators
  Risk factors; Physical activity6; Falls; Advice; Patient / user satisfaction

6 Physical activity will be assessed for the pilots that aim at improving physical activity; for all pilots it is part of QoL (EQ5D).


6. Evaluation design

This chapter describes the design of the evaluation activities for the ReAAL project. The design follows the evaluation plan (see Figure 15).

Figure 15. Evaluation plan

6.1. Stage I: Pilot evaluation

Stage I pilot evaluation consists of two phases. In this section, the general approach to the pilot evaluation is described; the general instruments used in the evaluation are described in chapter 7. Each pilot differs in application (AAL domain, eHealth domain), number of users per application (more or fewer than 100) and user group (whether or not informal carers are involved, for example). Therefore an extra document has been made describing the evaluation plan for each pilot (see ‘Appendix B. Evaluation activities per pilot’). Updates of this living document are available upon request from the Evaluation Team.


6.1.1. Phase 1: Preparation, adaptation and test phase

Phase 1 is based on the data collected in the preparation and adaptation phase of the pilot. This phase has a mainly technical focus.

From the assessment domains, the following indicators are measured:

Background for the assessment
  - Description of user group
  - Description of application and service
  - Description of platform
  - Description of universAALisation

Assessment domains
  - Technical aspects: quality of the platform; quality of the application (incl. quality of the universAALisation)
  - User perceptions: satisfaction
  - Economic aspects: cost (of development and adapting to universAAL)
  - Contextual aspects: ethical aspects; legal aspects

The design in this phase is a pre-post design. Data is collected using:

- descriptions the pilots provide about their application and the context in their country (T0)
- descriptions the pilots provide in the knowledge portal about their ontologies and design (T0)
- questionnaires for developers (T0 and T1), technology providers (T1) and the test users from the Real Life User Test (T1)
- a questionnaire for pilot leaders (T1)
- focus groups with developers and the platform provider (T1)
- cost interviews with the pilots (T1)
- reports from other work packages (lab test reporting, operation reports) (T1)


At two points in time, the pilots deliver their raw data to the evaluation team in their internal deliverables (ID5.1xa and ID5.1xb). Only the first release will be used in this stage. In addition, the evaluation team itself collects data at cross-pilot level. Furthermore, the team has direct access to the questionnaire data, since it administers the LimeSurvey database.

When the data is collected, the evaluation team will perform the following analyses:

- analysis of the questionnaire data (descriptive statistics)
- analysis of the focus group data (summary of the most important findings and quotes)
- analysis of the lab test results
- analysis of the pilot ontologies and (if available) the formal design, with a team of experts from WP2, WP3 and WP5; the ontologies are the best indicator of the quality of universAALisation
- analysis of the cost data

In the first evaluation report at project level (D5.3a), cross-pilot comparisons will be made, as well as aggregated analyses, for example on the total group of developers and on the differences between the pilots in the results of their universAALisation. The first results of the showcase evaluation will also be reported.

More information about the data collection tools can be found in paragraph 7.3. The overall planning is provided in paragraph 6.4. The questionnaires and templates used in this phase can be found in ‘Appendix C. Data collection tools’.

6.1.2. Phase 2: Deployment and operation phase

Phase 2 is based on the data collected in the deployment and operation phase of the pilot. This evaluation phase focuses mainly on the users.

All pilots need to have an operation phase of at least six months. The required six months of operation with universAALized applications applies to the pilots/applications, not to the individual users: it does not mean that an assisted person has to use the application for at least six months. Some applications will be used for a shorter time. Capturing end-user experiences and effects should be tailored to the set-up and goal of the service (for example before and after using the application).

From the assessment domains, the following indicators are measured:

Assessment domains
  - Technical aspects: quality of the platform (technical issues); quality of the application (technical issues); interoperability (user experience)
  - User perceptions: user acceptance; use; satisfaction; value
  - Outcomes: health & health consumption; quality of life; independent living; adverse events & side effects
  - Economic aspects: cost (of implementation, deployment and maintenance; cost of providing the service); revenues; willingness to pay; market value
  - Organizational aspects: organizational fit; implementation; impact on core process; strategic position
  - Contextual aspects: sociocultural aspects

The design in this phase is a pre-post design. According to the DoW, each pilot performs at least two measurements after deployment (by deployment we mean deploying on uAAL). Because most pilots have reduced their operation phase to the minimum of six months, there will be only two quantitative measurements in Phase 2: at the start of deployment (T0) and at the end (T1). The qualitative measurements will be performed at the beginning or halfway through deployment (T*) and at the end of deployment, using focus groups.

Data is collected using:

- questionnaires for assisted persons (T0, T1), informal carers (T0, T1) and formal caregivers (T0, T1), filled in by a sample of users
- focus groups with assisted persons and caregivers (T*, T1)
- cost interviews with the pilots (T*, T1)
- reports from other work packages (operation reports) (T*, T1)


At two points in time, the pilots deliver their raw data to the evaluation team in their internal deliverables (ID5.1xa and ID5.1xb). In addition, the evaluation team itself collects data at cross-pilot level. Furthermore, the team has direct access to the questionnaire data, since it administers the LimeSurvey database.

When all data is received, the evaluation team will perform the following analyses:

- analysis of the questionnaire data (descriptive statistics)

- analysis of the focus group data (summary of most important findings and quotes)

- analysis of the cost data

The analyses will be performed for each pilot separately. In the second evaluation report on project level (D5.3b) cross-pilot comparisons will be made, as well as aggregated analyses, for example on the total group of users.

More information about the data collection tools can be found in paragraph 7.3. The overall planning is provided in paragraph 6.4. The questionnaires and templates used in this phase can be found in ‘Appendix C. Data collection tools’.

6.1.3. Pilot evaluation in the associated pilots

Four associated pilots join the project to contribute to the evidence delivery and upscaling targets. The associated pilots use the same evaluation framework: the same indicators will be measured in their pilots, using the same data collection tools. Because the associated pilots have less time to adapt and deploy (and to evaluate), the timing of their T0 and T1 might need to be adjusted, as well as the required sample size. They are not obliged to have a pilot pearl, and can therefore use the light version of the questionnaire for end users. The associated pilots each deliver one evaluation report, which will be included in D5.3b.

6.2. Stage I: Showcase evaluation

The showcase evaluation consists of several steps, which will be explained in the next paragraphs.

6.2.1. Description of the showcase

The technical experts of the evaluation team described fourteen showcases from a technical perspective, and argued their value for technology providers, service providers, end users and government (society). These descriptions were provided in paragraph 3.5.

6.2.2. Demonstration of the showcase

The next step was to map the applications deployed in ReAAL to the list of showcases. The aim of this mapping is to assure that most showcases can indeed be demonstrated in the ReAAL project. This demonstration can occur in several ways, depending on the showcase:

- Analytical: requires an analysis of the system or the code;
- Operational: the test has to be performed via a physical operation, for example changing devices;
- Technical: technical analysis is required, for example analysing logs or changing the running environment;
- Programmatical: specific test code has to be developed/used in order to run the procedure.

Thus, the showcase evaluation does not rely only on readily available data, but also on work by technical experts to actually measure or demonstrate each showcase. The showcase evaluation needs a test environment; therefore (most of) the technical work needed for the assessment takes place at the central lab test facilities of SmartHomes (Eindhoven, Netherlands). For each showcase a script has been defined to implement the showcase and demonstrate the value of an open platform. These scripts can be obtained from the evaluation team; see ‘Appendix C. Data collection tools’ for an example script.

6.2.3. Value assessment

Once the showcase has been shown to work, the final step is to assess its value for all stakeholders. The potential value is checked while the key selling points of open platforms are demonstrated in real life. This will take place in several focus groups at pilot level, at events where ReAAL has a demonstration booth, and at a seminar in the final months of the project. In addition to the technical script, the evaluation team also developed questions for the focus groups at pilot level. During a focus group the attendants first get a real-life or video demonstration of the showcases, after which the questions are discussed via a group discussion and a poll. For the focus group at pilot level, each pilot is requested to invite stakeholders from the region, for example other vendors, policymakers from the municipality/region, or a representative organization for the elderly. These showcase focus groups are scheduled for Q4 2015. See ‘Appendix C. Data collection tools’ for an example script with focus group questions.

6.2.4. Showcase evaluation in the associated pilots

To a certain extent, the associated pilots also participate in the showcase evaluation. If their set-up covers a showcase that cannot be tested by the regular pilots, it will be added to the showcase evaluation. In addition, representatives of the associated pilots are invited to the seminar in which the final showcase value assessment takes place.

6.3. Stage II: ReAAL impact validation

In the final months of the project, the results of ReAAL will be validated. The ReAAL impact validation is based on indicators at project level that have been measured throughout the project, and on a scenario analysis using the most powerful results of the pilot and showcase evaluations.


6.3.1. ReAAL impact indicators

The project-level indicators are reported in both versions of D5.3. These indicators mainly illustrate the expansion of the project: positive scores on the number of associated partners/vendors or other contacts at pilot or project level are a first indicator of universAAL's success. The project is too short to show a real market breakthrough.

6.3.2. Scenario analysis

Building on the results of the pilot evaluation, the evaluation team wishes to look to the future: what would happen if the pilots continued deployment, if technology providers continued developing new universAALized applications using optimal combinations of ReAAL's features, and if service providers expanded their services? What would the cost be, and what would be the added value for all stakeholders? As said earlier, it is likely that network externalities can only be shown after the project, because the three-year ReAAL project is not long enough to build a stable ecosystem. In order to draw conclusions about the socioeconomic benefit of open platforms, the evaluation team needs to perform an extra analysis on top of the ReAAL evaluation.

This final stage can be described as a scenario analysis, focusing on the network externalities and the return on investment of universAAL deployment. Because the situation without EU funding is very different, we cannot make prognoses by extrapolating the pilots' results. Instead, we have to draw up scenarios from scratch, taking into account the outcomes deemed most relevant.

Based on the pilot and showcase results, several scenarios will be selected for further research, one at each of the four levels we distinguish in the project: the AAL market as a whole, the open platform universAAL, a technology provider developing applications, and a service provider deploying applications and services. In scenario analysis it is common to draw up an optimistic, a pessimistic and a most likely scenario. To validate the conclusions drawn from the scenario analysis internally, the scenarios will be presented to and discussed with the members of the ReAAL consortium, who will judge the scenarios on their likelihood.

For further generalizability, the scenarios will also be discussed with a panel of experts at a dedicated seminar. This panel consists of members of ReAAL’s Advisory Board and a few invited experts. In total 8-10 experts will be invited for this event, which will take place in February/March 2016.


6.4. Overall planning of evaluation activities and deliverables

In Table 21 the rough planning for the evaluation stages is provided.

Table 21. Overall rough planning (2015 Q1 - 2016 Q1)

Stage I, Pilot evaluation, Phase 1 (adaptation): T0 and T1 measurements in the adaptation phase; focus groups with developers; write deliverable.

Stage I, Pilot evaluation, Phase 2 (deployment): T0 measurements at pilot level (first pilots, then later pilots); focus groups (combined with showcase); T1 measurements at pilot level (all pilots); write deliverables.

Stage I, Showcase evaluation: preparations for showcase demonstration 1 (summit); focus groups for showcase; showcase validation.

Stage II, ReAAL impact validation: continuous measurement of the ReAAL impact indicators throughout the project; start of the data analysis of Stage I; finish of the data analysis of Stage I; scenario analysis; write deliverable.

Associated pilots: preparations for the deployment phase evaluation; T1 measurements adaptation; T0 measurements deployment; T1 measurements deployment.

Deliverables:
  D5.2 (M25): Evaluation framework, tools etc.
  ID5.1xa (M29): Pilots provide data about the adaptation phase and T0 of deployment
  D5.3a (M33): First results of showcase evaluation; adaptation phase results
  ID5.1xb (M37): Pilots provide data about T1 of the deployment phase and the showcase validation in their region
  D5.3b (M39): Full results of pilot evaluation, showcase evaluation and ReAAL impact assessment; results of associated pilots


7. Evaluation guidelines and instruments

7.1. User definition

The ReAAL project aims at 7000 users in the ecosystem. But what counts as a user? Technically speaking there are three types of users of uAAL: developers, who use the code and standards; primary users, who actually use an application running on uAAL; and secondary users, who benefit from others' use of the uAAL platform (e.g. the formal caregiver of an assisted person).

User

The EC requested a pilot with at least 5000 users, meaning 5000 persons who use the AAL applications. For the EC, a user is an end user, not a developer (see Table 22).

Table 22. Who counts as a user?

User               User role   Counts as user for the EU?
Developer          Primary     No
Assisted person    Primary     Yes
                   Secondary   No
Formal caregiver   Primary     Yes
                   Secondary   No
Informal carer     Primary     Yes
                   Secondary   No

To count as a user for the use statistics and the EC target, the following criteria should be met:

1. A consent form or "other contract" has been signed (the assisted person agrees to the terms of the ReAAL project). If the user is a caregiver, the service provider is responsible and accountable for counting only those people who actually use the application.

2. The technology has been received/installed, preferably combined with attending a training session or receiving instructions at home. If the user is a caregiver, they should also have received instructions on using the application.

3. The technology has been used at least once in the first month after installation/instruction.

The implication of applying these criteria is that secondary users (beneficiaries) do not count as users; but given that they are part of the value network, they are very important for assessing the impact of AAL.

Drop-offs and drop-outs

The pilots agreed on definitions for drop-offs and drop-outs and the strategy to follow (replace or not):


Drop-offs: participants who leave before they have signed the informed consent form, or before they have received the technology and the training. They have not entered the study and can be replaced easily. Pilots are obliged to replace drop-offs. According to our criteria, drop-offs do not count as users.

Drop-outs: there are two types of drop-outs: 1) participants with signed informed consent forms/contracts who – after receiving training – never used the technology. These drop-outs are non-users and should preferably be replaced; we are able to identify them in the first months of the project. 2) Users who stop using the technology somewhere along the way. A pilot has no obligation to replace them. According to our criteria, these drop-outs count as users.

Since for the ReAAL project non-users are as important as users, the pilots have to record the reason for drop-out/non-use in their user database.

All pilots have invested in installations and training, so they will be motivated to minimize the number of drop-outs. In the national deployment plans (part of WP4) each pilot described their strategy for user exit. We therefore expect that the total number of users reached, excluding drop-outs, will be at least 5000. Still, the pilots should plan their deployment phasing so that most of their users benefit from the applications for at least six months, in line with the DoW.
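Read together, the counting criteria and the drop-off/drop-out definitions form a small decision procedure. The sketch below is a minimal illustration of that logic in Python; the record fields (consent_signed, technology_installed, used_in_first_month, stopped_later) are hypothetical names for bookkeeping the pilots already do, not fields prescribed by this framework.

```python
from dataclasses import dataclass

@dataclass
class Participant:
    consent_signed: bool         # criterion 1: consent form or "other contract" signed
    technology_installed: bool   # criterion 2: technology received/installed (+ training)
    used_in_first_month: bool    # criterion 3: used at least once in the first month
    stopped_later: bool = False  # stopped using the technology somewhere along the way

def counts_as_user(p: Participant) -> bool:
    """All three criteria must hold; type-2 drop-outs (stopped later) still count."""
    return p.consent_signed and p.technology_installed and p.used_in_first_month

def classify(p: Participant) -> str:
    """Apply the drop-off/drop-out definitions above to one participant record."""
    if not (p.consent_signed and p.technology_installed):
        return "drop-off"           # never entered the study; must be replaced
    if not p.used_in_first_month:
        return "drop-out (type 1)"  # non-user; should preferably be replaced
    if p.stopped_later:
        return "drop-out (type 2)"  # counts as a user; no obligation to replace
    return "active user"
```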

7.2. Sampling and Pilot Pearl

In a project with thousands of users, we cannot collect data on each individual in the value network. While the number of developers, technology providers and service providers is limited, the number of assisted persons, informal carers and formal caregivers is considerable. For these last three stakeholder (sub)groups we propose a sampling strategy; however, pilots that have the resources to collect data for all users are encouraged to do so. The tools in the knowledge portal can be used for this purpose. Pilots with more than two applications for more than two different user groups are allowed to select two applications they will evaluate in depth and with a large sample. The other applications will be evaluated in a lighter way, with a short questionnaire and a smaller sample of users. This way, each pilot has one or two "pearls", i.e. the applications it believes bring most value to the end user, while still having evaluation data about all applications. We chose this strategy because evaluation of the applications is not the primary aim of the ReAAL project.

Minimal sample per pilot

Technology provider (application developers and application providers): the sample consists of the developers (see below), the R&D/innovation department, and someone from the management/board of each technology provider involved in a pilot. Some will be members of the local project team for ReAAL. They will fill in questionnaires and be involved in focus group meetings. Some will be interviewed directly by the evaluation team.

Developer: All developers involved in the project. All should receive questionnaires, and at least one developer per pilot should contribute to interviews or focus groups.

Service provider: members of the local project team and someone from management & board level (duplicated in case more service providers are involved) of all the service provider organisations that implemented the AAL services. The IT department of the service provider is also an important respondent. Some will be members of the local project team for ReAAL. They will fill in questionnaires and be involved in focus group meetings. Some will be interviewed directly by the evaluation team.

Assisted person: 10-12 people, possibly with their informal carer, to be invited to a focus group meeting; 50 to 100 respondents to the questionnaires per selected application. If an application has fewer than 50 users, all users should receive the questionnaire. For the pilot pearl a sample size of 100 is aimed for (or less, if in total there are fewer users); this sample receives a questionnaire at T0 and T1. For the other applications 30-50 users should be enough; they receive a questionnaire at T1 only.

Formal caregiver: 8-10 people, to be invited to a focus group meeting. A questionnaire goes to all caregivers involved in the pilot who use the selected applications. Because caregivers have internet access, they can easily fill in the questionnaires.

Informal carer: a sample of 50% of the informal carers, with a maximum of 50. The involvement of informal carers in the pilot evaluation depends on the goal of the application and the primary users of it.

In ‘Appendix B. Evaluation activities per pilot’ a short description of all pilots is provided, including the target sample size per application, which application is proposed as the pilot's pearl (to be confirmed by the pilots after the Real Life Test), and which showcases they are involved in.

A random sample is chosen. However, it is most likely that the sample will contain mainly the early users, because they need to use the application for 6 months.
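As an illustration, the sampling rules for assisted persons can be written down as a short procedure. This is a sketch under the assumptions stated above (everyone if fewer than 50 users, up to 100 for a pearl, up to 50 otherwise); the function name and inputs are hypothetical.

```python
import random

def assisted_person_sample(user_ids: list, is_pearl: bool) -> list:
    """Draw the questionnaire sample for one application.

    Fewer than 50 users: survey everyone. Pearl applications aim for 100
    respondents (surveyed at T0 and T1); other applications take the upper
    end of the 30-50 range (surveyed at T1 only).
    """
    if len(user_ids) < 50:
        return list(user_ids)
    target = 100 if is_pearl else 50
    return random.sample(user_ids, min(target, len(user_ids)))
```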

7.3. Data collection

We take a multi-pronged approach to data collection, which supports the mixed methods design used for the evaluation. We will collect data by means of questionnaires, focus groups, data entry templates, interviews and blogs. The uAAL platform itself will produce technical logs that will generate information about the performances of the platform.

Candidate applications for the pilots will have to pass testing before being deployed in patients' homes. The results of these tests, which will also provide information about bugs in the platform, will be collected.

In addition to these sources, other documents produced in the ReAAL project (such as the operational reports ID4.2.x) and input on the Knowledge portal will be used as a data source for the evaluation.

The evaluation of ReAAL has two stages: the pilot and showcase evaluation, and the ReAAL impact validation. The pilot evaluation can be divided into two phases: Adaptation (which runs from preparation and adaptation to universAAL through lab testing and user testing) and Deployment (which runs from deployment to operation).

Each phase has at least a T0 and a T1 measurement. T0 is at the start of the adaptation or deployment, and T1 at the end. Extra measurements, for example intermediate measures, are referred to as T* (see Table 23).


Table 23. Evaluation timing

Evaluation phase   Code   Description
Adaptation         AT0    Start of adaptation
                   AT*    During adaptation
                   AT1    End of adaptation
Deployment         DT0    Start of deployment
                   DT*    During operation
                   DT1    End of deployment

Table 24 below provides an overview of all instruments per stakeholder.

Table 24. Instruments per stakeholder

Platform provider (target group: developer)
  Platform provider T0 Survey | AT0
  Platform provider T1 Survey | AT1
  Platform provider T2 Survey | DT1
  Platform provider template | Continuous
  Focus group application developer + platform provider, end of adaptation | AT1

Technology provider: application provider (business perspective)
  Application provider T0 Survey | AT0
  Application provider T1 Survey | AT1
  Application provider T2 Survey | DT1
  Application provider template | Continuous

Technology provider: application developer (technical perspective)
  Application developer T0 Survey | AT0
  Application developer T1 Survey | AT1
  Application developer T2 Survey | DT1
  Application developer template | Continuous
  Focus group application developer + platform provider, end of adaptation | AT1
  Focus group application developer/application provider, end of deployment | DT1

Health / Social Service provider (business/management perspective)
  Service provider T0 Survey | DT0
  Service provider T1 Survey | DT1
  Service provider template | DT0, DT1
  Focus group service provider, end of deployment | DT1

Formal caregiver
  Minimal Data Set | Continuous
  Formal caregiver T0 Survey | DT0
  Formal caregiver T1 Survey | DT1
  Focus group formal caregiver, during deployment | DT*

Informal carer
  Minimal Data Set | Continuous
  Informal carer T0 Survey | DT0
  Informal carer T1 Survey | DT1
  Focus group assisted person + informal carer, during deployment | DT*

Assisted person
  Minimal Data Set | Continuous
  Real life test User Survey | AT1
  Assisted person T0 Survey | DT0
  Assisted person T1 Survey | DT1
  Focus group assisted person + informal carer, during deployment | DT*

Pilot
  Pilot template | AT1
  Pilot cost template (cost interview) and interview with pilot leader | AT1, DT1

Descriptions of application, pilot set-up, ontology, design

In the Knowledge portal all pilots describe their main features. This data is combined by the evaluation team to describe the background for the assessment. The pilots are asked to check the descriptions.

Technical test data

Testing and bug fixing is a basic activity in software development that will involve both the adaptation of the uAAL platform and the adaptation of the AAL applications proposed by the pilots. The activities for this take place in WP2 and WP3, which will produce a test plan, a bug tracker, and an environment for discussing problems and requesting features. This will be monitored in WP4, and the output of this work will be used for the evaluation. In addition, all pilots will be asked for automatic testing results, if available.

The technical test data will be analysed by the technical partners in the evaluation team, mainly the universAAL experts and the test leader (Smart Homes).

Minimal Data Set

The Minimal Data Set (MDS) is an Excel file in which each pilot collects basic data about all people who count as users in the ReAAL project (see the user definition in paragraph 7.1). For drop-outs, the reason for dropping out is recorded.

Table 25. Minimal Data Set

Items: age, gender, start date, end date, role, reason for ending, application(s) used, userID, organisation (service provider)

For the periodic operation reports and evaluation reports the MDS will be updated and user statistics will be derived.
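To illustrate how such user statistics can be derived from the MDS, the sketch below loads a pilot's file with pandas and tabulates users per application and drop-out reasons. The file name and the exact column headers are assumptions; the columns follow Table 25.

```python
import pandas as pd

# Columns follow Table 25; the exact headers in a pilot's file are assumed here:
# age, gender, start_date, end_date, role, reason_end, application, user_id, organisation
mds = pd.read_excel("mds_pilot.xlsx", parse_dates=["start_date", "end_date"])

n_users = mds["user_id"].nunique()                               # users in this pilot
users_per_app = mds.groupby("application")["user_id"].nunique()  # users per application
dropout_reasons = mds.loc[mds["end_date"].notna(), "reason_end"].value_counts()

print(f"Users in pilot: {n_users}")
print(users_per_app)
print(dropout_reasons)
```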

Questionnaires

The questionnaires contain validated questions (from other relevant HTA studies), complemented with some self-constructed questions to account for the comprehensive set of indicators defined in ReAAL (see chapter 5).

The validated questions come from the following sources: Rand-36 (subjective health) [34], EuroQoL (EQ5D, subjective health) [35], ICECAP-O (wellbeing) [36], CarerQoL (quality of life of informal carer) [37], and UEQ (user experience) [38].


Furthermore, we were inspired by questionnaires from AIMQ [39], RATER [40], Anderson & West on team climate [41], ASCOT [42] and the questionnaire from Sponselee [33]. The “questionnaire light”, which will be sent at T1 only to users of applications that were not selected as a pilot pearl, contains mainly self-constructed questions about experience with the application.

The questionnaires will be sent to all stakeholder groups, either to the entire group or to a sample. As the users (assisted persons, informal carers, formal caregivers and developers) make up a large number of people, samples will be drawn within these groups. For the pilot pearls the sample will be followed for the whole duration of the deployment phase; these respondents fill in the questionnaire at both T0 and T1.

Depending on the stakeholder group, the questionnaires will be sent out in intervals to obtain information at baseline and at the end of the pilot phase. Since the pilots run in six different countries (with the associated pilots and projects this number will be even higher) and we cannot expect the stakeholders – especially the elderly – to understand English, the questionnaires will be translated into each language. The translation takes place in two phases: first from English to the local language and then, by someone else, from the local language back to English.

To make the data collection process as efficient as possible, we prefer electronic questionnaires that can be filled in online. For this purpose we chose LimeSurvey, which runs on a server at Trialog. Only respondents without internet access will receive a paper copy. An alternative procedure is that the formal caregiver collects the data at the client's house, for example on a laptop. It is possible to fill in the questionnaire offline if special software is installed; instructions on how to install LimeSurvey can be found in the Knowledge portal.

Where possible, the questionnaire will be filled out by the respondents themselves, although some assisted persons may require help to do so. This help can be provided by the informal or formal caregiver. The final question in the questionnaire asks whether the assisted person had help.

In some pilots volunteers have an important role in providing care. Depending on the local setup, they will receive the questionnaire for informal carers or for formal caregivers.

Questionnaires will also be used for evaluations with service providers and developers.

An overview is provided in Table 26. These topics will be used for constructing the questionnaires. All T1 questionnaires can also be used for an intermediate measurement (T*).

Table 26. Overview questionnaires

Real life test User Survey: demographics, experience with technology, user experience

Assisted person T0 Survey (baseline): demographics, experience with technology, subjective health, quality of life / wellbeing, social interaction, (in)dependency, health & service consumption, satisfaction with care, expectations of the application(s), satisfaction with training

Assisted person T1 Survey (end; regular and light version): demographics, subjective health, quality of life, social interaction, (in)dependency, use, experienced value and impact, satisfaction with service, health & service consumption

Informal carer T0 Survey (baseline): demographics, experience with technology, subjective health, quality of life / wellbeing, expectations of the application(s), satisfaction with training

Informal carer T1 Survey (end): demographics, subjective health, quality of life, acceptance of application, use, experienced value and impact, satisfaction with service

Formal caregiver T0 Survey (baseline): demographics, experience with technology, attitude towards independent living, expectations of the application, innovation climate of the organization. Formal caregivers only receive a questionnaire if they are users of the application, not if they benefit indirectly.

Formal caregiver T1 Survey (end): demographics, use, acceptance, satisfaction with service, experienced value and impact

Platform provider T0 Survey (baseline): profiling survey: demographics, experience. The platform provider is the uAAL team in ReAAL (WP2 people).

Platform provider T1 Survey (end of adaptation): first impressions survey

Platform provider T2 Survey (end of pilot): reflection on the project as a whole

Application developer T0 Survey (baseline): profiling survey: demographics, experience & competence

Application developer T1 Survey (end of adaptation): first impressions survey

Application developer T2 Survey (end of pilot): end-of-pilot survey: attitude to open platforms & interoperability, satisfaction with the uAAL platform, support

Application provider T0 Survey (baseline): attitude to open platforms & interoperability, current characteristics of workflow and clients, strategic position, experience with AAL, expectations of the project

Application provider T1 Survey (deployment phase): attitude to open platforms & interoperability, changes in workflow, market impact

Health / Social Service provider T0 Survey (baseline): attitude to open platforms & interoperability, current workflow, strategic position

Health / Social Service provider T1 Survey (end): attitude to open platforms & interoperability, impact on workflow, strategic position

In ‘Appendix C. Data collection tools’ a selection of questionnaires is added. All questionnaires and templates can be obtained via the Evaluation team.

Focus group interviews

The experiences of each stakeholder group will be collected at different points in time via focus groups. These focus groups are important to complement the surveys with rich content information that may provide explanations for trends or minority views found through the survey analyses. In addition, the focus groups will play an important role in the showcase evaluation.

The focus groups will take place at both the pilot and the project level and will be based on a topic list provided by the evaluation experts. The evaluation leaders will host the focus groups at the project level, which will be held in English. In contrast, the focus groups at the pilot level will be the responsibility of the respective pilot managers and will be held in the respective language. Instructions on how to organize a focus group can be found in the Knowledge portal.

To ensure a certain level of quality and transparency, all focus group meetings will be audiotaped and the pilots will provide a summary that includes relevant quotes translated to English.

We propose to collect information on the experiences of assisted persons and their informal carers via two focus groups at each pilot site. For the first focus group, the pilot lead will invite users from the Real Life Tests or the first batch. The second focus group will consist of a sample of users from the first batch and some new users, to obtain information about experiences after a period of use (the first batch) and about first experiences with a potentially more developed service (the second batch).

If it is difficult to arrange a focus group, it is also possible to conduct individual interviews at the users' homes.

The procedure will be repeated with a sample of formal caregivers at each of the pilot sites. This can be organized at the end of a regular staff meeting.

The other focus groups will be organized by the evaluation team, and co-hosted by the technical team of ReAAL.

To obtain information about experiences of the service provider organisations, we will organize a meeting at one of the joint events, which will allow service providers from different pilots to exchange experiences.

Similarly, for the application providers and their developers, we will organize meetings at one of the joint events (e.g. a training) for representatives of these groups from the different pilots. We will supplement these with focus groups at pilot level, where we will mix representatives of the relevant technology providers and developers to let them reflect on the adaptation work and look ahead at possible challenges.


Focus group interviews are used for all the piloted services and applications. An overview of topics is provided in Table 27. These topics will be used for constructing a list with questions and a script for conducting the meetings.

Table 27. Overview focus group interviews

Focus group assisted person + informal carer, halfway deployment:
  Experiences with the AAL service; experiences related to ease of use, usefulness, actual use and fit with need; sociocultural topics; value of interoperability; roles and responsibilities of the informal carer; showcases

Focus group formal caregiver, halfway deployment:
  Experiences with the AAL service; experiences related to ease of use, usefulness, actual use and fit with need; sociocultural topics; impact on the assisted person; value of interoperability; showcases

Focus group application developer + platform provider, end of adaptation:
  Experiences with the uAAL platform, adaptation, testing, deploying, challenges; value of open platforms and interoperability; future expectations; showcases

Focus group application developer/application provider, end of deployment:
  Experiences with the uAAL platform, adaptation, testing, deploying, challenges; value of open platforms and interoperability; impact on innovation; business impact; future expectations; showcases

Focus group service provider, end of deployment:
  Experiences with the uAAL platform, challenges; value of open platforms and interoperability; impact on innovation and the strategic position of the provider; business impact; future expectations; showcases

Template

For the evaluation we need a good description of each pilot. Much of the information has been reported in other (internal) deliverables, but the pilots have to fill in a template for the missing data, or where more detail is required.

To facilitate the collection of indicator data on time investment and other issues faced by developers and application providers in the use of the open platform, we will send an online data entry form to all developers and application providers at the end of the adaptation phase and at the end of deployment.

Such templates will be a valuable resource for the evaluation, providing a constant overview of the barriers and potentials of open platforms. During the process, they will allow for important adjustments to aid pilot implementation, deployment and operation. At the end of the project, they will give the evaluators a retrospective overview of what went right and wrong in the use of the open platform and what implications such factors may have had on the evaluation results. Moreover, they will give insight into the investment of resources and the results of these investments. A consolidation of such information can prove valuable for a future upscaling of AAL.

In each pilot, the service providers will also be asked to fill in a template with process and financial data. The IT department, financial department and management are the best candidates for collecting this data.


An overview is provided in Table 28. These topics will be used for constructing the templates and instructions on how to fill them in.

Table 28. Overview templates

Pilot template:
  Description of application, description of service, description of procurement process, plan for universAALisation, pilot set-up, number of users

Pilot cost template:
  Development costs, universAALisation costs, implementation costs, costs of training and supporting users, maintenance costs

Platform provider template:
  Time used, activities performed, problems encountered, strategies followed (end of adaptation, end of deployment), number of supported operating systems and device types

Application developer template:
  Time used, activities performed, problems encountered, strategies followed (end of adaptation, end of deployment)

Application provider template:
  Time used, activities performed, problems encountered, strategies followed (end of adaptation, end of deployment)

Service provider template:
  Workflow, resources for the AAL ecosystem (baseline & within ReAAL), including personnel and technology costs (can be combined with the pilot cost template if the pilot leader is the service provider)

Pilot level interviews

The evaluation leaders will conduct in-depth interviews with the pilot managers. These interviews will be crucial for the evaluators to get a broad overview of the impact of the pilots and of the main benefits, potentials and barriers that occur throughout ReAAL. We also use the interviews with pilot leaders to go through the cost template and assist in filling it in.

The interviews will be conducted in Dutch, English or Spanish and will be recorded for transparency purposes. Further, relevant quotes will be transcribed and approved by the interviewee before the quotes are made official.

An overview is provided in Table 29. These topics will be used for constructing a list with questions and a script for conducting the interviews.

Table 29. Overview pilot level interviews

Pilot interview 1:
  Experiences in the ReAAL project, role in the value network, challenges, future expectations; first data for the cost template

Pilot interview 2:
  Experiences in the ReAAL project, role in the value network, challenges, future expectations; update of the cost template

Blogs

Each pilot will be required to share updates, experiences, and advice on the internal project knowledge portal via a dedicated blog (see the documentation of Task 5.4 for details).


The blog content can be useful as a means to follow pilot progress in a resource-light manner. The pilot representatives can blog at their own convenience, thus avoiding scheduled meetings that may interrupt their workflow. This reporting method also enables the pilot representatives to blog about highly technical or pilot-specific issues, which would be hard to capture in a focus group or interview conducted by a non-expert.

Blogs will be used for the evaluation of the whole pilot.

7.4. Data analysis

Pilot evaluation

The technical data (ontology, lab results, automatic test results) is analysed by universAAL experts, using a template. Numeric data will be analysed using Excel.

The quantitative data from the templates and questionnaires (closed questions) will be analysed in Excel and/or SPSS.

First, data will be analysed by means of descriptive statistics. Data on the utilisation of medical services will be cleaned and imputed when necessary. Data on costs per unit of (medical) service will be gathered and multiplied by utilisation data in order to estimate costs.
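The cost estimation step (units of service use multiplied by the cost per unit) could look like the following sketch; the column names and the figures are purely illustrative, not project data.

```python
import pandas as pd

# Hypothetical utilisation records per user and a unit-cost reference table.
utilisation = pd.DataFrame({
    "user_id": [1, 1, 2],
    "service": ["gp_visit", "home_care_hour", "gp_visit"],
    "units":   [3, 20, 1],
})
unit_costs = pd.DataFrame({
    "service":   ["gp_visit", "home_care_hour"],
    "unit_cost": [33.0, 36.0],  # illustrative figures only
})

merged = utilisation.merge(unit_costs, on="service")
merged["cost"] = merged["units"] * merged["unit_cost"]  # cost = units x unit cost
cost_per_user = merged.groupby("user_id")["cost"].sum()
print(cost_per_user)
```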

Data on the quality of life and wellbeing of assisted persons and informal carers will be analysed, cleaned and imputed when necessary. Quality of life and wellbeing outcomes will be transformed into utilities and capabilities, using (population) algorithms from previous studies.

The qualitative data (interview transcripts, minutes of meetings, blogs) will be analysed deductively, using the indicators as codes.

One of the goals of the evaluation is to assess the impact of uAAL on the costs and benefits of AAL for society, participating organisations and users. In the ReAAL evaluation, the focus is on the economic aspects of the uAAL platform for those who develop and use it (the perspective of technology providers and developers) and for those who buy it (service providers or end users). We will also include any changes in care consumption, though not extensively, due to resource constraints.

We build the economic evaluation on Drummond's framework for evaluating health programs [43]. When measuring the costs, activity-based costing will be used to ensure that comparisons between pilots can be made. An interview with each stakeholder group will be used to select the types of cost that are relevant.

As a basis, the FLOSS methodology is used to capture the costs of the open platform for migration and ownership. Four groups of costs are distinguished: software, support, training, and staffing [44]. Furthermore, many of the costs are typically upfront, while most savings accrue in the long term, which places restrictions on the assessment. Discounting of future costs and benefits is relevant for the ReAAL project, since the pilots within the project will last at least one year. When costs and benefits that occur at different times within a project are discounted to their present value, they can be compared [45].
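Discounting to present value follows the standard formula PV = FV / (1 + r)^t. A minimal sketch; the 4% discount rate is an illustrative assumption, since the actual rate would follow the applicable health-economic guidelines:

```python
def present_value(amount: float, years: float, rate: float = 0.04) -> float:
    """Discount a future cost or benefit to its present value: PV = FV / (1 + r)^t."""
    return amount / (1.0 + rate) ** years

# Example: a saving of 1000 occurring two years into the project
print(round(present_value(1000.0, 2), 2))  # 924.56
```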

The economic evaluation will be a multicriteria analysis: the different consequences (outcomes such as wellbeing and quality of life) will be presented next to the costs.


At pilot level, ethical, legal and sociocultural aspects will also be investigated, for example any issues regarding rules for procurement or data protection. This evaluation is mainly a description of current practice and experiences in the pilots, and of the way these might have influenced the results of deployment.

Showcase evaluation

In the showcase evaluation, the analysis consists of a comparison of the predicted value for stakeholders versus the perceived value. Furthermore, the technical members of the evaluation team assess to what extent each showcase has been demonstrated.

Scenario analysis: network externalities and return on investment

For the scenario analysis Excel will be used. For the business case the return on investment (ROI) should be estimated. Philips and Philips [45] present a methodology to measure ROI alongside technology projects. The challenge of developing an ROI methodology is including the needs of different groups of stakeholders (e.g. researchers, technology developers, managers). Philips and Philips suggest taking the following levels of evaluation into account when measuring ROI (a computational sketch follows the list):

- Reaction and satisfaction, including "measures of satisfaction of key stakeholders";
- Learning and understanding, including checking whether stakeholders have learned to use the technology;
- Application and implementation, focusing on the frequency of use;
- Business impact, including measures of productivity, quality, costs, time, and consumer satisfaction.
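For the business case itself, the conventional calculation compares monetised net benefits to costs. A minimal sketch; treating the outcomes of the levels above as a single monetised benefit figure is our simplifying assumption, not part of the Philips and Philips methodology as such:

```python
def roi_percent(total_benefits: float, total_costs: float) -> float:
    """Classic return on investment: ROI (%) = (benefits - costs) / costs * 100."""
    return (total_benefits - total_costs) / total_costs * 100.0

# Example: a scenario with 120,000 in monetised benefits against 100,000 of costs
print(roi_percent(120_000, 100_000))  # 20.0
```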

All data collected at pilot level will be assessed in the light of this scenario analysis.

7.5. Knowledge portal

In the ReAAL project, the Knowledge portal (accessible via http://reaal.aaloa.org/wiki/ReAAL_Knowledge_Portal) is an important source for the evaluation.

First of all, it is used for data collection. One of the current features is blogging: all pilots will blog regularly on their experiences and lessons learned. Another feature is a Q&A tool; by analysing the questions and answers, we learn more about the problems each pilot faces.

Second, it is used to assist the pilots. Via the knowledge portal, the pilots have access to the tools they need for evaluation. They can download the instructions, and find hyperlinks to the templates and questionnaires. For the templates Excel is used, and for the questionnaires LimeSurvey. Both programs have the advantage that the templates can be modified very easily. Respondents will receive an email with a unique link to the LimeSurvey questionnaires or a link to the Excel file in the project’s LiveLink environment.

Third, the knowledge portal is used to present the results of the evaluation and to facilitate discussions about them. During the execution of the evaluation, the knowledge portal will be updated with the preliminary results. The wiki feature allows all ReAAL partners to comment on the findings.

Instructions on how to use the knowledge portal can be found in the deliverables of task 5.1.


8. Responsibilities and quality assurance

8.1. The evaluation team

The ReAAL project has been designed in such a way that a group of technical and evaluation experts from universities and research institutes is responsible for the design of the evaluation framework, while the pilots are responsible for data collection. The evaluation team will analyse the data.

Evaluation Leaders

The evaluation leaders are based at Erasmus University Rotterdam (EUR) and the Polytechnic University of Madrid (UPM).

EUR Team

Marleen de Mul PhD, assistant professor, health scientist, specialized in eHealth evaluation. Marleen is WP5 Evaluation & Knowledge management Leader + Task Leader of T5.2

Marc Koopmanschap PhD, associate professor, health economist, specialized in Health Technology Assessment

Prof. Joris van de Klundert, full professor, has a background in Computer Science and Operations Research. His current research area is optimization in health service networks.

For practical work (construction of questionnaires, running analyses in SPSS) EUR will hire student assistants and have unpaid students work on the project as part of their master thesis in Healthcare Management or Health Economics, Policy and Law.

The EUR team has not been involved in the universAAL project and is not doing any technical work in or for the project. That makes EUR the most independent academic partner in the consortium.

UPM Team

Alessio Fioravanti (Task leader of T5.3), M. Sc. Computer Science Engineering, expert in information platforms and technologies for monitoring chronic disease patients. Alessio is also chair of the ethical board.

Alejandro Medrano, expert in Ambient Assisted Living and SW quality, especially in the area of health and security and involved in the development of user interaction frameworks.

Further members of the evaluation team

Task T5.1, Knowledge management: Bruno Jean-Bart and Olivier Meridat (Trialog)

Task T5.4, Replication guidelines: Florian Visser (RijnmondNet)

The other universities and research institutes of ReAAL (CNR, Fraunhofer, SINTEF, SmartHomes, UPV) are more actively involved in technical assistance to the pilots, and are responsible for the technical evaluation. For example, they demonstrate the showcases or are asked as experts to assess the ontologies. SmartHomes has the role of coordinating the showcase evaluation activities that take place in the lab setting in Eindhoven.

8.2. Responsibilities in execution of pilot evaluation

The execution of the evaluation is task 5.3 of WP5; the task leader is UPM. Table 30 lists the activities and who is responsible for each. We distinguish the roles of the evaluation team and the pilots. At pilot level, the pilot leader is responsible for ensuring that each task is executed; tasks can be delegated to other partners of the pilot, including subcontracted parties.

Table 30. Responsibilities per activity

Preparations
  Construction of data collection tools (questionnaire + instructions + cover letter; template + instructions; focus group script + template for reporting; template for blog) | Evaluation team: EUR, UPM | Pilots: provide feedback only
  Pilot-specific evaluation plan with timing | Evaluation team: UPM/EUR to help out | Pilots: all pilot leaders
  Translate questionnaires (translation and back-translation) | Evaluation team: EUR (NL, DK), UPM (ES, IT) | Pilots: SINTEF (NO), SL (DE), PUG (IT), BSA (ES), ODE (DK)
  Make data collection tools available in the knowledge portal and Livelink | Evaluation team: EUR, UPM, Trialog | Pilots: -
  Download data collection tools (if data entry is on paper) and instructions for how to use them from the knowledge portal | Evaluation team: - | Pilots: all pilot leaders
  Construct database with user info at pilot level (to keep track of the users; an anonymised version is needed for pilot statistics) | Evaluation team: UPM | Pilots: -
  Select sample for questionnaires | Evaluation team: UPM to check | Pilots: pilot leader
  Informed consent form | Evaluation team: - | Pilots: all pilot leaders
  Approval of ethics committee | Evaluation team: UPM to keep track | Pilots: all pilot leaders, if applicable

Data collection
  Fill & update database with user info (internal use; anonymised user info is provided to UPM) | Evaluation team: - | Pilots: all pilot leaders
  Distribute questionnaires to formal caregivers, informal carers and assisted persons (incl. drop-outs) | Evaluation team: EUR to provide cover letter | Pilots: all pilot leaders
  Distribute questionnaires to platform providers, application developers and application providers + data entry templates for platform providers, application developers, application providers & service providers | Evaluation team: UPM/EUR | Pilots: -
  Organise focus group meetings at pilot level (invite people, lead the interview, make audio recordings & photographs), including showcase | Evaluation team: - | Pilots: all pilot leaders
  Write digest on focus group meetings | Evaluation team: - | Pilots: all pilot leaders
  Organise focus group meetings at consortium level, including showcase focus groups | Evaluation team: UPM | Pilots: -
  Interview pilot leader | Evaluation team: EUR, UPM, CNR | Pilots: all pilot leaders need to be available
  Write blogs for T5.4 | Evaluation team: - | Pilots: all pilot leaders

Data storage & analysis
  Upload data to the knowledge portal and Livelink | Evaluation team: all evaluation partners who have collected data | Pilots: all pilot leaders
  Check data quality | Evaluation team: UPM, EUR | Pilots: all pilot leaders, before uploading
  Analyse data | Evaluation team: UPM, EUR | Pilots: -
  Send pilot-specific digest to the pilot leader for their periodic report | Evaluation team: UPM, EUR | Pilots: -

Reporting
  Construct template for periodic reports (IDx) | Evaluation team: UPM | Pilots: -
  Write evaluation paragraph in periodic operation reports (on progress, not results) | Evaluation team: - | Pilots: all pilot leaders
  Fill periodic report with raw data from the pilot (2 times) | Evaluation team: - | Pilots: all pilot leaders
  Write ReAAL evaluation, validation and evidence report (2 versions) | Evaluation team: all evaluation leaders | Pilots: provide feedback

8.3. Quality assurance

In order to ensure the appropriate level of independence and avoid bias as much as possible in the evaluation, the following strategy will be followed:

As much as possible validated questionnaires will be used.

Partners responsible for defining the evaluation framework, methodology and tools for data collection do not belong to any Pilot site partner, they come from Universities and Research institutions with a solid background in evaluation.

A workshop will be organized in which the experts from WP5 will train the pilot site partners in the use of the different tools in order to ensure a similar understanding of the evaluation goals and procedures across all pilot sites.

The evaluation team keeps track on selection bias by the pilots.

D5.2 – Evaluation Framework

Page 98 of 175

Experts from WP5 will be available to assist pilot partners when performing interviews or focus groups to ensure that methods and tools are used consistently within all pilot sites. In addition to that, if the evaluation team suspects that a pilot does not follow the procedure they will visit the pilot.

The evaluation team from WP5 will be in charge of analysing all collected data to extract conclusions, and both, data sources and results, will be made available for inspection to the Advisory Board to ensure that it has been as unbiased as possible.

Members of the Advisory Board have been asked to review the methodology and tools and will provide external expert assessment of the evaluation procedures. Furthermore, they will be invited to participate as experts in Stage II of the evaluation.


References

[1] Abadie, Fabienne, Christian Boehler, Maria Lluch, and Ramon Sabes-Figuera. 2014. Monitoring and Assessment Framework for the European Innovation Partnership on Active and Healthy Ageing (MAFEIP). Available via: https://ec.europa.eu/jrc/sites/default/files/jrc91162_0.pdf (Accessed March 10, 2015).

[2] Aanesen, M., A. T. Lotherington, and F. Olsen. 2011. Smarter elder care? A cost-effectiveness analysis of implementing technology in elder care. Health Informatics J 17 (3):161-72.

[3] Sixsmith, AJ. 2000. An evaluation of an intelligent home monitoring system. Journal of telemedicine and telecare 6 (2):63-72.

[4] Donabedian, Avedis. 1980. Explorations in quality assessment and monitoring. Volume I. Ann Arbor, MI: Health Administration Press.

[5] Eurostat, S. B. (2008). The life of women and men in Europe. A statistical portrait. Office for Official Publications of the European Communities, Luxembourg. Available via: http://epp.eurostat.ec.europa.eu/cache/ITY_OFFPUB/KS-80-07-135/EN/KS-80-07-135-EN.PDF (Accessed March 1st 2014)

[6] Sun, H., V. De Florio, N. Gui, and C. Blondia. 2010. The missing ones: Key ingredients towards effective ambient assisted living systems. Journal of Ambient Intelligence and Smart Environments. Available via: http://arxiv.org/ftp/arxiv/papers/1401/1401.2657.pdf (Accessed on March 1st 2014)

[7] Martin, S, G Kelly, WG Kernohan, B McCreight, and C Nugent. 2008. Smart home technologies for health and social care support. Cochrane Database Syst Rev 4.

[8] BRAID. 2010. ICT and Ageing Scenarios. Available via http://www.braidproject.eu/sites/default/files/Ageing_scenarios.pdf (Accessed March 1st 2014)

[9] ReAAL DoW, 2012

[10] Rogers, R, Y Peres, and W Müller. 2010. Living longer independently–a healthcare interoperability perspective. e & i Elektrotechnik und Informationstechnik 127 (7-8):206-211.

[11] Rojas, Stephanie Vergara, and Marie-Pierre Gagnon. 2008. A systematic review of the key indicators for assessing telehomecare cost-effectiveness. Telemedicine journal and e-health : the official journal of the American Telemedicine Association 14 (9):896-904.

[12] Koopmanschap, Marc A, Job A van Exel, Bernard van den Berg, and Werner BF Brouwer. 2008. An overview of methods and applications to value informal care in economic evaluations of healthcare. Pharmacoeconomics 26 (4):269-280.

[13] Orbell, Sheina, Nicholas Hopkins, and Brenda Gillies. 1993. Measuring the impact of informal caring. Journal of Community & Applied Social Psychology 3 (2):149-163.


[14] Presentation Peter Wintlev-Jensen at AAL Forum 2013. Available via: https://app.box.com/s/lsk03nza5rnkhcgpnm5u/1/1284112873/11501848887/1 (Accessed on March 1st 2014)

[15] Garcia-Valverde, T., F. Campuzano, E. Serrano, A. Villa, and J. A. Botia. 2012. Simulation of human behaviours for the validation of Ambient Intelligence services: A methodological approach. Journal of Ambient Intelligence and Smart Environments 4 (3):163-181.

[16] AAL Forum 2012 Proceedings. Tomorrow in Sight: From Design to Delivery. Proceedings of the 4th AAL Forum, Eindhoven, the Netherlands, 24-27 September 2012. Available via http://www.aal-europe.eu/wp-content/uploads/2013/11/AAL_Forum_2012_Eindhoven_Proceedings.pdf (Accessed January 29th 2014).

[17] Boudreau, K. 2010. Open Platform Strategies and Innovation: Granting Access vs. Devolving Control. Management Science 56(10): 1849-1872.

[18] Katz, Michael L., and Carl Shapiro. 1986. Technology Adoption in the Presence of Network Externalities. Journal of Political Economy 94 (4):822-841.

[19] Almeida, Fernando, José Oliveira, and José Cruz. 2011. Open standards and open source: enabling interoperability. Int. J. Soft. Eng. App 2 (1).

[20] Kung, Antonio, and Bruno Jean-Bart. 2010. Making AAL platforms a reality. In Ambient Intelligence: Springer.

[21] Varian, Hal R. 2001. Economics of information technology. University of California, Berkeley, http://people.ischool.berkeley.edu/~hal/Papers/mattioli/mattioli.html.

[22] Baarsma, Barbara, Carl Koopmans, José Mulder, and Corine Zijderveld. 2004. Kosten en baten van open standaarden en open source software in de Nederlandse publieke sector. Amsterdam, the Netherlands.(Costs and benefits analysis of the adoption of open source software for the Dutch government.).

[23] Porter, M. E. What Is Value in Health Care? N Engl J Med 2010; 363:2477-2481

[24] Kidholm, Kristian, Anne Granstrøm Ekeland, Lise Kvistgaard Jensen, Janne Rasmussen, Claus Duedal Pedersen, Alison Bowes, Signe Agnes Flottorp, and Mickael Bech. 2012. A model for assessment of telemedicine applications: MAST. International journal of technology assessment in health care 28 (01):44-51.

[25] Woodall, T. Conceptualizing "value for the customer": an attributional, structural and dispositional analysis. Academy of Marketing Science Review, 2003 (12).

[26] Institute of Medicine. Value in Health Care: Accounting for Cost, Quality, Safety, Outcomes, and Innovation. Workshop Summary. December 16, 2009

[27] Shapiro, Carl & Hal Varian. 1999. Information Rules: A Strategic Guide to the Network Economy. Harvard Business Press.


[28] MethoTelemed. 2010. Final Study Report. Available via: http://www.mast-model.info/Downloads/MethoTelemed_final_report_v2_11.pdf (Accessed March 1st 2014)

[29] Delone, William, and Ephraim McLean. 2003. The DeLone and McLean Model of Information Systems Success: A Ten-Year Update. Journal of Management Information Systems 19(4):9-30.

[30] Davis, F.D. 1989. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), p. 319-340.

[31] Ajzen, I. & M. Fishbein. 1980. Understanding attitudes and predicting social behavior. Englewood Cliffs, NJ: Prentice-Hall.

[32] Venkatesh, V., M.G. Morris, G.B. Davis & F.D. Davis. 2003. User acceptance of information technology: toward a unified view, MIS Quarterly, 27(3), p. 425-478.

[33] Sponselee, Anne-mie. 2013. Acceptance and Effectiveness of Telecare Services from the End-User Perspective. Thesis. Eindhoven: Technical University Eindhoven

[34] Hays, R.D. & L.S. Morales. 2001. The RAND-36 measure of health-related quality of life. Ann Med, 33(5), p. 350-357. Available via: http://www.rand.org/pubs/reprints/RP971.html (Accessed March 1st 2015)

[35] Herdman, M, Gudex C, Lloyd A, Janssen M, Kind P, Parkin D, Bonsel G, Badia X. 2011. Development and preliminary testing of the new five-level version of EQ-5D (EQ-5D-5L). Qual Life Res. 20(10), p. 1727-36. (see also: http://www.euroqol.org)

[36] Flynn T, Chan P, Coast J, Peters TJ. 2011. Assessing quality of life among British older people using the ICEPOP CAPability (ICECAP-O) measure. Applied Health Economics and Health Policy, Volume 9, Issue 5, p. 317-329. (see also http://www.birmingham.ac.uk/research/activity/mds/projects/HaPS/HE/ICECAP/ICECAP-O/index.aspx)

[37] Brouwer WB, van Exel NJ, van Gorp B & Redekop WK. 2006. The CarerQol instrument: a new instrument to measure care-related quality of life of informal caregivers for use in economic evaluations. Qual Life Res. 15(6), p. 1005-21.

[38] Laugwitz, B.; T. Held, and M. Schrepp. 2008. Construction and evaluation of a user experience questionnaire. In: Holzinger, A. (Ed.): USAB 2008, LNCS 5298, pp. 63-76. (see also: http://www.ueq-online.org/)

[39] Lee, Yang W., Diane M. Strong, Beverly K. Kahn, Richard Y. Wang. 2002. AIMQ: a methodology for information quality assessment. Information & Management 40, p. 133-146.

[40] RATER questionnaire. Available via: http://www.ec.tuwien.ac.at/~dorn/Courses/SDMC/RATER%20Questionnaire.pdf (Accessed 10 March 2015)

[41] Anderson, N. R., and M.A. West. 1996. The team climate inventory: Development of the TCI and its applications in teambuilding for innovativeness. European Journal of Work and Organizational Psychology, 5(1), p. 53-66.

[42] Netten, A., J. Beadle-Brown, J. Caiels, J. Forder, J. Malley, N. Smith, B. Trukeschitz, A. Towers, E. Welch, and K. Windle. 2011. Adult Social Care Outcomes Toolkit v2.1: Main guidance, PSSRU Discussion Paper 2716/3, Personal Social Services Research Unit, University of Kent, Canterbury. (see also: http://www.pssru.ac.uk/ascot/index.php)

[43] Drummond, Michael F, Mark J Sculpher, and George W Torrance. 2005. Methods for the economic evaluation of health care programs: Oxford university press.

[44] Free/Libre Open Source Software. 2002. Available via: http://www.flossproject.org/report/FLOSS_Final0.pdf (Accessed March 1st 2014)

[45] Sisk, J. E., and J. H. Sanders. 1998. A proposed framework for economic evaluation of telemedicine. Telemedicine journal : the official journal of the American Telemedicine Association 4 (1):31-37.

[45] Phillips, Jack J, and Patti P Phillips. 2002. Technology’s return on investment. Advances in Developing Human Resources 4 (4):512-532.

[46] Institute of Medicine. 2001. Crossing the quality chasm. A new health system for the 21st century. Washington: National Academy Press


Appendix A. Detailed indicator list

1. Overall structure of the indicator descriptions

The specified indicator table has the following structure:

Names and codes

We have chosen names for the indicator groups and subindicators that are easy to understand. The way an indicator is measured may be more complex and technical; this is only relevant for those interested in indicator construction.

Each subindicator has a code (see Table 31). The leading characters refer to the assessment domain, the digit to the indicator group, and the final letter to the subindicator within that group (a small illustrative parser follows Table 31).

Table 31. Codes

Code Assessment domain

BA Background

TA Technical aspects

UP User perceptions

OC Outcomes

EA Economic aspects

OA Organizational aspects

CA Contextual aspects

SHOW Showcases
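To make the structure concrete, the following minimal sketch, which is purely illustrative and not part of the framework itself, validates and decomposes a subindicator code. Note that SHOW is the one domain code longer than two characters:

```python
import re

# Assessment domain codes from Table 31.
DOMAINS = {"BA", "TA", "UP", "OC", "EA", "OA", "CA", "SHOW"}

# Domain letters, underscore, indicator-group digit, subindicator letter.
CODE_RE = re.compile(r"^(?P<domain>[A-Z]+)_(?P<group>\d)(?P<sub>[a-z])$")

def parse_code(code: str) -> dict:
    """Split a subindicator code such as 'BA_1a' into its three parts."""
    match = CODE_RE.match(code)
    if match is None or match["domain"] not in DOMAINS:
        raise ValueError(f"not a valid subindicator code: {code!r}")
    return match.groupdict()

print(parse_code("BA_1a"))    # {'domain': 'BA', 'group': '1', 'sub': 'a'}
print(parse_code("SHOW_2m"))  # {'domain': 'SHOW', 'group': '2', 'sub': 'm'}
```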


Measurement

For each subindicator we describe how it should be measured, and whether the measurement is based on a validated instrument or on self-constructed questions. It is important to note that most indicators are not measured in isolation: we constructed questionnaires for the various stakeholder groups and for various moments in time, covering many indicators at once. If the instrument used to measure an indicator is an existing (validated) questionnaire, or is based on an existing model, a reference is given.

Timing

The evaluation of ReAAL has two stages: the pilot and showcase evaluation and the ReAAL impact validation.

The pilot evaluation can be divided into two phases:

1. Adaptation: this runs from preparation and adaptation to universAAL, through lab testing and user testing.

2. Deployment: this runs from deployment to operation.

Each phase has at least a T0 and a T1 measurement. T0 is at the start of the adaptation or deployment, and T1 at the end. Extra measurements, for example intermediate measures, are referred to as T* (see Table 32; a small illustrative expansion of these codes follows the table).

Table 32. Timing

Evaluation phase Code for timing

Description

Adaptation AT0 Start of adaptation

AT* During adaptation

AT1 End of adaptation

Deployment DT0 Start of deployment

DT* During operation

DT1 End of deployment
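Purely as an illustration of this naming scheme (the code below is ours, not part of the framework), each timing code can be expanded mechanically:

```python
# First character: evaluation phase; remainder: measurement moment (Table 32).
PHASES = {"A": "adaptation", "D": "deployment"}
MOMENTS = {"T0": "start", "T*": "intermediate measurement", "T1": "end"}

def describe_timing(code: str) -> str:
    """Expand a timing code such as 'AT0' or 'DT*' into words."""
    return f"{MOMENTS[code[1:]]} of {PHASES[code[0]]}"

for code in ("AT0", "AT*", "AT1", "DT0", "DT*", "DT1"):
    print(code, "=", describe_timing(code))
```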


Data collection and target group

Usually the data is collected via a questionnaire, but for some indicators data may be collected in a different way, for example using a template or focus groups. Other data is readily available in the Knowledge portal. We describe here where the data should be collected, i.e. from which stakeholder, and which instrument will be used. For an overview of instruments, please refer to paragraph 7.3.

2. Background of the assessment

Table 33. Subindicators Background of the assessment

Code Name of Subindicator

How to be measured? Timing Data collection & target group

BA_1a Assisted person Minimal Data Set:

Gender

Year of birth

Name of Health/Social service provider

Start date in project

End date in project

Which application(s)

Real life user test:

Age

Gender

T0 questionnaire:

Age

Gender

Living status

Highest level of education

Experience with technology

Which application(s)

Analysis on individual application level and pilot level

AT1, DT0 (+MDS continuous)

Real life test User Survey

Minimal Data Set

Assisted person T0 Survey

BA_1b Informal carer Minimal Data Set:

Gender

Year of birth

Name of Health/Social service provider

Start date in project

End date in project

Which application(s)

T0 questionnaire:

Age

Gender

Relationship to care recipient

Highest level of education

Experience with technology

Which application(s)

How long have you been an informal caregiver?

On a weekly basis, how many hours do you provide care?

DT0 (+MDS continuous)

Minimal Data Set

Informal carer T0 Survey

BA_1c Formal caregiver Minimal Data Set:

Name of Health/Social service provider

Profession

Start date in project

End date in project

Which application(s)

T0 questionnaire:

Age

Gender

Profession

How many hours do you work on a weekly basis?

How long have you been working for this organization?

Experience with technology

Which application(s)

DT0 (+MDS continuous)

Minimal Data Set

Formal caregiver T0 Survey

BA_1d Application developer Age

Gender

Name of company

Role in project

Experience in AAL field

Experience with programming languages and IDEs

AT0, AT1 Application developer T0 Survey

Application developer T1 Survey

BA_2a Description of application

Aim

Target group

BRAID scenario

Hardware components

Software components

Methodology followed to design the application

AT1

Pilot template

Description in Knowledge portal

BA_2b Description of service Aim

Target group

Role of application

Role of formal caregiver, informal carer, assisted person

AT1 Pilot template

Description in Knowledge portal

BA_3a Components of universAAL

Description of the components of universAAL and an overview of which pilot uses which component

Which operating system is universAAL deployed on?

Is it used as frontend or backend?

Which platform containers are you using?

AT1 Application developer T1 Survey

Description in Knowledge portal

BA_4a Pilot set-up Description of each pilot from perspective of its set-up; whether or not the applications were already available or if another platform was used. Three set-ups are distinguished:

Building universAALized applications from scratch

Porting existing applications

Switching platforms

AT0 Pilot template

Description in Knowledge portal

BA_4b Ontology Description of the way an application in a pilot will be universAALized. The ontology visualises all relevant elements in a diagram. Each pilot will have several diagrams. The ontology is the best indicator for universalization quality, so the description will be assessed by universAAL experts.

AT1 Ontology description in Knowledge portal

3. Technical aspects

Table 34. Subindicators Technical aspects

TA_1a Platform test results The information coming from technical software analysis (Hudson) from the Annex of the Lab testing report and the laboratory questionnaire will be used.

AT1 (lab test) Test report WP3

Report: ID2.1

Laboratory questionnaire (for Smart Homes)

TA_1b Number of open bug reports

Count the number of open bug reports

AT1, DT1

Come from ID2.1

Pilot individual issue reporting

Test report WP3

TA_1c Platform feature request

Count the number of (unique) open feature requests at 3 times in the project: start of adaptation, end of adaptation, end of operation

AT0, AT1, DT1

Report: ID2.1


TA_2a universAALisation of the application(s)

Assessment by universAAL experts of each pilot ontology, as an indicator for the quality and level of universAALisation.

AT1 Assessment template

TA_2b Application test results

Summary of the test reports of the lab testing

AT1 (lab test)

Test report WP3

Automatic tests at pilot level (if available)

4. User perceptions

Table 35. Subindicators User perceptions

Code Name of Subindicator

How to be measured?

Timing Data collection & target group

UP_1a Reliability of the application

Question to end user:

How often did you encounter technical problems when using the application?

Statements for end user:

I expect [application x] is reliable in use

[application x] is reliable in use

AT1, DT*, DT1 Assisted person T0 Survey

Informal carer T0 Survey

Formal caregiver T0 Survey

Assisted person T1 Survey

Informal carer T1 Survey

Formal caregiver T1 Survey

UP_1b Usability of uAAL Self-constructed questions:

What are the main issues you have encountered using the features of universAAL?

What are the main issues you have encountered using the components of universAAL?

What is the learning curve?

See also UP_2a and UP_2b

AT1 Application developer T1 Survey

Application provider T1 Survey (or Focus Group)

Focusgroup application developer + platform provider end of adaptation

Test report WP3 (from emails and other informal communications)

Interview with SH

UP_1c Usability of the application

Usability in Real life test:

UEQ questionnaire (integral) [38]

Expected usability:

I expect [application x] is easy to use

Questions about experienced usability (based on the Technology Acceptance Model, TAM [30]):

How easy do you find the application to use? Tick all boxes that apply

Did you need any assistance during the usage of the application?

Did your care recipient need any assistance during the usage of the application?

Usability is a topic in the focus groups.

Analysis on individual application level; comparison between applications with similar goals

AT1, DT*, DT1

Real life test User Survey

Assisted person T1 Survey

Informal carer T1 Survey

Formal caregiver T1 Survey

Focus group with assisted person and informal carer during deployment

Focus group with Formal caregiver during deployment

UP_1d Usefulness of uAAL Questions about acceptance of universAAL, self-constructed but based on TAM [30]:

I would find universAAL useful in my job.

Using universAAL enables me to accomplish tasks more quickly.

Using universAAL increases my productivity.

If I use universAAL, I will increase my chances of getting a raise.

Using universAAL is a bad/good idea.

universAAL makes work more interesting.

Working with universAAL is fun.

I like working with universAAL.

People who influence my behavior think that I should use universAAL.

People who are important to me think that I should use universAAL.

The senior management of this business has been helpful in the use of universAAL.

In general, the organization has supported the use of universAAL.

I have the resources necessary to use universAAL.

I have the knowledge necessary to use universAAL.

universAAL is not compatible with other technologies I use.

A specific person (or group) is available for assistance with difficulties related to universAAL.

I could complete a job or task using universAAL:

o If there was no one around to tell me what to do as I go

o If I could call someone for help if I got stuck

o If I had a lot of time to complete the job for which the software was provided

o If I had just the built-in help facility for assistance

I feel apprehensive about using universAAL.

It scares me to think that using universAAL could affect the quality of my code.

I hesitate to use universAAL for fear of making mistakes I cannot correct.

universAAL is somewhat intimidating to me.

I intend to use universAAL in the next year.

I predict I would use universAAL in the next year.

I plan to use universAAL in the next year.

AT1, DT1

Application developer T1 Survey

Application provider T0 Survey

Application provider T1 Survey

Survey/Focusgroup application developer

Survey/Focusgroup platform provider

Survey/Focusgroup application provider

Survey/Focusgroup application provider end of deployment

Test report WP3

UP_1e Usefulness of application

Usefulness is based on expectations and experience. Questions are based on UTAUT and the Sponselee questionnaire [33].

Expectations. Statements for assisted person:

I expect [application x] is fun to use

I expect [application x] is useful for me

Statements for informal carer:

I expect [application x] is useful for the tasks I perform as informal caregiver

I expect [application x] to be a valuable addition to the tasks I perform as informal caregiver

I expect [application x] will have a positive effect on my workload as informal caregiver

I expect [application x] will benefit my quality of life

I expect [application x] will benefit the quality of life of the care recipient

I expect [application x] will benefit my relationship with the care recipient

I expect [application x] to enhance the independence of the care recipient

I expect [application x] will benefit the health of the care recipient

Statements for formal caregiver:

I expect [application x] is useful for my work

I expect [application x] to be a valuable addition to my work

I expect [application x] will improve the quality of my work

I expect [application x] will have a positive effect on my workload

I expect [application x] will benefit my relationship with clients

I expect [application x] to enhance the independence of clients

I expect [application x] will benefit the quality of life of clients

I expect [application x] will benefit the health of clients

Experiences. Statements for assisted person:

[application x] is fun to use

[application x] is useful for me

Statements for informal carer:

[application x] is fun to use

[application x] is useful for my tasks as informal caregiver

[application x] fits my tasks as informal caregiver

[application x] fits my own standards

Statements for formal caregiver:

[application x] is fun to use

[application x] is useful for my work

[application x] fits my professional norms

[application x] fits my work routines

In your opinion, for which type of client did the application work best?

See, for other statements, the indicator group "Value". Analysis on individual application level; comparison between applications with similar goals

AT1, DT1

Assisted person T0 Survey

Informal carer T0 Survey

Formal caregiver T0 Survey

Assisted person T1 Survey

Informal carer T1 Survey

Formal caregiver T1 Survey

Focusgroup Formal caregiver (maybe measured from SCs Survey)

UP_1f Role of social environment

Assisted person:

Is your family supportive about the application or service? Do they stimulate you to use it? (T1)

Informal carer:

Was your social environment supportive about the use of the application? Do they stimulate you to use it?

Do you stimulate your care recipient to use the application?

Formal caregiver:

Was your supervisor supportive about the application? Does (s)he stimulate you to use it?

Were your colleagues supportive about the application? Do they stimulate you to use it?

DT*, DT1

Assisted person T1 Survey

Informal carer T1 Survey

Formal caregiver T1 Survey

UP_2a Use of uAAL features In questionnaire for application developers: T0 and T1

Please specify your level of confidence per feature

What features have you used so far?

Please rate the learning curve you have experienced

Please rate your perceived complexity

Please rate the expected fitness of the features to solve your problem

Please rate the actual fitness of the features to solve your problems

Assessment by the universAAL experts of the formal reports to check per pilot which uAAL features have been used

AT0, AT1 Application developer T0 Survey

Application developer T1 Survey

Formal Ontology/Design Report

UP_2b Use of uAAL components

In questionnaire for application developers: T0 and T1

Please specify your level of confidence per component of universAAL

What components have you used so far?

Please rate the learning curve you have experienced

Please rate your perceived complexity of the components

Please rate the expected fitness of the components

Please rate the actual fitness of the components

Assessment by the universAAL experts of the formal reports to check per pilot which uAAL components have been used

AT0, AT1 Application developer T0 Survey

Application developer T1 Survey

Formal Ontology/Design Report

UP_2c Use of the application Minimal Data Set

Which application(s)

Reason for ending

Questionnaire for assisted person, informal carer, formal caregiver (based on the Sponselee questionnaire [33]):

Which applications do you not use anymore?

Why did you stop using it/them?

Are there things regarding the application that need to be changed, in order to use it?

How often did you use the application in the past month?

When do you use the application?

Analyses on individual application level (number of users and non-users, etc.); comparison between applications with similar goals; total ReAAL statistics

DT*, DT1

Minimal Data Set

Assisted person T1 Survey

Informal carer T1 Survey

Formal caregiver T1 Survey

UP_3a Satisfaction with the uAAL platform

Questions to application providers:

How likely is it that you will universAALize other applications in your portfolio?

How likely is it that you will use universAAL to build new applications?

I intend to use universAAL in the next year.

I predict I would use universAAL in the next year.

I plan to use universAAL in the next year.

DT1 Application developer T1 Survey

Application developer T2 Survey

Focusgroup application developer + platform provider end of adaptation

UP_3b Satisfaction with (formal/informal) care

2 questions at T0 and T1:

Overall, how satisfied or dissatisfied are you with the care and support services you receive?

Overall, how satisfied or dissatisfied are you with the informal care you receive?

DT0, DT*, DT1 Assisted person T0 Survey

Assisted person T1 Survey

UP_3c Satisfaction with the application/service

Questions on application and service satisfaction are based on the RATER model [40].

3 questions for assisted person:

Overall, how satisfied are you with the application or service?

Would you recommend this application to a friend?

Do you have any suggestions for improvement?

4 questions in questionnaire to formal caregiver:

Overall, how satisfied are you with the application or service?

Would you recommend this application to your colleagues?

Would you recommend clients to use this application?

Do you have any suggestions for improvement?

4 Questions in questionnaire to informal carer:

Overall, how satisfied are you with the application or service?

Would you recommend this application to other caregivers and their care recipients?

Would you recommend your care recipient to use or continue to use this application?

Do you have any suggestions for improvement?

Satisfaction is also a topic for the focus groups. Analysis on individual application level; comparison between applications with similar goals; total ReAAL statistics

DT*, DT1

Assisted person T1 Survey

Informal carer T1 Survey

Formal caregiver T1 Survey

Focusgroup Assisted person + informal carer during deployment

Focusgroup Formal caregiver

UP_3d Information quality uAAL documentation

Statements based on AIMQ [39]

The universAAL documentation is easily accessible.

The amount of information is not sufficient for my needs.

The documentation is trustworthy.

This information covers the needs of my tasks.

The representation of the information is compact and concise.

The information is represented in a consistent format.

The documentation is easy to manipulate to meet my needs.

The information is correct.

The information is easily interpretable.

The information is objective and impartial.

The documentation is useful to my work.

The information has a reputation for quality.

The information is sufficiently up-to-date for my work.

The information is easy to comprehend.

Please rate the amount of technologies, concepts or material which is required before attempting to learn each feature.

What did you like about the documentation?

What did you dislike?

How can the documentation be improved?

These topics will also be discussed in the focus groups.

AT1 Application developer T0 Survey

Application developer T1 Survey

Application provider T0 Survey/Focusgroup

Application developer T1 Focusgroup

Platform provider Survey/Focusgroup

Focusgroup application provider end of adaptation

UP_3e Information quality application/service level

Questions based on RATER [40], to be used for the ISS construct "information quality":

Did you feel there was enough information provided about the application? (T0)

Are you satisfied about the written information you received about the application? (for example a user manual)

Analysis on individual application level

DT0 Assisted person T0 Survey

Informal carer T0 Survey

Formal caregiver T0 Survey

UP_3f Service quality uAAL Self-constructed statement/questions inspired by RATER questionnaire [40] about support from the universAAL Staff

AT1 Application developer T1 Survey

D5.2 – Evaluation Framework

Page 115 of 175

(=support team):

Relationship with universAAL Staff

Processing of requests for changes

Degree of training provided to developers

Attitude of the universAAL support staff

Reliability of information provided by support staff (email, forum, bug tracker,...)

Relevancy of information provided by support staff (email, forum, bug tracker,...)

Accuracy of information provided by support staff (email, forum, bug tracker,...)

Precision of information provided by support staff (email, forum, bug tracker,...)

In your experience, how much time did it take for the universAAL support Staff to solve the experienced issues?

UP_3g Service quality application/service level

Assisted person:

Did you think the training/introduction of the applications was useful? (T0)

Did you feel there was enough support for the applications from the care organization? (T1)

Are you satisfied with the helpdesk for technical questions, processing technical problems or other questions? (T1)

Informal carer:

Did you think the training/introduction of the applications was useful? (T0)

Did you feel there was enough support for the applications from the care organization? (T1)

Are you satisfied with the helpdesk for technical questions, processing technical problems or other questions? (T1)

Formal caregiver:

Did you think the training/introduction of the applications was useful? (T0)

Did you feel there was enough support from the care organization? (T1)

Are you satisfied with the helpdesk for technical questions, processing technical problems or other questions? (T1)

Analysis on service provider level

DT0, DT1 Assisted person T0 Survey

Assisted person T1 Survey

Informal carer T0 Survey

Informal carer T1 Survey

Formal caregiver T0 Survey

Formal caregiver T1 Survey


UP_4a Experienced value of uAAL

Self-constructed questions:

How much time do you think the universAAL features save on the overall development of your solution, compared to any previous development?

How would you rate the usefulness, potential or value of the features of uAAL in the AAL domain?

How much time do you think the uAAL components save on the overall development of your solution, compared to any previous development?

How would you rate the usefulness, potential or value of the uAAL components?

AT1, DT1 Application developer T0 Survey

Application developer T1 Survey

Focusgroup application developer/application provider end of deployment

Focusgroup SCs

UP_4b Fit with needs of assisted person

Questions about needs at T0:

What aspects of your life, your wellbeing, or your health do you want to improve or sustain?

How would you rate your current situation on these aspects on a range of 1 to 10?

Questions about needs at T1:

What aspects of your life, your wellbeing, or your health did you want to improve or sustain?

How would you rate your current situation on these aspects on a range of 1 to 10?

DT*, DT1 Assisted person T0 Survey

Assisted person T1 Survey

UP_4c Experienced value of the application / service

Assisted person:

How much value does the application have for you?

How has the application affected:

o your feeling of safety

o your feeling of comfort in your home

o your feeling of being in control of your life

o your dependency

o your physical activities

o management of your chronic condition

o your health or wellbeing

Has the application affected the contact you have with formal caregivers?

Informal carer:

How much value does the application have for you?

What was the impact of the application on your workload?

What was the impact of the application on the content of your tasks as informal caregiver?

What was the impact of the application on the quality of the contact you have with your care recipient?

What was the impact of the application on the duration of the contact you have with your care recipient?

[application x] enhances the independence of the care recipient

[application x] benefits the quality of life of the care recipient

[application x] benefits the health of the care recipient

Formal caregiver:

How much value does the application have for you?

What was the impact of the application on your workload?

What was the impact of the application on the content of your work?

What was the impact of the application on the quality of the contact you have with your client?

What was the impact of the application on the amount of contact you have with your client?

In my opinion...

[application x] enhances the independence of clients

[application x] benefits the quality of life of clients

[application x] benefits the health of clients

DT*, DT1

Assisted person T1 Survey

Informal carer T1 Survey

Formal caregiver T1 Survey

UP_4d Experienced value of interoperability for end user

Question to end user:

You use multiple applications. How do you value the way these applications work together?

DT*, DT1 Assisted person T1 Survey

Informal carer T1 Survey

Formal caregiver T1 Survey

5. Outcomes

Table 36. Subindicators Outcomes

Code Name of Subindicator

How to be measured? Timing Data collection & target group

OC_1a Health of the assisted person

3 questions from RAND-36 [34]:

In general, my health is

Compared to one year ago, how would you rate your health in general now

Health rating on 0-100 scale

Chronicity:

Do you have a chronic disease or handicap?

DT0, DT*, DT1 Assisted person T0 Survey

Assisted person T1 Survey


OC_1b Health consumption of the assisted person

Do you receive formal care?

Do you receive informal care (incl. care from volunteers)?

Were you admitted to a hospital or nursing home in the past 6 months?

DT0, DT*, DT1 Assisted person T0 Survey

Assisted person T1 Survey

OC_2a Health related quality of life of assisted person

EQ-5D-5L (integral) [35], statements on:

Mobility

Self-care

Usual activities

Pain / discomfort

Anxiety / depression

DT0, DT*, DT1 Assisted person T0 Survey

Assisted person T1 Survey

OC_2b Wellbeing related quality of life of assisted person

ICECAP-O (integral) [36], statements on

Love and friendship

Thinking about the future

Doing things that make you feel valued

Enjoyment and pleasure

Independence

DT0, DT*, DT1 Assisted person T0 Survey

Assisted person T1 Survey

OC_2c Quality of life of informal carer

CarerQoL [37]

I have [blank space] of satisfaction in the fulfillment of my care tasks

I have [blank space] relational issues with the care recipient (like communication problems)

I have [blank space] problems with my mental health (like stress, anxiety, worrying about the future)

I have [blank space] problems in combining my daily activities with my care tasks

I have [blank space] financial problems in performing my care tasks

I have [blank space] of support in performing my care tasks

I have [blank space] problems with my physical health

Can you indicate in the table below how happy you feel at this moment?

DT0, DT*, DT1 Informal carer T0 Survey

Informal carer T1 Survey

OC_3a Independent living of the assisted person

Self-constructed, inspired by ASCOT [42], statements on:

Taking care of myself

Having social contacts with people I like

Feeling safe

Feeling in control of my life

Feeling comfortable in my home

DT0, DT*, DT1 Assisted person T0 Survey

Assisted person T1 Survey


OC_4a Adverse events using the uAALized application

Assisted person:

Have there been situations when you did not feel safe, because the application did not work properly?

Informal carer:

Did you experience any problems with the application that (might) have affected your safety or the safety of the care recipient? Examples: missed alerts, etc.

Formal caregiver:

Did you experience any problems with the application that (might) have affected your safety or the safety of the client? Examples: delays in the provision of care, missed alerts, giving the wrong care, etc.

DT*, DT1 Assisted person T1 Survey

Informal carer T1 Survey

Formal caregiver T1 Survey

OC_4b Falls Have you fallen in your home or outside in the past 6 months? DT0, DT*, DT1 Assisted person T0 Survey

Assisted person T1 Survey

6. Economic aspects

Table 37. Subindicators Economic aspects

Code Name of Subindicator

How to be measured? Timing Data collection & target group

EA_1a Cost of uAAL platform deployment

From the perspective of the organization(s) responsible for uAAL deployment in the future, the following costs should be estimated:

Training and supporting users

Maintenance

Further development

Costs include: Personnel cost (senior and junior), cost of materials

DT1

Platform provider cost template

Other sources: business plans from WP6

EA_1b Cost of universAALisation

From the perspective of the total pilot, the cost of universAALisation should be estimated as precisely as possible, using the cost claims and other data:

Getting to know uAAL

Programming and testing

Costs include: Personnel cost (senior and junior), cost of materials, other costs (a simple aggregation sketch follows this table)

AT1

Pilot cost template

EA_1c Cost of deployment and operation of AAL

From the perspective of the total pilot, the cost of deployment should be estimated as precisely as possible, using the cost claims and other data (for the situation without funding):

Implementation

Training and supporting users

Maintenance

Costs include: Personnel cost (senior and junior), cost of materials, other costs

DT1

Pilot cost template

EA_1d Cost of service

From the perspective of the total pilot, the cost of the service as a whole (with and without the universAALized applications) should be estimated as precisely as possible. Costs include: Personnel cost (senior and junior), cost of materials, other costs

DT1

Service provider template

EA_1e Cost of importing an application from another pilot

Here mainly the costs of adaptation are relevant, for example the translation to another language or the connection to another brand of sensor. Costs include: Personnel cost (senior and junior), cost of materials, other costs

DT1 Pilot cost template

EA_2a Revenues for platform provider

In the course of the ReAAL project this will be the payment by the EU (if any). Actual revenues in ReAAL should be compared to expected revenues without EU funding.

DT1 Platform provider template

EA_2b Revenues for application provider

Revenues from the service providers (or directly from the users) who pay for using the application. Actual revenues in ReAAL should be compared to expected revenues without EU funding.

DT1 Pilot cost template

EA_2c Revenues for service provider

Revenues because of health insurance or direct payments by the clients for using the service. Actual revenues in ReAAL should be compared to expected revenues without EU funding.

DT1 Service provider template

EA_3a Willingness to pay for uAAL

Application provider:

How much would you MAXIMALLY be willing to pay for using the universAAL platform?

DT1 Application provider Survey T1

EA_3b Willingness to pay for universalized applications

Assisted person:

How much would you MAXIMALLY be willing to pay for this application?

Would you or your informal carer purchase this application if it were on the market for that price?

Informal carer:

How much would you MAXIMALLY be willing to pay for this application?

Would you or your care recipient purchase this application if it were on the market for that price?

Formal caregiver:

Do you think clients would pay for this application?

Service provider:

How much would you MAXIMALLY be willing to pay for this application?

Would you or your care recipient purchase this application if it were on the market for that price?

DT1

Assisted person T1 Survey

Informal carer T1 Survey

Formal caregiver T1 Survey

Health / Social Service provider T1 Survey

EA_4a Market value of universAALized application

Please estimate the market value of your application before and after universAALisation

DT1 Application provider T2 Survey

7. Organizational aspects

Table 38. Subindicators Organizational aspects

Code Name of Subindicator

How to be measured? Timing Data collection & target group

OA_1a Fit with work processes of service provider

Statements for service provider. T0:

I expect the application(s) to fit the work processes of our staff

I expect our staff will have to adjust their work processes to use the application(s) optimally

I expect our technical staff (IT department) will have to adjust their work processes to deploy the application(s) optimally

T1

Our staff had to adjust their work processes more than expected

Our technical staff (IT department) had to adjust their work processes more than expected

On a scale from 0-10, where 10 means optimal fit, how would you rate the fit of the application(s) with your work processes?

What is needed to have the application(s) optimally fit your work processes?

DT0, DT1 Service provider T0 Survey

Service provider T1 Survey


OA_1b Fit with legacy systems of application provider

Questions/statements for application provider:

On which platform(s) does your application portfolio run?

Fit with legacy systems is a prerequisite for any AAL investment of my organization

I expect universAAL to fit directly with my legacy systems and infrastructure

I expect much work is needed to align universAAL with my legacy systems

A fit with legacy systems is, at this point, not very important for my organization.

On a scale from 0-10, where 10 means optimal fit, how would you rate the fit of universAAL with your legacy systems?

What is needed to have universAAL optimally fit your legacy systems?

AT1, DT1 Application provider T0 Survey

Application provider T1 Survey

OA_1c Fit with legacy systems of service provider

Questions/Statements for service provider:

Which systems and infrastructure do you already have (only list those deemed relevant for the AAL service)?

Fit with legacy systems is a prerequisite for any AAL investment of my organization

I expect the universAALized applications to fit directly with my legacy systems and infrastructure

I expect much work is needed to align the universAALized applications with my legacy systems

A fit with legacy systems is, at this point, not very important for my organization.

On a scale from 0-10, where 10 means optimal fit, how would you rate the fit of the application(s) with your legacy systems?

What is needed to have universAALized applications fit optimally with your legacy systems and infrastructure?

DT0, DT1 Service provider T0 Survey

Service provider T1 Survey

OA_1d Innovation climate of application provider

Statements for application provider (management), selected from innovation climate scale by Anderson & West [41]:

This organization is open and responsive to change

This organization is always moving toward the development of new answers

In our organization it is normal to check if we’ve reached what we wanted to reach

Within this organization we work in an efficient manner

universAAL fits with innovative organizations in our domain

DT0

Application provider T0 Survey


OA_1e Innovation climate of service provider

Statements for formal caregiver:

As a client I would be happy to have care provided by this organization

There is an emphasis on client-focused care in this organization

This organization is open and responsive to change

This organization is always moving toward the development of new answers

In our organization it is normal to check if we’ve reached what we wanted to reach

Within this organization we work in an efficient manner

Statements for service provider (management):

As a client I would be happy to have care provided by this organization

There is an emphasis on client-focused care in this organization

This organization is open and responsive to change

This organization is always moving toward the development of new answers

In our organization it is normal to check if we’ve reached what we wanted to reach

Within this organization we work in an efficient manner

AAL fits with innovative organizations in our domain

universAAL fits with innovative organizations in our domain

DT0 Formal caregiver Survey T0

Service provider T0 Survey

OA_2a Implementation of universAALised applications and services

Topics for focus group:

Do you have experience with implementing AAL applications?

Did you encounter any differences with the implementation of applications in ReAAL compared to other projects?

Did it take more/less time? Was it more/less complex? Was there more/less resistance?

Were there specific implementation issues because the applications were universAALized?

DT1 Focusgroup service provider end of deployment

OA_3a Productivity in development process

Topics for focus group:

Which productivity gains do you potentially see for universAAL?

Did these gains already occur during the ReAAL project?

If not, do you expect these productivity gains will become visible in the near future?

What are, in your opinion, prerequisites for productivity gains from universAAL?

DT1 Focus group application developer/provider end of deployment

OA_3b Efficiency in deployment process

Topics for focus group:

Did you encounter any differences with the deployment of applications in ReAAL compared to other projects?

Did it take more/less time? Was it more/less complex? Was there more/less resistance from the service provider?

DT1

Focus group application developer/provider end of deployment

OA_3c Quantity of care/service

Self-constructed questions T0:

In general I expect that with AAL we can provide more care or services to our clients

I expect with universAALIzed applications we can provide more care or services to our clients than without these applications

T1:

In my opinion, we can now provide more care or services to our clients than before

In my opinion, we now provide less care or services to our clients, without affecting quality of care

In my opinion, we now provide less care or services to our clients, negatively affecting quality of care

Please explain how the quantity of care or service has increased/decreased

DT0, DT1 Service provider T0 Survey

Service provider T1 Survey

Focus group Service provider end of deployment

OA_3d Quality of care/service

Self-constructed questions. The quality dimensions stem from the IOM report on quality of healthcare [46].

T0:

In general I expect that with AAL we can provide better quality of care or service to our clients

I expect with universAALized applications we can provide better quality of care or services to our clients than without these applications

T1:

In my opinion, we can now provide higher quality of care or services to our clients than before

In my opinion, the quality of care and service has decreased since we started using the AAL applications

How has the introduction of universAALized applications affected the quality of care/service on the following dimensions:

o Effectiveness

o Efficiency

o Timeliness

o Patient/client centeredness

o Safety

o Equity (equal access)

Please explain how the quality of care has increased/decreased.

DT0, DT1

Service provider T0 Survey

Service provider T1 Survey

Focus group Service provider end of deployment

OA_4a Strategic position of platform provider

Self-constructed questions and statements

I expect the results of the ReAAL project to improve our strategic position compared to other (open) platforms in Europe

Platform provider T0 Survey

Platform provider T1 Survey

OA_4b Strategic position of application provider

Self-constructed questions and statements. T0:

I expect our strategic position in the pilot region will improve

I expect our strategic position in the pilot country will improve

I expect our strategic position in Europe will improve

I expect our universAAL experience will make us more attractive than other application providers

T1:

How many new partnerships with other technology providers did the ReAAL project result in?

How many new partnerships with service providers did the ReAAL project result in?

We now have a better product portfolio

Our universAAL experience makes us more attractive than other application providers in our domain

Our strategic position in the pilot region has improved because of our universAAL experience

Our strategic position in the pilot country has improved because of our universAAL experience

Our strategic position in Europe has improved because of our universAAL experience

I see many opportunities to further improve our position

DT0, DT1 Application provider T0 Survey

Application provider T1 Survey

Application provider T2 Survey

OA_4c Strategic position of service provider

Self-constructed questions and statements. T0:

I expect our strategic position in the pilot region will improve

I expect to be able to further expand our services

T1:

How many new partnerships with technology providers did the ReAAL project result in?

Our strategic position in the pilot region has improved because of our universAAL experience

Our strategic position in the pilot country has improved because of our universAAL experience

Our organization is more attractive than others in the region, because of our experience with AAL

I see many opportunities to further improve our position

Service provider T0 Survey

Service provider T1 Survey

Focus group Service provider end of deployment

8. Contextual aspects

Table 39. Subindicators Contextual aspects

Code Name of Subindicator

How to be measured? Timing Data collection & target group

CA_1a Accessibility Question application provider:

Have you taken into account the accessibility of the application? Please explain

Question service provider:

Have you taken into account the accessibility of the service? Please explain

Topic focus group: did the service providers experience differences in accessibility between groups? (age, gender, SES)

AT*, DT1 Application provider T0 Survey

Service provider T0 Survey

Focusgroup service provider end of deployment

CA_1b Policy for inclusion What is the strategy of the service provider to improve accessibility at the end of the pilot?

DT1 Focusgroup service provider end of deployment

Service provider template

CA_2a Procurement process

Procurement process description

Number of interested parties in a pilot

Duration of procurement process in a pilot

AT1 Pilot template

CA_2b Data protection Did the pilot make all arrangements necessary to protect privacy? AT* Other sources: PIA documentation

CA_3a Ethical concerns of users

Question in questionnaire:

Do you have any ethical concerns about this application? Please specify

DT0 Informal carer T0 Survey

Formal caregiver T0 Survey

Service provider T0 Survey

CA_3b Ethical approval of pilot

Is the documentation on ethical approval or clearance of ethical approval complete?

AT* Other sources: Ethical manual Annex documentation


9. Showcases

Table 40. Subindicators Showcases

Code Name of Subindicator

How to be measured? Data collection & target group

SHOW_1a Cross-application resource and capability sharing description

Knowledge portal:

What does this showcase mean?

Which applications from which pilot demonstrate this showcase?

Knowledge portal

SHOW_1b Plug and Play description

Knowledge portal:

What does this showcase mean?

Which applications from which pilot demonstrate this showcase?

Knowledge portal

SHOW_1c Advanced Distribution description

Knowledge portal:

What does this showcase mean?

Which applications from which pilot demonstrate this showcase?

Knowledge portal

SHOW_1d Scalability description

Knowledge portal:

What does this showcase mean?

Which applications from which pilot demonstrate this showcase?

Knowledge portal

SHOW_1e Evolution description

Knowledge portal:

What does this showcase mean?

Which applications from which pilot demonstrate this showcase?

Knowledge portal

SHOW_1f Integration with legacy systems description

Knowledge portal:

What does this showcase mean?

Which applications from which pilot demonstrate this showcase?

Knowledge portal

SHOW_1g Services Integration description

Knowledge portal:

What does this showcase mean?

Which applications from which pilot demonstrate this showcase?

Knowledge portal

SHOW_1h Security & Privacy description

Knowledge portal:

What does this showcase mean?

Which applications from which pilot demonstrate this showcase?

Knowledge portal


SHOW_1i Service Transferability description

Knowledge portal:

What does this showcase mean?

Which applications from which pilot demonstrate this showcase?

Knowledge portal

SHOW_1j Advanced User Interaction description

Knowledge portal:

What does this showcase mean?

Which applications from which pilot demonstrate this showcase?

Knowledge portal

SHOW_1k Personalized Content Push description

Knowledge portal:

What does this showcase mean?

Which applications from which pilot demonstrate this showcase?

Knowledge portal

SHOW_1l Ambient Intelligence description

Knowledge portal:

What does this showcase mean?

Which applications from which pilot demonstrate this showcase?

Knowledge portal

SHOW_1m Enhanced Market communication and distribution description

Knowledge portal:

What does this showcase mean?

Which applications from which pilot demonstrate this showcase?

Knowledge portal

SHOW_2a Cross-application resource and capability sharing demonstration

Lab set-up

How easy is it to demonstrate the showcase?

Showcase test at Lab

SHOW_2b Plug and Play demonstration

Lab set-up

How easy is it to demonstrate the showcase?

Showcase test at Lab

SHOW_2c Advanced Distribution demonstration

Lab set-up

How easy is it to demonstrate the showcase?

Showcase test at Lab

SHOW_2d Scalability demonstration

Lab set-up

How easy is it to demonstrate the showcase?

Showcase test at Lab

SHOW_2e Evolution demonstration

Lab set-up

How easy is it to demonstrate the showcase?

Showcase test at Lab

SHOW_2f Integration with legacy systems demonstration

Lab set-up

How easy is it to demonstrate the showcase?

Showcase test at Lab

SHOW_2g Services Integration demonstration

Lab set-up

How easy is it to demonstrate the showcase?

Showcase test at Lab

SHOW_2h Security & Privacy demonstration

Lab set-up

How easy is it to demonstrate the showcase?

Showcase test at Lab

SHOW_2i Service Transferability demonstration

Lab set-up

How easy is it to demonstrate the showcase?

Showcase test at Lab

SHOW_2j Advanced User Interaction demonstration

Lab set-up

How easy is it to demonstrate the showcase?

Showcase test at Lab

SHOW_2k Personalized Content Push demonstration

Lab set-up

How easy is it to demonstrate the showcase?

Showcase test at Lab

SHOW_2l Ambient Intelligence demonstration

Lab set-up

How easy is it to demonstrate the showcase?

Showcase test at Lab

SHOW_2m Enhanced Market communication and distribution demonstration

Lab set-up

How easy is it to demonstrate the showcase?

Showcase test at Lab

SHOW_3a to SHOW_3m: showcase value
How to be measured: Knowledge portal: What is potentially the value of this showcase for a developer, technology provider, service provider, end user or government? Topic in focus group: How do the developers, technology providers, service providers, end users and policy makers rate the value of this showcase after demonstration?
Data collection & target group: Knowledge portal; focus group application developer/application provider; focus group formal caregiver during deployment; focus group assisted person/informal carer during deployment; focus group service provider at end of deployment


10. ReAAL impact indicators

Table 41. ReAAL impact indicators

IMPACT_1a Number of successfully demonstrated showcases
How to be measured: after showcase evaluation at end of adaptation and during deployment
Data collection & target group: showcase evaluation

IMPACT_1b Number of supported operating systems
How to be measured: after deployment, because adaptations to universAAL will be made throughout the project
Data collection & target group: platform developer template

IMPACT_1c Number of supported device types
How to be measured: after deployment, because adaptations to universAAL will be made throughout the project
Data collection & target group: platform developer template

IMPACT_2a Number of pilots with successful universAALisation
How to be measured: after adaptation
Data collection & target group: test reports and pilot operation reports

IMPACT_2b Number of pilots reaching their target number of users
How to be measured: after deployment
Data collection & target group: MDS and pilot operation reports

IMPACT_2c Number of successfully implemented imported applications
How to be measured: after deployment
Data collection & target group: pilot operation reports

IMPACT_3a Number of associated pilots
How to be measured: at the end of Year 1, Year 2 and Year 3
Data collection & target group: project management data

IMPACT_3b Number of associated vendors
How to be measured: at the end of Year 1, Year 2 and Year 3
Data collection & target group: project management data

IMPACT_4a Number of visits to the website
How to be measured: monthly for the whole project time
Data collection & target group: dissemination data from WP6

IMPACT_4b Number of accounts in the developer depot
How to be measured: monthly for the whole project time
Data collection & target group: data from WP2

IMPACT_4c Number of interested pilots (uncertain how it can be measured)
How to be measured: monthly for the whole project time; only interested pilots contacting project management (Fraunhofer) are included
Data collection & target group: project management data

IMPACT_4d Number of visitors to ReAAL events
How to be measured: at each event
Data collection & target group: project management data; dissemination data from WP6

IMPACT_4e Number of H2020 proposals that use uAAL
How to be measured: not yet clear; possibly via the project officer
Data collection & target group: project management data


Appendix B. Evaluation activities per pilot

This appendix provides an overview of the pilots. Please note that the dates are indicative, since the project is evolving; the most accurate dates can be found in the Knowledge portal.

BRM (Baerum municipality)

Safer@Home

Description application

This is an application which allows for safety and security alarms from the home.

Planning

Start End

Deployment February 2015 April 2015

Operation April 2015 December 2015

Final evaluation April 2015 December 2015

Specifics Batch deployment, mixed group of end-users with or without internet access

Key figures

# assisted persons 90

# formal caregivers 60

# informal caregivers 20-40

# sample size Pearl: 90 assisted persons; all formal caregivers; 20 informal caregivers

Agenda/ calendar

Description application

This application allows relatives and caregivers to improve their cooperation. It acts as a communication tool between caregivers in BRM and the elderly.

Planning

Start End

Deployment April 2015 May 2015

Operation May 2015 December 2015

Final evaluation May 2015 December 2015

Specifics Batch deployment, mixed group of end-users with or without internet access

Key figures

# assisted persons 80

# formal caregivers 30

# informal caregivers 30

# sample size 30 assisted persons; 30 formal caregivers


Home node

Description application

This application allows for collecting data and alarms from sensors in the home environment.

Planning

Start End

Deployment May 2015 June 2015

Operation June 2015 December 2015

Final evaluation June 2015 December 2015

Specifics Batch and individual deployment, mixed group of end-users with or without internet access

Key figures

# assisted persons 130

# formal caregivers 30

# informal caregivers Involved, but no numbers

# sample size Pearl: 80 assisted persons; all formal caregivers; 20 informal caregivers

BSA pilot (Badalona services)

Help when Outdoor

Description application

Help when Outdoor makes it possible to define a secure zone for the patient and to receive an alert when the patient leaves this zone. The application can guide the patient back home and can also show the patient's current location to the formal/informal caregivers through a secured website. Furthermore, the application includes a panic button which can be used to ask for help from the Contact Centre, which will mobilize the needed resources.

Planning

Start Completion

Deployment February 2015 August 2015

Operation April 2015 September 2015

Final evaluation September 2015 November 2015

Specifics Individual deployment, end-users have no internet access

Key figures

# assisted persons 200

# formal caregivers 35

# informal caregivers 200

# sample size Pearl: 80-100 assisted persons; 50 informal caregivers; all formal caregivers

Welfare Service Network

Description application

Welfare Service Network supports different types of patients by sending them reminders related to their disease(s).

Planning

Start Completion

Deployment February 2015 August 2015

Operation April 2015 September 2015

Final evaluation September 2015 November 2015

Specifics Individual deployment, end-users have no internet access


Key figures

# assisted persons 350

# formal caregivers 35

# informal caregivers Variable

# sample size Pearl: 80-100 assisted persons; all formal caregivers

Nomhad chronicity

Description application

Nomhad chronicity is a telemonitoring application which allows for the monitoring of patients with chronic conditions. Different inputs are gathered from medical devices that are installed at home and linked to the platform, and through the ability to deliver questionnaires to the patients.

Planning

Start Completion

Deployment February 2015 August 2015

Operation April 2015 September 2015

Final evaluation September 2015 November 2015

Specifics Individually planned deployment. New users are expected. End-users have no internet access.

Key figures

# assisted persons 300

# formal caregivers 18

# informal caregivers Variable

# sample size 50 assisted persons; all formal caregivers

IBR (Ibermatica pilot)

Immediate Aid Provider safety service

Description application

This service notifies the caregivers if the elderly person needs help at a specific time. The elderly person also has a 'panic button' system to notify the caregivers or the family in case of an accident in the house.

Planning

Start End

Deployment January 2015 -

Operation - -

Final evaluation November 2015 December 2015

Specifics Batch deployment, end users have no internet access

Key figures

# assisted persons 210

# formal caregivers 20-40

# informal caregivers 100

# sample size Pearl: 100 assisted persons; all formal caregivers; 50 informal caregivers

Healthy habits and Mental Wellness

Description application

The main objective of this application is to help its users lead a healthy life by encouraging them to walk and stroll every day.

Planning

Start End


Deployment January 2015 -

Operation - -

Final evaluation November 2015 December 2015

Specifics Batch deployment, end users have no internet access

Key figures

# assisted persons 250

# formal caregivers 20-40

# informal caregivers 120

# sample size Pearl: 100 assisted persons; all formal caregivers; 50 informal caregivers

ODE (Odense municipality)

Task Scheduling

Description application

Task Scheduling supports the coordination and execution of daily tasks. Citizens will be equipped with a pedometer and the caregivers will help set goals for them. The aim is to make the elderly more independent and physically healthier by motivating them to exercise more.

Planning

Start Completion

Deployment February 2015 April 2015

Operation February 2015 November 2015

Final evaluation September 2015 November 2015

Specifics Batch deployment, end-users have no internet access

Key figures

# assisted persons 500

# formal caregivers 400

# informal caregivers Not applicable

# sample size Pearl: 80-100 assisted persons; 80 formal caregivers

Rehabilitation Portal

Description application

The Rehabilitation Portal is a sensor-based application which measures how physical exercises are performed by patients and gives patients an indication of how well they are performing.

Planning

Start Completion

Deployment February 2014 February 2015

Operation February 2015 November 2015

Final evaluation November 2015 December 2015

Specifics Individual deployment, mixed groups of end-users with or without internet access

Key figures

# assisted persons 280-300

# formal caregivers 15-20

# informal caregivers Not applicable

# sample size Pearl: 80-100 assisted persons; all formal caregivers


Puglia (Puglia region)

Safety at home

Description application

Sensor that can detect smoke, gas, open windows etc.

Planning

Start Completion

Deployment May 2015 July 2015

Operation July 2015 January 2016

Final evaluation January 2016 March 2016

Specifics

# assisted persons Unknown at this point, but 106 end-users in total across all applications

# formal caregivers Unknown at this point, but 106 end-users in total across all applications

# informal caregivers Unknown at this point, but 106 end-users in total across all applications

# sample size Pearl: all users; all formal caregivers; all informal caregivers

Home activity monitoring

Description application

Application that allows remote monitoring of activities in the home of an assisted person (for example, leaving the house); it includes an alarm function for the (informal) caregiver.

Planning

Start Completion

Deployment May 2015 July 2015

Operation July 2015 January 2016

Final evaluation January 2016 March 2016

Specifics

# assisted persons Unknown at this point, but 106 end-users in total across all applications

# formal caregivers Unknown at this point, but 106 end-users in total across all applications

# informal caregivers Unknown at this point, but 106 end-users in total across all applications

# sample size Pearl: all users; all formal caregivers; all informal caregivers

Easy home control

Description application

Application that allows lighting, locks, heating, etc. to be controlled from one display.

Planning

Start Completion

Deployment May 2015 July 2015

Operation July 2015 January 2016

Final evaluation January 2016 March 2016

Specifics

# assisted persons Unknown at this point, but 106 end-users in total across all applications

# formal caregivers Unknown at this point, but 106 end-users in total across all applications

# informal caregivers Unknown at this point, but 106 end-users in total across all applications

# sample size Pearl: all users; all formal caregivers; all informal caregivers


RNT (Rijnmond region)

Curavista: self-management diaries

Description application

This application offers a set of self-management diaries for people suffering from chronic conditions, ranging from Alzheimer to pain due to cancer.

Planning

Start Completion

Deployment March 2015 April 2015

Operation January 2015 April 2015

Final evaluation November 2015 December 2015

Specifics Batch deployment, end-users have internet access

Key figures

# assisted persons 160

# formal caregivers 20

# informal caregivers Not applicable

# sample size 60 assisted persons; all formal caregivers

MedicineMen: medication alert on smartwatch

Description application

This watch gives alarms at set times (these times can be set in the back end of the system), including a description of the expected action.

Planning

Start Completion

Deployment March 2015 April 2015

Operation January 2015 April 2015

Final evaluation November 2015 December 2015

Specifics Batch deployment, end-users have internet access

Key figures

# assisted persons 133

# formal caregivers 10

# informal caregivers Not applicable

# sample size 60 assisted persons; all formal caregivers

MiBida: screen to screen communication

Description application

This application primarily provides a screen-to-screen connection.

Planning

Start Completion

Deployment March 2015 April 2015

Operation January 2015 April 2015

Final evaluation November 2015 December 2015

Specifics Batch deployment, end-users have internet access

Key figures

# assisted persons 581

# formal caregivers 30

# informal caregivers Role unknown

# sample size Pearl: 100 assisted persons; all formal caregivers; informal caregivers if included in pilot


MindDistrict: mental care modules


Description application

This application gives users the possibility to train their mental well-being through a user-friendly communication platform.

Planning

Start End

Deployment April 2015 -

Operation December 2015 -

Final evaluation December 2015 -

Specifics Batch deployment, end-users have internet access

Key figures

# assisted persons 555

# formal caregivers 60

# informal caregivers Not applicable

# sample size Pearl: 80 assisted persons; all formal caregivers

NetMedical: weighing scale & blood pressure

Description application

NetMedical provides patients with different measurement devices which allow them to measure weight and blood pressure from home.

Planning

Start End

Deployment April 2015 -

Operation December 2015 -

Final evaluation December 2015 -

Specifics Batch deployment, end-users have internet access

Key figures

# assisted persons 142

# formal caregivers Unknown

# informal caregivers Not applicable

# sample size 60 assisted persons; all formal caregivers

Almende: VitAAL app

Description application

VitAAL is a smartphone app which enables users to monitor and manage their personal, social and physical well-being. The application collects data from the sensors on the mobile phone and combines it with data from other information sources, such as the personal calendar. It is therefore able to serve as a personal coaching mechanism.

Planning

Start End

Deployment April 2015 -

Operation December 2015 -

Final evaluation December 2015 -

Specifics Batch deployment, end-users have internet access

Key figures

# assisted persons 205

# formal caregivers Not applicable

# informal caregivers Not applicable

# sample size 100


SL (Smart Living pilot)

Smart Living system with: There for you, welfare services, information services, everyday commodities

Description application

If users feel ill or unwell, they can use this service. By hitting a "There for you" button, a caretaker is informed that the user feels unwell. The caretaker will then call back or come around.

Planning

Start End

Deployment March 2015 April 2015

Operation April 2015 October 2015

Final evaluation October 2015 December 2015

Specifics Individual deployment, end-users have no internet access

Key figures

# assisted persons 43

# formal caregivers 2

# informal caregivers 1

# sample size Pearl: 43 assisted persons; all formal caregivers; all informal caregivers

TEA (Madrid region)

Cognibox

Description application

The application consists of cognitive training exercises, monitored through a feedback process, and new innovative exercises arising from the field of cognitive analysis.

Planning

Start End

Deployment January 2015 February 2015

Operation February 2015 September 2015

Final evaluation December 2015 December 2015

Specifics Batch deployment, end-users have no internet access

Key figures

# assisted persons 505

# formal caregivers 10

# informal caregivers 1000

# sample size Pearl: 100 assisted persons; all formal caregivers; 50 informal caregivers

E-Health

Description application

The application promotes independent living for the elderly by improving motor skills through physical exercises.

Planning

Start End

Deployment January 2015 February 2015

Operation February 2015 September 2015

Final evaluation December 2015 December 2015

Specifics Batch deployment, end-users have no internet access

Key figures

# assisted persons 505


# formal caregivers 10

# informal caregivers 1000

# sample size Pearl: 100 assisted persons; all formal caregivers; 50 informal caregivers

SocialByElder

Description application

This is an application designed and tailored for the elderly. It acts as a social network in which participants' interests are defined and they can interact quickly and easily with the other participating members, seeking to improve communication and social skills.

Planning

Start End

Deployment January 2015 February 2015

Operation February 2015 September 2015

Final evaluation December 2015 December 2015

Specifics Batch deployment, end-users have no internet access

Key figures

# assisted persons 505

# formal caregivers 10

# informal caregivers 1000

# sample size Pearl: 100 assisted persons; all formal caregivers; 50 informal caregivers

OptiSad

Description application

This application manages the contact between patients and their caregivers. It automatically optimizes the allocation of the service provider company's formal caregivers, taking into account the specific needs of each user.

Planning

Start End

Deployment January 2015 February 2015

Operation February 2015 September 2015

Final evaluation December 2015 December 2015

Specifics Big bang deployment, end-users have no internet access

Key figures

# assisted persons 45

# formal caregivers 10

# informal caregivers 90

# sample size 5 assisted persons; 5 formal caregivers; 5 informal caregivers

WQZ (German construction sites pilot)

Smart home

Description application

This application is meant to control the electronic equipment at home, mainly lights, outlets, blinds and heating. All information relating to the apartment is captured, displayed and made accessible on a terminal.

Planning

Start Completion

Deployment May 2015 June 2015

Operation June 2015 December 2015


Final evaluation December 2015 February 2016

Specifics Technology installed in the homes (new buildings)

# assisted persons 60

# formal caregivers n/a

# informal caregivers n/a

# sample size Pearl: all assisted persons

Capfloor

Description application

This application converts the floors of buildings into a valuable source of information on the basis of passive sensor systems. Such sensitive floors provide exact indoor localization, fall recognition, an SMS notification service in case of a fall, and intelligent lighting control. Moreover, plenty of use cases can be implemented later on the basis of Magic Walk, such as house-leaving control (important for people with dementia, for reminders about devices left on, or for adapting the heating system) and burglary detection and notification.

Planning

Start Completion

Deployment May 2015 June 2015

Operation June 2015 December 2015

Final evaluation December 2015 February 2016

Specifics Technology installed in the homes (new buildings)

# assisted persons 60

# formal caregivers n/a

# informal caregivers n/a

# sample size Pearl: all assisted persons


Appendix C. Data collection tools

The following pages contain examples of the data collection tools developed for the project. All tools are available upon request.

Application developer T1 Survey

Assisted person T1 Survey

Cost Template

Script for evaluation of showcase


Application Developer T1 Survey

(=Developers First impressions)

Information about role
We assume you are involved in the development process of ReAAL applications and that you know what your pilot is about. The following questions are related to your role within this development. If you have answered our previous questionnaire, most or all of this section is already answered (we will recover your previous answers), and you may proceed to the next section.

1. Please use a nickname; you will be asked to use the same in future questionnaires. * Please write your answer here:

2. Please specify the name of your pilot. * Please choose all that apply:

RNT

Baerum

TEA

Ibermatica

ODE

BSA

AJT

SmartLiving

Puglia

Other:

3. Name of the company you work for. Please write your answer here:

4. When were you born? Please enter a date:

5. Please specify your gender.
Female
Male

6. Please specify your role within the development process. * Please choose all that apply:

requirements elicitation

design

programming

testing and validation

decision making

Other:

7. What are the name(s) of the application(s) you are developing?


Please write your answer(s) here: ……………………….

Background
In this section we would like to understand your background as a developer. If you have already answered our previous questionnaire, most or all of this section is already answered (we will recover your previous answers).

8. For how many years have you been programming professionally? Please write your answer here: (Please exclude studying experience.)

9. How many years have you been working in AAL or any related field (eHealth, mHealth, etc.)? * Please write your answer here:

10. Please specify the number of years you have been practicing with the following programming languages: * Please choose all that apply and provide a comment:

Java

C/C++

PHP

Python

Ruby

C#

ObjectC

Other:
(Please focus on those languages that you are most likely going to use in ReAAL.)

11. What IDEs have you used? * Please choose all that apply:

Eclipse

Netbeans

Codeblocks

Visual Studio

IntelliJ

Other:

12. Please mark those topics that you believe you know. * Please choose all that apply:

distributed systems (SOA, web services, Jgroups)

web technologies (HTML5, AJAX)

semantics (ontologies modelling, OWL)

mobile technologies (Android, iOS, .NET CF)

home automation (X10, KNX, zWave)

embedded systems (AVR, ARM)

user interaction (UI programming, usability)


medical devices protocols (X73, BT HDP)

Other:
(You don't need to be an expert; it is enough that you have worked on these topics a bit.)

13. In your own words, tell us about your background: what technologies you know best, what cool projects you have worked on, etc. Please write your answer here: (Please give at least one example of a project you have worked on.)

Usage of universAAL
In this section we will ask how you are going to use universAAL and what problems you have encountered.

14. Which operating system is universAAL deployed on (or going to be)? * Please choose all that apply:

Windows

Android

iOS

Linux

I don't know (universAAL is installed in a backend for which I do not know this detail)

Other:
(Please note that your application may or may not be installed directly on universAAL (if using the Remote API); in either case it is the universAAL container we are interested in.)

15. Is the universAAL platform deployed as backend (e.g. in a server), frontend (e.g. in a tablet or set-top box) or both? Please choose all that apply:

Backend

Frontend

Other:

16. How much do you know about the universAAL platform? * Please choose only one of the following:

none

just heard

I attended training events

I have done some experiments

I am fully working on it

I was in universAAL project

I am a member of the uR team

Other:

17. What are the main issues that you have encountered in the use of universAAL so far? Please write your answer here:


18. Which of the following universAAL platform containers are you using? * Please choose all that apply:

Java-OSGi API of the middleware

Java-Android API of the middleware

ASOR (Java from within JavaScript)

Java Remote-API of the middleware

Other:

19. If you are using the Remote API, could you briefly explain the deployment of your application? Which runtime are you using as client? Are you using, or have you created, libraries to access the Remote API? Which runtime is the server running on? Etc. * Please write your answer here:

20. Is there any methodology you have followed for the design of your application, or for the adaptation of the application? If so, please explain briefly or provide a reference. Please write your answer here:

Features of universAAL
In this section of the questionnaire we will explore the features of the universAAL platform. We want to explore the impact that these features have on the universAALization of applications. Here is a summary of the features:
• Distribution: the capability to run on and coordinate several hardware nodes into performing common tasks
• Interoperability: the capability to interoperate with different standards (communication standards, software standards, ...) through ontologies
• Extensibility: the capability of easily extending functionality once deployed
• Adaptability: the capability to automatically adapt to different situations (such as different homes, different numbers of users per home, etc.)
• Scalability: the capability of being able to easily scale the deployment to more users
• Integrability: the capability of being able to easily integrate with different technologies and/or other services
• Security: the capability to maintain user data and resources secure
• Proactivity: the capability to predict, anticipate and act upon certain situations
• Personalizability: the capability of personalizing any aspect to the individual preferences of any user

21. Are there any feature(s) that you think are missing from the universAAL platform? Please write your answer here:

22. Please specify your level of confidence per feature of universAAL. * Please choose the appropriate response for each item:

I know nothing about it

I know something about it

I have used it a bit

I have used it several times

I know many details about it

Not applicable
(Confidence here means how much you know about the subject.)


23. What features of universAAL have you used so far? * Please choose all that apply:

Distribution

Interoperability

Extensibility

Adaptability

Scalability

Integrability

Security

Proactivity

Personalizability

Other:

24. Please rate the learning curve you have experienced for the following features. * Please choose the appropriate response for each item:

Wall (Impossible)

Very steep

Steep

Easy

Trivial

N/A

25. Please rate your perceived complexity of the following features: * Please choose the appropriate response for each item:

Very complex

Somehow complex

Neutral

Simple

Very simple

Not applicable
(Complexity here means how easy the features are to understand and use.)

26. Rate the expected fitness of the following features to solve your problem. * Please choose the appropriate response for each item:

Not fitted at all

Not quite fitted

Exactly what I needed

Even better than what I needed
(Expected fitness is the expectation you had when you read about the feature: in the learning or design phase you might want to use some feature because you thought it did something that would help your development.)

27. Rate the actual fitness of the following features to solve your problem. * Please choose the appropriate response for each item:

Not fitted at all

Not quite fitted


Exactly what I needed

Even better than what I needed
(Actual fitness is the fitness you found the feature to have once you started developing with it.)

28. After familiarizing yourself with the features, rate how much time you think they save on the overall development of your solution compared to any previous development technology you were using. Please try to ignore delays due to bugs. * Please choose the appropriate response for each item:

It takes much more time

Some more time

The same time

Saves some time

Saves a lot of time

N/A

29. After experiencing (developing and testing) the features, how many issues did you encounter? * Please choose the appropriate response for each item:

More than expected

As expected

Less than expected

N/A

30. In your experience, how would you rate the usefulness, potential or value of the following features in the AAL domain? * Please choose the appropriate response for each item: 1 2 3 4 5 6 7 8 9 10
(1 being the least value and 10 being the most value)

Components of universAAL
In this section of the questionnaire we will explore the components of the universAAL platform. We want to explore the impact that these components have on the universAALization of services.

Basic middleware components:
• Context bus: for publishing events and/or subscribing to events
• Service bus: for calling and/or providing services
• User Interaction bus (a.k.a. UI bus): for using universAAL's UI description package and leaving the rendering to situation-aware UI handlers

Additional middleware components:
• Multi-language support
• Configurability API: mainly management of config parameters and config home directories for storing files and resources
• Logging mechanisms
• Multi-tenancy support: server-based usage of universAAL to connect to and serve several homes
• Functional Manifest: each module can contain a digitally signed "functional manifest" that is used for getting user consent – similar to the Android permissions system that gets active when you decide to install a new app: Android lists the permissions that the app claims to need and you decide whether to install the app or not
• AAL Space Management API: info about the available middleware instances, installed modules, etc.
• Deploy Manager API: under certain containers it manages the deployment of modules over the AAL Space
• Serialization and parsing API: currently only for RDF Turtle syntax

universAAL "Manager" components (platform services):
• Context History Entrepôt (a.k.a. CHe) services: for querying data gathered in the home
• Profiling Server services: including saving and querying info describing users, objects and locations
• Resource Manager: important only if you plan to use the UI bus at the middleware level; in that case, you can achieve a higher level of adaptability if you let the Resource Manager store the media objects that you want to be used when interacting with the user
• Situation Reasoner services: it can store SPARQL CONSTRUCT queries as rules to automatically generate new context events whenever certain conditions hold; this can be used to recognize situations. For example, if you want a context event to be published whenever the user is sleeping, a solution could be to tell the Situation Reasoner to publish this event whenever, in the night, the user is in the sleeping room, in the bed, and the lights are off
• Drools Engine: a second reasoning engine using JBoss rules
• ASOR (AAL Space Orchestrator) scripting: with ASOR scripts, you can create composite services – combinations of existing services
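To make the bus concept above more tangible, below is a minimal, self-contained Java sketch of the publish/subscribe pattern behind the Context bus, combined with a Situation-Reasoner-style rule that derives a "user is sleeping" event from the example conditions given above (user in the sleeping room, in bed, lights off). All names in the sketch (ContextEvent, ContextBus, publish, subscribe and the predicate strings) are simplified illustrations for this report, not the actual universAAL middleware API; in the real platform the rule would be expressed as a SPARQL CONSTRUCT query over ontology instances.

// Illustrative sketch only; not the universAAL middleware API.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

final class ContextEvent {
    final String subject, predicate, value; // RDF-like triple, e.g. "user1 isIn Bedroom"
    ContextEvent(String s, String p, String v) { subject = s; predicate = p; value = v; }
    public String toString() { return subject + " " + predicate + " " + value; }
}

final class ContextBus {
    private final List<Consumer<ContextEvent>> subscribers = new ArrayList<>();
    void subscribe(Consumer<ContextEvent> s) { subscribers.add(s); }        // register a callback
    void publish(ContextEvent e) { subscribers.forEach(s -> s.accept(e)); } // fan the event out
}

public class ContextBusSketch {
    public static void main(String[] args) {
        ContextBus bus = new ContextBus();
        Map<String, String> state = new HashMap<>(); // latest value per predicate

        // Rule in the spirit of the Situation Reasoner example in the text:
        // when the user is in the bedroom, in bed, and the lights are off,
        // derive a new "isSleeping" context event.
        bus.subscribe(e -> {
            state.put(e.predicate, e.value);
            if ("Bedroom".equals(state.get("isIn"))
                    && "true".equals(state.get("isInBed"))
                    && "off".equals(state.get("lightsAre"))) {
                System.out.println("derived event: user1 isSleeping true");
            }
        });
        bus.subscribe(e -> System.out.println("observed: " + e)); // e.g. a logging component

        // Sensor wrappers publish events; subscribers react without knowing the publishers.
        bus.publish(new ContextEvent("user1", "isIn", "Bedroom"));
        bus.publish(new ContextEvent("user1", "isInBed", "true"));
        bus.publish(new ContextEvent("user1", "lightsAre", "off"));
    }
}

The point of the pattern is decoupling: sensor wrappers publish events without knowing which applications consume them, and reasoners can add derived events without any change to the publishers.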

31. Are there any component(s) that you think are missing from the universAAL platform? Please write your answer here:

32. Please specify your level of confidence per component of universAAL. * Please choose the appropriate response for each item:

I know nothing about it

I know something about it

I have used it a bit

I have used it several times

I know many details about it

Not applicable
(Confidence here means how much you know about the subject.)

33. What components of universAAL have you used so far? * Please choose all that apply:

Context bus

Service bus

UI bus

Multi-language support

Configurability API

Logging

Multi-tenancy

Functional Manifest

AAL Space manager

Deploy manager

Serialization

CHe

Profiling Server

Resource Manager

Drools

ASOR

Other:

34. Please rate the learning curve you have experienced for the following components. *


Please choose the appropriate response for each item:

Wall (Impossible)

Very steep

Steep

Easy

Trivial

N/A

35. Please rate your perceived complexity of the following components: * Please choose the appropriate response for each item:

Very complex

Somehow complex

Neutral

Simple

Very simple

Not applicable
(Complexity here means how easy the components are to understand and use.)

36. Rate the expected fitness of the following components to solve your problem. * Please choose the appropriate response for each item:

Not fitted at all

Not quite fitted

Exactly what I needed

Even better than what I needed
(Expected fitness is the expectation you had when you read about the component: in the learning or design phase you might want to use some component because you thought it did something that would help your development.)

37. Rate the actual fitness of the following components to solve your problem. * Please choose the appropriate response for each item:

Not fitted at all

Not quite fitted

Exactly what I needed

Even better than what I needed
(Actual fitness is the fitness you found the component to have once you started developing with it.)

38. After familiarizing yourself with the components, rate how much time you think they save on the overall development of your solution compared to any previous development technology you were using. Please try to ignore delays due to bugs. * Please choose the appropriate response for each item:

It takes much more time

Some more time

The same time

Saves some time


Saves a lot of time

N/A

39. After experiencing (developing and testing) the components, how many issues did you encounter? * Please choose the appropriate response for each item:

More than expected

As expected

Less than expected

N/A

40. In your experience, how would you rate the usefulness, potential or value of the following components in the AAL domain? * Please choose the appropriate response for each item: 1 2 3 4 5 6 7 8 9 10
(1 being the least value and 10 being the most value)

41. Since you are familiar with the Multi-Tenant component, do you have any opinion on how the security of this component (communication protocol, authentication, configuration of security rules, etc.) should be managed or improved? Please write your answer here:

Quality of the documentation
The following questions are related to the quality of the documentation of universAAL. Here we include all kinds of documentation, including wikis, tutorials, samples, training material and in-code comments; please try to provide a balanced answer that takes these kinds of documentation into account. The questions are presented as affirmative sentences; you are asked to rate how much you agree with each of them.

42. The universAAL documentation is easily accessible. *

Completely disagree

Disagree

Neutral

Agree

Completely agree

43. The amount of information is not sufficient for my needs. *

Completely disagree

Disagree

Neutral

Agree

Completely agree


44. The documentation is trustworthy. *

Completely disagree

Disagree

Neutral

Agree

Completely agree

45. This information covers the needs of my tasks. *

Completely disagree

Disagree

Neutral

Agree

Completely agree

46. The representation of the information is compact and concise. *

Completely disagree

Disagree

Neutral

Agree

Completely agree

47. The information is represented in a consistent format. *

Completely disagree

Disagree

Neutral

Agree

Completely agree

48. The documentation is easy to manipulate to meet my needs. *

Completely disagree

Disagree

Neutral

Agree

Completely agree

49. The information is correct. *

Completely disagree

Disagree

Neutral

Agree

Completely agree

50. The information is easily interpretable. *

Completely disagree

Disagree

Neutral


Agree

Completely agree

51. The information is objective and impartial. *

Completely disagree

Disagree

Neutral

Agree

Completely agree

52. The documentation is useful to my work.

Completely disagree

Disagree

Neutral

Agree

Completely agree

53. The information has a reputation for quality. *

Completely disagree

Disagree

Neutral

Agree

Completely agree

54. The information is sufficiently up-to-date for my work. *

Completely disagree

Disagree

Neutral

Agree

Completely agree

55. The information is easy to comprehend.

Completely disagree

Disagree

Neutral

Agree

Completely agree

56. Please rate the amount of technologies, concepts or material which is required before attempting to learn each feature. * Please choose only one of the following:

None

Very little

As much as any other software library/platform out there

Several

Too many

N/A


(We would like to know how many dependencies the documentation and/or learning of universAAL has on other technologies, study areas, concepts, external training material and so on.)

57. Now, thinking about all the answers you have just given, are there any relevant aspects that you would like to comment on? What did you like about the documentation? What did you dislike? How can the documentation be improved? Please write your answer here:

Human support
The purpose of the following questions is to measure how you feel about certain aspects of the assistance provided by the development community of universAAL. This includes all the developers involved in the maintenance of the platform and the ReAAL WP2. Below you will find different factors, each related to some aspect of the universAAL support. Please rate each factor on the descriptive scales that follow it, based on your evaluation of the factor. On the numerical scale, circle the position that best describes your evaluation of the factor being judged. (For example, for the first item: if your relationship is more harmonious than dissonant, circle a number on the higher end of the Dissonant/Harmonious scale; if it is more bad than good, circle a number on the lower end of the Bad/Good scale.)

58. Relationship with universAAL staff * Please choose the appropriate response for each item: 1 2 3 4 5
Dissonant / Harmonious
Bad / Good

59. Processing of requests for changes * Please choose the appropriate response for each item: 1 2 3 4 5
Fast / Slow
Untimely / Timely

60. Degree of training provided to developers * Please choose the appropriate response for each item: 1 2 3 4 5
Complete / Incomplete
Low / High

61. Attitude of the universAAL support staff * Please choose the appropriate response for each item: 1 2 3 4 5
Cooperative / Belligerent


Negative / Positive

62. Reliability of information provided by support staff (email, forum, bug tracker, ...) * Please choose the appropriate response for each item: 1 2 3 4 5
High / Low
Superior / Inferior

63. Relevancy of information provided by support staff (email, forum, bug tracker, ...) * Please choose the appropriate response for each item: 1 2 3 4 5
Useful / Useless
Relevant / Irrelevant

64. Accuracy of information provided by support staff (email, forum, bug tracker, ...) * Please choose the appropriate response for each item: 1 2 3 4 5
Inaccurate / Accurate
Low / High

65. Precision of information provided by support staff (email, forum, bug tracker, ...) * Please choose the appropriate response for each item: 1 2 3 4 5
Low / High
Definite / Uncertain

66. In your experience, how much time did it take for the universAAL support staff to solve the experienced issues? * Please choose only one of the following:

More than expected

As expected

Less than expected

I did not report the issues

I did not know how to report the issues

N/A

Acceptance
In the following questions we will assess how much you are inclined to include universAAL as a stable tool for your work. Please rate the following assertions according to how much you agree with them.

67. I would find universAAL useful in my job. *

Completely disagree

Disagree


Neutral

Agree

Completely agree

68. Using universAAL enables me to accomplish tasks more quickly. *

Completely disagree

Disagree

Neutral

Agree

Completely agree

69. Using universAAL increases my productivity. *

Completely disagree

Disagree

Neutral

Agree

Completely agree

70. If I use universAAL, I will increase my chances of getting a raise. *

Completely disagree

Disagree

Neutral

Agree

Completely agree

71. Using universAAL is a bad/good idea. *

Completely disagree

Disagree

Neutral

Agree

Completely agree

72. universAAL makes work more interesting. *

Completely disagree

Disagree

Neutral

Agree

Completely agree

73. Working with universAAL is fun. *

Completely disagree

Disagree

Neutral

Agree

Completely agree

74. I like working with universAAL. *


Completely disagree

Disagree

Neutral

Agree

Completely agree

75. People who influence my behavior think that I should use universAAL. *

Completely disagree

Disagree

Neutral

Agree

Completely agree

76. People who are important to me think that I should use universAAL. *

Completely disagree

Disagree

Neutral

Agree

Completely agree

77. The senior management of this business has been helpful in the use of universAAL. *

Completely disagree

Disagree

Neutral

Agree

Completely agree

78. In general, the organization has supported the use of universAAL. *

Completely disagree

Disagree

Neutral

Agree

Completely agree

79. I have the resources necessary to use universAAL. *

Completely disagree

Disagree

Neutral

Agree

Completely agree

80. I have the knowledge necessary to use universAAL. *

Completely disagree

Disagree

Neutral

Agree


Completely agree

81. universAAL is not compatible with other technologies I use. *

Completely disagree

Disagree

Neutral

Agree

Completely agree

82. A specific person (or group) is available for assistance with difficulties related to universAAL. *

Completely disagree

Disagree

Neutral

Agree

Completely agree

83. I could complete a job or task using universAAL... * Please choose the appropriate response for each item: 1 2 3 4 5

If there was no one around to tell me what to do as I go.

If I could call someone for help if I got stuck.

If I had a lot of time to complete the job for which the software was provided.

If I had just the built-in help facility for assistance.
(1 means "I completely disagree", 5 means "I completely agree".)

84. I feel apprehensive about using universAAL. *

Completely disagree

Disagree

Neutral

Agree

Completely agree

85. It scares me to think that using universAAL could affect the quality of my code. *

Completely disagree

Disagree

Neutral

Agree

Completely agree

86. I hesitate to use universAAL for fear of making mistakes I cannot correct. *

Completely disagree

Disagree

Neutral

Agree


Completely agree

87. universAAL is somewhat intimidating to me. *

Completely disagree

Disagree

Neutral

Agree

Completely agree

88. I intend to use universAAL in the next year. *

Completely disagree

Disagree

Neutral

Agree

Completely agree

89. I predict I would use universAAL in the next year. *

Completely disagree

Disagree

Neutral

Agree

Completely agree

90. I plan to use universAAL in the next year. *

Completely disagree

Disagree

Neutral

Agree

Completely agree

Opinion
Three final questions to collect your personal opinion on the universAAL platform in general.

91. Please list any pros/advantages of universAAL in general that you think are worth mentioning. If possible, explain why. Please write your answer here:

92. Please list any cons/drawbacks that you think make universAAL in general a non-recommendable solution. If possible, explain why. Please write your answer here:

93. Any other observation about universAAL in general that you wish to mention. Please write your answer here:


Assisted Person T1 Survey

Health
The following questions are about your health.

1. In general, my health is

Excellent

Very good

Good

Fair

Poor

2. Compared to one year ago, how would you rate your health in general now?

Much better now than one year ago

Somewhat better now than one year ago

About the same as one year ago

Somewhat worse now than one year ago

Much worse now than one year ago

3. Do you receive formal care? (from professional carers) *

I do not receive formal care

I receive domestic care (house cleaning)

I receive home care

I receive nursing care

Other

4. Overall, how satisfied or dissatisfied are you with the current care and support services you receive?

I am very satisfied

I am quite satisfied

I am neither satisfied nor dissatisfied

I am quite dissatisfied

I am very dissatisfied

5. Do you receive informal care? (including care from volunteers) *

No

Yes, from my husband/wife/partner

Yes, from my children

Yes, from neighbours/friends/acquaintances

Yes, from volunteers

6. Overall, how satisfied or dissatisfied are you with the current informal care you receive?

I am very satisfied

I am quite satisfied

I am neither satisfied nor dissatisfied

I am quite dissatisfied

I am very dissatisfied


7. Were you admitted to a hospital or nursing home in the past 6 months?

Yes

No

8. Have you fallen in your home or outside in the past 6 months?

Yes

No

Quality of life

The following questions are about your daily life. Under each heading, please tick the box that best describes your health today.

9. Mobility

I have no problems in walking about

I have slight problems in walking about

I have moderate problems in walking about

I have severe problems in walking about

I am unable to walk about

10. Self-Care

I have no problems washing or dressing myself

I have slight problems washing or dressing myself

I have moderate problems washing or dressing myself

I have severe problems washing or dressing myself

I am unable to wash or dress myself

11. Usual activities

I have no problems doing my usual activities

I have slight problems doing my usual activities

I have moderate problems doing my usual activities

I have severe problems doing my usual activities

I am unable to do my usual activities

12. Pain / Discomfort

I have no pain or discomfort

I have slight pain or discomfort

I have moderate pain or discomfort

I have severe pain or discomfort

I have extreme pain or discomfort

13. Anxiety / Depression

I am not anxious or depressed

I am slightly anxious or depressed

I am moderately anxious or depressed

I am severely anxious or depressed

I am extremely anxious or depressed


14. Love and Friendship

I can have all of the love and friendship that I want

I can have a lot of the love and friendship that I want

I can have a little of the love and friendship that I want

I cannot have any of the love and friendship that I want

15. Thinking about the future

I can think about the future without any concern

I can think about the future with only a little concern

I can only think about the future with some concern

I can only think about the future with a lot of concern

16. Doing things that make you feel valued

I am able to do all of the things that make me feel valued

I am able to do many of the things that make me feel valued

I am able to do a few of the things that make me feel valued

I am unable to do any of the things that make me feel valued

17. Enjoyment and pleasure

I can have all of the enjoyment and pleasure that I want

I can have a lot of the enjoyment and pleasure that I want

I can have a little of the enjoyment and pleasure that I want

I cannot have any of the enjoyment and pleasure that I want

18. Independence

I am able to be completely independent

I am able to be independent in many things

I am able to be independent in a few things

I am unable to be at all independent

Independent living

Thinking about your daily life, which of the following statements best describes your present situation?

19. Taking care of myself

I can take care of myself completely

With help, I can take care of myself

I can take care of myself a bit, but not enough

I cannot take care of myself at all

20. Having social contacts with people you like

I have as much social contact as I want with people I like

I have adequate social contact with people

I have some social contact with people, but not enough

I have little social contact with people and feel socially isolated


21. Feeling safe (By feeling safe we mean feeling safe both inside and outside the home. This includes fear of abuse, falling or other physical harm and fear of being attacked or robbed)

I feel as safe as I want

Generally I feel adequately safe, but not as safe as I would like

I feel less than adequately safe

I don’t feel safe at all

22. Feeling in control of my life

I feel in control of my daily life

With help I feel in control of my daily life

I have some control over my daily life but not enough

I have no control over my daily life

Needs

23. What aspects of your life, your wellbeing, or your health do you want to improve or sustain?

Being more active

Having more social activities

Having more social contacts

Having better nutrition

Losing weight or sustaining my current weight

Being less anxious or depressed

Reducing my blood pressure

Having a better self-image

Being less dependent on other people

Sleeping better

Sustaining my independence

Feeling safer in my home

Feeling safer when I go outside

Taking my medication on time

Improving my physical status after surgery

Managing my chronic disease better

Managing my contacts with care providers better

Improving or sustaining my memory and/or cognitive function

Having more comfort in my home

24. Did the application help improve these aspects of your life?

Yes

No

Use of the application

The following questions are about the applications offered to you. For each application you will be asked some questions.

25. Which applications are offered to you to use?

Application 1

Application 2


Application 3

Application 4

26. Which application(s) do you no longer use?

Application 1

Application 2

Application 3

Application 4

27. Why did you stop using it/them?

I found it too difficult to use

I did not like the look of it

It broke or was damaged

I did not know how to use it properly

It has been replaced by a better application or service

It felt unsafe

Other

28. Are there things regarding the application or service that need to be changed in order for you to use it? Please write your answer here:

The following questions are about the applications you still use.

29. How often did you use the application/service in the past month? *

Multiple times a day

Once a day

Several times per week

Several times per month

Almost never

Application 1

Application 2

Application 3

30. When do you use the application and/or service?

I only use it with my care givers present

I mostly use it with my caregivers present

I mostly use it on my own

I only use it on my own

Application 1

Application 2

Application 3


31. What, in general, are your experiences of using application x? (Repeat for each application.)

Strongly agree / Agree / Neutral / Disagree / Strongly disagree

Application x is fun to use

Application x is reliable in use

Application x is useful for me

32. How easy did you find the application or service to use? (Repeat for each application.)

Learning to operate the application and/or service is easy for me

I find it easy to get the application and/or service to do what I want it to do

My interaction with the application and/or service is clear and understandable

It is easy to become skillful using the application and/or service

It is easy to remember how to perform tasks using the application and/or service

I find the application and/or service easy to use

33. Did you need any assistance while using the application?

No

Yes, from my family

Yes, from my caregivers

34. How often have you had technical problems using the application?

Never

Sometimes

Often

All the time

35. Have there been situations when you did not feel safe because the application did not work properly?

Yes

No


Benefits of application

36. How much value does the application have for you?


It is extremely valuable for me

It is quite valuable for me

It is not really valuable for me

It is not at all valuable for me

Application 1

Application 2

Application 3

37. How has the application or service affected your feeling of safety?

It has made it much better

It has made it a little better

It has not had any effect

It has made it a little worse

It has made it a lot worse

Application 1

Application 2

Application 3

38. How has the application or service affected your feeling of being in control of your life?

It has made it much better

It has made it a little better

It has not had any effect

It has made it a little worse

It has made it a lot worse

Application 1

Application 2

Application 3

39. How has the application or service affected your independence?

It has made it much better

It has made it a little better

It has not had any effect

It has made it a little worse

It has made it a lot worse

Application 1

Application 2

Application 3


40. How has the application or service affected your physical activities?

It has made it much better

It has made it a little better

It has not had any effect

It has made it a little worse

It has made it a lot worse

Application 1

Application 2

Application 3

41. How has the application or service affected the management of your illness?

It has made it much better

It has made it a little better

It has not had any effect

It has made it a little worse

It has made it a lot worse

Application 1

Application 2

Application 3

42. How has the application or service affected your health or wellbeing?

It has made it much better

It has made it a little better

It has not had any effect

It has made it a little worse

It has made it a lot worse

Application 1

Application 2

Application 3

General

This is the final question.

43. Did you fill in this questionnaire yourself, or did you have help from someone else?

I filled it in myself

I had help from a care worker

I had help from someone else

Thank you for completing this survey.


Cost template

Cost estimates of applications

Erasmus University Rotterdam: Marleen de Mul, Marc Koopmanschap

The purpose of the cost estimate below is to get an impression of the extra costs of having applications with uAAL versus applications without uAAL. Please provide a first, provisional cost estimate for each specific application, using the best knowledge you currently have. At the end of the project the pilots will be asked to provide a new and improved cost estimate for each application. Please fill in a table for each application.

When we use the term cost, we mean total costs (excluding VAT): the sum of fixed costs, which do not vary with the number of users (such as most parts of the software development), and variable costs, which do (such as sensors). Total costs are also the sum of personnel costs and material costs; see the table below. Please do not include overhead costs (costs of management, insurance, buildings, etc.); we will apply a general mark-up for those.

Material costs (e.g. hardware, sensors, cables) refer to the costs of purchasing these items per 100 users. When you hire development services from outside your organisation, please state the purchasing costs (excluding VAT) instead of the number of hours, and mark the amount with the € sign.

Implementation costs include the costs of all tasks and efforts needed to prepare the applications and make them function properly at the client's site. This may also involve staff of municipalities, suppliers of home care, etc. Some implementation costs may be shared between several applications; if so, please indicate this in a note in the table.

A worked example of the cost arithmetic is given after the table.


Pilot name: ………….

Application: ……….

Number of expected users of your application: …………….. (A)

Cost categories | Number of hours staff (or euros in case of external purchase) | Total material (and/or personnel) costs of hardware, sensors, training of users, etc. for the number of users stated above (A)

Development costs (= costs of developing the application without the uAAL platform/components) | | NA

Universalisation costs (= costs of adaptation to the uAAL platform) | | NA

Implementation costs | |

Training and supporting users costs | |

Maintenance costs per year (e.g. server costs) | |

Total cost staff per hour, incl. tax and social insurance premiums | | NA

Potential information sources to fill this table: deployment plan (also include costs paid by other parties), internal budgets, internal and external financial reports.
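To make the intended arithmetic concrete, the minimal sketch below computes a provisional total cost and a cost per user from the inputs collected in the table. It assumes a single hourly staff rate and a placeholder overhead mark-up of 25%; all names and figures are illustrative assumptions, not project values.

// Minimal sketch of the cost arithmetic described above. All names and figures
// are illustrative placeholders, not project values.
public class CostEstimateExample {

    /** Total costs (excluding VAT) = personnel costs + material costs,
     *  with the general overhead mark-up applied afterwards by the evaluators. */
    static double totalCost(double staffHours, double staffRatePerHour,
                            double materialCosts, double overheadMarkup) {
        double personnelCosts = staffHours * staffRatePerHour;
        double directCosts = personnelCosts + materialCosts; // what the table collects
        return directCosts * (1.0 + overheadMarkup);
    }

    public static void main(String[] args) {
        // Example: 300 hours of development and universalisation at EUR 60/h,
        // EUR 8,000 of hardware and sensors for A = 100 expected users,
        // and an assumed overhead mark-up of 25%.
        double total = totalCost(300, 60.0, 8000.0, 0.25);
        System.out.printf("Total cost: EUR %.2f%n", total);          // EUR 32500.00
        System.out.printf("Cost per user: EUR %.2f%n", total / 100); // EUR 325.00
    }
}

The extra cost of uAAL for an application would then be estimated as the difference between this total computed with, and without, the universalisation hours.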


Script for evaluation of Advanced Distribution showcase

Testing if an application/deployment complies with the showcase

Procedure type ☒ Analytical

☐ Operational

☐ Technical

☐ Programmatic

Procedure general description

Determine if the deployment being evaluated is indeed distributed.

Procedure script

1. Analyse the architecture of the whole scenario or application, identifying where the software components are physically installed.

2. If software components are distributed across two or more physical nodes, then there is “Distribution”. The next step is to determine whether this distribution makes use of the platform’s advanced features for it.

3. Determine if the software components in the distributed nodes communicate with each other through one of the Advanced Distribution features of the platform:

a. Local discovery and peering

b. Remote discovery and peering through the Gateway

c. Remote node virtualization through the Remote-API

d. Multi-tenancy (only in combination with the above)

Success cases

An application scenario is considered to be designed for Advanced Distribution if the three steps above are verified, with the last one validating at least one of the available options. If any of these fails, the application was not designed for an Advanced Distribution scenario. This does not mean that it is wrong or poorly adapted, only that this showcase does not apply; do not continue with further procedures for this SC. The decision logic of these steps is sketched below.
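The pass/fail logic of these three steps can be captured in a few lines, as in the sketch below. This is an illustration only: the enum merely mirrors the four features listed in step 3 and is not part of the universAAL API.

import java.util.EnumSet;
import java.util.Set;

// Illustrative sketch of the analytical decision above; not a universAAL API.
public class AdvancedDistributionCheck {

    // The four Advanced Distribution features from step 3.
    enum Feature {
        LOCAL_DISCOVERY_AND_PEERING,  // a
        GATEWAY_REMOTE_PEERING,       // b
        REMOTE_API_VIRTUALIZATION,    // c
        MULTI_TENANCY                 // d, only valid combined with the others
    }

    /** The showcase applies if the components span two or more physical nodes
     *  (step 2) and communicate through at least one advanced feature other
     *  than multi-tenancy alone (step 3). */
    static boolean showcaseApplies(int physicalNodes, Set<Feature> featuresUsed) {
        if (physicalNodes < 2) {
            return false; // no distribution at all
        }
        Set<Feature> withoutTenancy = EnumSet.noneOf(Feature.class);
        withoutTenancy.addAll(featuresUsed);
        withoutTenancy.remove(Feature.MULTI_TENANCY);
        return !withoutTenancy.isEmpty();
    }

    public static void main(String[] args) {
        // Three nodes peering through the Gateway, with multi-tenancy: applies.
        System.out.println(showcaseApplies(3,
                EnumSet.of(Feature.GATEWAY_REMOTE_PEERING, Feature.MULTI_TENANCY)));
        // Multi-tenancy alone does not qualify.
        System.out.println(showcaseApplies(3, EnumSet.of(Feature.MULTI_TENANCY)));
    }
}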

Procedure type ☐ Analytical

☒ Operational

☒ Technical

☐ Programmatic

Procedure general description

Confirm the availability of distributed nodes

Procedure script

1. Deploy the software modules of the application scenario under test on arbitrary nodes different from those of its habitual operation (this is the default situation during the centralised lab tests). The nodes must be configured in exactly the same way as the original ones.

2. Perform all, or a representative set, of the typical application operations and verify that the deployment on different physical nodes is seamless and imperceptible to the end user: the application behaves in the same way.

3. For cases where a multi-tenant scenario is deployed, one or both of the following must apply, provided that data sets and configuration are carefully replicated on the “new” nodes (a sketch of these swap checks follows this list):

a. It should be possible to change the “local” node(s) while keeping the “server” nodes the same, and still accomplish step 2.

b. Alternatively to a), the “local” node(s) could be kept while the “server” node is changed, and step 2 should still be accomplished.
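As a rough illustration of these swap checks, the sketch below compares the outcome observed when running the same typical operations on the original and on the swapped deployment. The Deployment class and the outcome strings are placeholders for a pilot-specific test harness; they are not universAAL APIs.

import java.util.Objects;

// Illustrative sketch of the node-swap checks above; Deployment and the
// outcome strings stand in for a pilot-specific test harness.
public class NodeSwapCheck {

    static final class Deployment {
        final String localNode;
        final String serverNode;
        final String observedOutcome; // captured result of the typical operations

        Deployment(String localNode, String serverNode, String observedOutcome) {
            this.localNode = localNode;
            this.serverNode = serverNode;
            this.observedOutcome = observedOutcome;
        }
    }

    /** Step 2: the application must behave identically whichever nodes host it. */
    static boolean seamless(Deployment a, Deployment b) {
        return Objects.equals(a.observedOutcome, b.observedOutcome);
    }

    public static void main(String[] args) {
        Deployment original   = new Deployment("node-A", "server-1", "outcome-X");
        Deployment localSwap  = new Deployment("node-B", "server-1", "outcome-X"); // step 3a
        Deployment serverSwap = new Deployment("node-A", "server-2", "outcome-X"); // step 3b
        System.out.println("3a passes: " + seamless(original, localSwap));
        System.out.println("3b passes: " + seamless(original, serverSwap));
    }
}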


Success cases

Successful results in the above steps indicate that an application designed to benefit from Advanced Distribution does indeed work seamlessly across nodes, no matter where or which they are. Even replacing the physical nodes with new ones keeps the application operative, including in “cloud” scenarios where a node is hosted on a remote server.

Measurement of the value

Focus group: Service Provider

Value centred in ☒ Technological Provider

☐ Service Provider

☐ Assisted Person

☐ Caregiver

☐ Government

Description

Questions:

How “seamless” would you rate this distribution?

If you currently have “single-node” applications, do you think they could benefit from a distributed scenario like this? How?

If you currently have “cloud-based” applications, how would you compare your current connection to the server with these advanced distribution features, in terms of complexity?