TEFIS – D2.1.2 – Initial Global architecture and overall design
Deliverable Title: D2.1.2 – Global architecture and overall design
Deliverable Lead: TCF
Related Work package: WP2
Editor(s): Jeremie Leguay (TCF), Farid Benbadis (TCF)
Dissemination level: Public
Due submission date: 01/09/2011
Actual submission: 12/10/2011
Project Number: 258142
Instrument: IP
Start date of Project: 01/06/2010
Duration: 30 months
Project coordinator: THALES
Abstract: The TEFIS project develops an open platform to access heterogeneous and complementary experiment facilities for communities of software/business developers to test, experiment and collaboratively elaborate knowledge. It provides tools and methodologies to address the full development life-cycle of innovative services. The main objective of this deliverable is three-fold: describe facilities available for the TEFIS project; introduce an updated global architecture design; identify and define viable TEFIS use-cases.
Project funded by the European Commission under the 7th European Framework Programme for RTD - ICT theme of the Cooperation Programme.
License
This work is licensed under the Creative Commons Attribution-Share Alike 3.0 License.
To view a copy of this license, visit http://creativecommons.org/licenses/by-sa/3.0/ or send a letter to Creative Commons, 171 Second Street, Suite 300, San Francisco, California, 94105, USA.
Project co-funded by the European Commission within the Seventh Framework Programme (2008-2013)
Copyright by the TEFIS Consortium
Versioning and Contribution History
Version | Date | Modification reason | Modified by
0.1 | 27/07/2011 | ToC proposal | Farid Benbadis (TCF)
0.2 | 29/07/2011 | Update of ToC | Farid Benbadis (TCF)
0.3 | 16/08/2011 | Id management | Farid Benbadis (TCF)
0.4 | 05/09/2011 | Monitoring, connectors, execution template, KyaTera, eHealth use case, ETICS account management, Botnia | Farid Benbadis, Brian Pickering, Gabriele Giammatteo, José Junior, Annika Sällström
0.5 | 07/09/2011 | TEFIS core services, software decision justification, ProActive, PACA Grid | Elvio Borrelli
0.6 | 14/09/2011 | Teagle, account management | Konrad Campowsky
0.7 | 19/09/2011 | SQS contributions | Jonathan Gonzalez
0.8 | 03/10/2011 | iRODS, Zabbix, Id management, KyaTera, parallel task execution | Brian Pickering, José Junior, Lucia Bonelli, Farid Benbadis
0.9 | 07/10/2011 | Open call experiment description and supervision module | Farid Benbadis, Brian Pickering
1.0 | 10/10/2011 | Open call experiment description | Farid Benbadis
TABLE OF CONTENTS
Executive Summary
1. Software architecture review
  1.1. Description of components
  1.2. Software decision justification
2. Challenges
  2.1. Identity and account management
  2.2. Monitoring
  2.3. Connector Improvements
  2.4. Living lab integration
  2.5. Parallel task execution
3. Use case integration
  3.1. eTravel: current status
  3.2. eHealth
  3.3. IMS-Botnia
4. New Experiments
  4.1. Typology of Experiments
  4.2. QUEENS (Dynamic Quality User Experience ENabling Mobile Multimedia Services)
  4.3. Experiment 2 (Tefpol)
  4.4. Experimenting with Quagga Open API and Cross-layer Coordinated Networks
  4.5. Experiment 4 (Smart Ski)
5. Conclusion
6. Glossary
References
Table of figures
Executive Summary
The TEFIS project develops an open platform to access heterogeneous and complementary experiment facilities for communities of software/business developers to test, experiment and collaboratively elaborate knowledge. It provides appropriate tools and methodologies to address the full development life-cycle of innovative services.
The deliverable D2.1.2 “Global architecture and overall design” extends D2.1.1 by describing the TEFIS software architecture and the additional architectural design of core components such as monitoring and identity management. It also describes the architecture revisions of the connector for the integration of Living Labs and of the Experiment Manager for parallel task execution. Finally, it discusses the integration of our three initial use cases and introduces the new ones coming from the TEFIS Open Call.
The high-level objectives of these two deliverables are three-fold: describe facilities available for the TEFIS project; introduce an initial global architecture design; identify and define viable TEFIS use-cases.
1. Software architecture review
The TEFIS framework is organized around four building blocks: the Portal, the Middleware (including back-end components and core services), user tools, and testbed connectors.
The TEFIS architecture, depicted in Figure 1, is detailed in the following.
Figure 1: The TEFIS Software Architecture
1.1. Description of components
1.1.1. TEFIS portal
The TEFIS portal is a single access point for the definition of experiments and their managed execution remotely on heterogeneous testbeds. It provides the service intelligence for tools and testing methodologies to be customised to the task in hand, giving access to test resources for carrying out the experiment.
The following components form the TEFIS portal:
§ The TEFIS Resource Directory: The Directory is the repository of tools, facilities, and a link to any resources provided by the heterogeneous testbeds. Any resource in the directory can be used for experiment specification and be retrieved from the Resource Directory Interface. This component provides the list of resources for each experiment, based on their respective needs
and requirements, and allows check-in/check-out management of software, hardware, laboratories, documentation or any other resource the Directory accounts for.
§ The TEFIS Identity Management: This architectural block is in charge of creating and managing accounts for TEFIS users. Accounts created here are valid across all components of the TEFIS infrastructure. It manages User Profile definition and the access policies in each case. It is responsible for the protection of the whole platform with simple and effective user account management techniques, whilst ensuring ease-of-use for TEFIS experimenters and testbed providers. User information is always available for the TEFIS Portal administrators.
§ The TEFIS Experiment Manager: The Experiment Manager Interface allows experiments, defined by the user, to be executed on a combination of TEFIS testbeds. It makes use of the TEFIS Resource Directory to list available resources, to allow configuration of those resources and to plan for experiment execution. The Experiment Manager provides interfaces that adapt to the needs of the specific user. Specifically, the user is presented with the following:
o The Experiment designer: Here, the experimenter outlines his/her needs, the experiment specification and design, and in return is presented with test strategy types for tests that may be performed (e.g. functional or integration testing).
o The Experiment planner: Here the user enters the appropriate test strategy and any specific criteria they may have, and gets the test plan as output.
o The Experimental workflow manager: The test plan is input by the user and, in response, partial results of the experiment execution and the activity workflow are returned.
o The Configuration assistant: This subcomponent guides the user through the definition of their experiment, along with its organisation and associated monitoring tasks, providing an easy-to-use interface with a set of basic experiment features that the user can select and complete with relevant detail. Guided by the experiment configuration assistant, the user provides detail for the experiment at different phases.
§ The Experimental Data Interface: This interface allows the experimenter to access their own experimental data as well as to search for publicly available data from other experimenters. Experimental data may take various forms, such as input data used to initiate an experiment or its output, which could include result files from the experiment or monitoring data. Any data held within the RPRS or TEFIS environment is, by default, available only to the owning experimenter. (S)He has the option to make any of these data available to others to allow them to search for related work.
1.1.2. TEFIS core services
The TEFIS core services, as Figure 1 shows, consist of the following components: the Experiment and Workflow Scheduler, the Resource Manager and the Supervision Module. The functionalities offered by these three components are the following:
• Resource management, deployment, and dynamic provisioning;
• Scheduling, Matching and Identification of resources that can be activated;
• Code provisioning with configuration and deployment of necessary packages, data set access and transfer;
• Generic mechanism for the reproduction of user load, incidents and attacks in the experiments.
In particular, the TEFIS Experiment and Workflow Scheduler is the component in charge of scheduling the low level workflow, created from the user experiment, during the run phase of the experiment. It manages the overall orchestration of the running experiments over the available testbeds and also organizes the schedule of the different parts of an experiment, from the build of the service, through regression and performance tests, to the final tests performed by an end user. The Experiment and Workflow Scheduler is a tool that automates the execution of the steps of the user experiment on the heterogeneous set of testbeds without user intervention, hiding from the TEFIS experimenter all the details related to the way a testbed is accessed, as well as how data are transferred to a testbed (e.g., which protocols are used). The Experiment and Workflow Scheduler also provides the automatic transfer of the output data of an experiment step to the step that follows.
The Resource Manager service is in charge of monitoring resource status, retaining information about the resources available and providing the Experiment and Workflow Scheduler with access to the resources needed, as specified in the workflow to be executed. It provides a mechanism to match and identify the correct resource to activate or to contact. This feature is required at the level of the scheduling service, but it is the Resource Manager service that enables the Experiment and Workflow Scheduler to access the appropriate resource. The Resource Manager supports the TEFIS user (i.e., experimenter and/or testbed provider) during resource definition, configuration and booking. It hides the low level details of performing those activities, manages the heterogeneity of the resources, and automatically performs the booking of the resources defined and configured by the experimenter without requiring their intervention.
Finally, the Supervision Module handles all monitoring activities, checking performance within the TEFIS environment itself as well as communicating with the testbeds to retrieve monitoring output from them. The Supervision Module also provides services that allow the configuration of the tools that establish the conditions for the service under test (stress conditions, failure injection, network load variability, etc.) and the features that need to be monitored. It provides the possibility of reusing a monitoring configuration in a reproducible manner in order to conduct subsequent experiments using different configurations.
1.1.3. TEFIS Connectors
The TEFIS project aims to create a federation of testbeds providing experimenters with a general framework to define and execute complex experiments that span different testbeds. The TEFIS testbed federation is expected to be dynamic (the platform must support testbeds joining and leaving) and heterogeneous (potentially any type of testbed can join the federation).
From an architectural point of view, the TEFIS system relies on the concept of a connector to manage the interaction with testbeds.
The main challenge for the connectors is represented by the heterogeneity of the testbeds that TEFIS will support. This heterogeneity resides in their capabilities, technologies, interfaces, and interaction and communication protocols. After analysing the testbeds that will initially take part in the TEFIS federation and the project use cases, a minimum set of macro-functionalities that TEFIS should be able to control on testbeds has been identified (a sketch of a connector interface covering them follows the list):
• resource management: resource reservation, resource creation, resource configuration and resource release. These operations will be used by the TEFIS Experiment and Workflow Scheduler and Resource Manager to create, configure and release resources that will be used to run experiments. To make this process as automatic as possible, connectors should expose a set of operations to control testbed resources from the TEFIS system;
• data I/O management: to run an experiment (or any individual step within the experiment) on a given testbed, experimental input data should be provided and the execution of the experiment will produce output data (experiment results) which will need to be made available to the experimenter. If an experiment is made up of different steps that run on different testbeds and the output of step n is the input of step n+1, TEFIS should be able to transfer (or at least notify the availability of) data from one testbed to another. The support for data transfer operations by connectors is essential to satisfy this use case;
• monitoring capabilities: collecting and storing monitoring data during the execution of an experiment is a key feature that testbeds should typically provide. This data is analysed at the end of (or during) the experiment to evaluate the experiment itself. Since TEFIS will deal with experiments that run on multiple testbeds, to evaluate the whole experiment TEFIS should be capable of collecting monitoring data from the individual testbeds, making it available to the experimenter and, in some complex use cases, filtering, processing and integrating it. Since connectors are the contact point between TEFIS and the testbeds, they have to provide operations to transfer the monitoring data collected by the testbed during the experiment run to TEFIS;
• script execution: following the experiment description, individual steps of an experiment are executed, where possible, automatically by the TEFIS platform on behalf of the experimenter. Of course, this is possible only if the testbeds where the experiment will run have an execution engine that automatically runs scripts and/or execution workflows. One possible way of launching execution on testbeds directly from TEFIS is to provide connectors with an interface that retrieves the script/workflow to execute and launches it on the testbed;
• identity management: any operation executed by the TEFIS system (or using TEFIS connectors) in a testbed is executed on behalf of the experimenters and, therefore, the identity of the experimenter should be used in the authentication and authorization processes. The main challenge in doing this is the heterogeneity of testbeds which might use different formats for credentials, and different authentication and authorization mechanisms. TEFIS should provide experimenters with a uniform access method to testbeds. It would also be worth investigating identity federation mechanisms;
• secure communication: communication between the TEFIS system (that resides in the TEFIS infrastructure) and testbeds (that reside within their own infrastructures) will take place over the
network and, thus, needs to be secure in order to avoid network attacks and information disclosure.
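To make the list above concrete, the sketch below outlines what a connector covering these macro-functionalities could look like. It is an illustration only: the method names and signatures are assumptions made for this document, not the actual Testbed Connector Interface (TCI) specification.

    # Illustrative sketch only: method names and signatures are hypothetical
    # and do not reproduce the actual Testbed Connector Interface (TCI).
    from abc import ABC, abstractmethod

    class TestbedConnector(ABC):
        """One connector per testbed, covering the macro-functionalities above."""

        # resource management: reserve/create/configure/release
        @abstractmethod
        def reserve_resource(self, spec: dict) -> str: ...

        @abstractmethod
        def release_resource(self, resource_id: str) -> None: ...

        # data I/O management: move the output of step n to the testbed of step n+1
        @abstractmethod
        def upload_data(self, resource_id: str, data: bytes) -> None: ...

        @abstractmethod
        def download_results(self, resource_id: str) -> bytes: ...

        # monitoring capabilities: hand collected monitoring data back to TEFIS
        @abstractmethod
        def fetch_monitoring_data(self, experiment_id: str) -> dict: ...

        # script execution: launch a script/workflow on the testbed's engine
        @abstractmethod
        def run_script(self, resource_id: str, script: str) -> int: ...

        # identity management: operations act on behalf of an experimenter
        @abstractmethod
        def authenticate(self, credentials: dict) -> None: ...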
1.1.4. TEFIS Experiments Data Manager
The TEFIS platform needs to provide appropriate management capabilities specifically for the data associated with all aspects of the experiment throughout its lifecycle. The Experiments Data Manager provides both local (the Research Platform Repository Service – RPRS) and remote (Testbed Infrastructure Data Service – TIDS) data facilities to manage all data for testbed providers, experimenters and other TEFIS components via a RESTful interface. Data are stored across the experiment lifecycle, from design to execution and then evaluation, in a logical structure that makes it easy for the experimenter to identify what has been done. During execution, the testbeds can access specific data locations to retrieve appropriate input and write to designated locations when output is available. In addition, and using appropriate security features, the data are protected until such time as the experimenter decides to make them available to all or some.
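As a small illustration of the kind of interaction this RESTful interface enables, the sketch below writes and then retrieves a result file. The base URL, paths and credentials are invented placeholders, not the actual RPRS interface definition.

    # Hypothetical example: base URL, paths and credentials are placeholders,
    # not the real RPRS interface definition.
    import requests

    BASE = "https://rprs.tefis.example/api"  # placeholder address
    auth = ("experimenter42", "secret")      # TEFIS credentials (illustrative)

    # a testbed writes an output file to the designated location
    with open("results.csv", "rb") as f:
        requests.put(f"{BASE}/experiments/exp1/output/results.csv",
                     data=f, auth=auth).raise_for_status()

    # later, the experimenter retrieves the same data for evaluation
    r = requests.get(f"{BASE}/experiments/exp1/output/results.csv", auth=auth)
    r.raise_for_status()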
1.2. Software decision justification
1.2.1. Experiment Manager
For the Experiment Manager, an MVC pattern was followed in order to obtain maximum flexibility and optimal development results, while the four main blocks which make up the Experiment Manager were conceived in line with its logical and conceptual structure. The technology chosen for this component is the PHP language together with one of its most widely used frameworks, CakePHP. The selection was based mainly on two criteria: first, ease of integration and communication with all the other TEFIS blocks; second, the power and efficiency of the language, which are adequate to implement and execute complex algorithms with low latencies. To meet the first requirement, there exists a comprehensive REST library for PHP that simplifies development not only for the Experiment Manager itself, but also should modifications be needed by other blocks; this has turned out to be a crucial feature. With respect to the logical algorithms, PHP, as an interpreted language, can satisfy these requirements. The platform and technologies chosen for the development of the Experiment Manager have therefore made it possible to obtain a fully operational and integrated tool within a reduced development period, saving time and resources, while offering good response times and a sound architecture.
1.2.2. TEFIS core services
Experiment and Workflow Scheduler
To implement the Experiment and Workflow Scheduler of TEFIS, the ProActive Scheduling tool was used. It is a component of the ProActive Parallel Suite1. We chose this software product first of all because it is open source, but also because, among the open source scheduler offerings, ProActive matches the majority of the TEFIS requirements for the Experiment and Workflow Scheduler, needing only minor extensions to fulfil all of them. The features needed by the Experiment and Workflow Scheduler of the TEFIS platform are the following:
1 http://proactive.inria.fr/
• Be able to create a low level workflow whose structure exactly reflects that of the high level workflow defined by the experimenter;
• The translation from the high level workflow into an executable must be easy, i.e. the work required from the TEFIS developers to do this must be limited;
• Add all the necessary and missing information (e.g., information about how to access the testbed connectors) to the high level workflow to turn it into an executable;
• Enact the executable low level workflow;
• Low level workflow tasks must include the logic to interact with the testbed connectors;
• Define as many low level tasks as testbed connectors. In fact, in TEFIS there are different connectors, one for each testbed plugged into the TEFIS platform. Testbed connectors implement the same interface, the Testbed Connector Interface (TCI), but their implementations differ because each of them has to reflect the interaction specifics of a particular testbed. This means each testbed has its own connector and there is a different low level task for each connector;
• Define different “task templates”, one for each testbed (see the sketch after this list). The task template can be seen as the combination of the low level task and its own set of configuration parameters (testbed connector to contact, resources involved in the execution of the activity that the task represents, etc.). In other words, there is one task for each testbed connector and each task can be configured differently depending on the activity it has to perform on the testbed it is associated with. That means the tasks of the low level workflow should be parametrizable;
• Workflow tasks should be linked among themselves, so that beyond the flow of execution there should be a flow of data too. That means the transfer of data from one task towards the one that follows should be done automatically without the user’s intervention;
• Should the workflow not be enacted on a single machine, then the tasks will have to be distributed to different machines to balance their execution load. This is an important point because it allows TEFIS to be scalable: if the number of workflows (and so experiments) increases then more machines can be added to avoid overloading the existing ones;
• Different platforms (UNIX/LINUX, Mac, Windows) must be supported. This means that when the Experiment and Workflow Scheduler has to be scaled the administrator of the TEFIS platform is not constrained to add only a particular platform;
• Allow the monitoring of the execution of the workflow (e.g., what is the overall state of the running workflow, which step is currently executed etc…).
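By way of illustration, the “task template” notion above can be pictured roughly as follows; the field names and connector URLs are invented for this sketch and are not ProActive or TEFIS definitions.

    # Rough illustration of the "task template" concept: one low level task
    # per testbed connector, parametrised per activity. Field names invented.
    from dataclasses import dataclass, field

    @dataclass
    class TaskTemplate:
        connector_url: str                              # testbed connector to contact
        resources: list = field(default_factory=list)   # resources involved
        parameters: dict = field(default_factory=dict)  # activity-specific settings

    # the same kind of low level task, configured differently per testbed
    etics_build = TaskTemplate("https://connector.etics.example/tci",
                               parameters={"activity": "remote-build"})
    planetlab_run = TaskTemplate("https://connector.planetlab.example/tci",
                                 resources=["node1", "node2"],
                                 parameters={"activity": "run-script"})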
As we have said above, the ProActive Scheduling tool matches most of the identified requirements. The only feature it does not provide is the concept of “Task Template”, but as stated in the deliverable “D4.2.1 Experiment and Workflow Scheduler prototype” that feature has been easily implemented.
Resource Manager
In the TEFIS platform we need to manage two separate sets of resources: the heterogeneous resources provided by the testbeds, and the TEFIS internal computational resources needed by the Experiment and Workflow Scheduler to enact the low level workflow created from the experiment the TEFIS user runs. We chose two different existing software products to manage these two sets: TEAGLE2 for the management of the resources provided by the testbeds, and the ProActive Resourcing tool for the internal computational resources.
The reasoning behind this choice is that TEAGLE perfectly matches the TEFIS requirements related to testbed resource management: the storage of resource specifications, the creation of resource instances, the browsing of resource specifications and instances, and the booking of resources. The ProActive Resourcing tool, on the other hand, is highly compatible with the TEFIS Experiment and Workflow Scheduler, since it is implemented alongside the ProActive Scheduling tool in the same suite, the ProActive Parallel Suite. Moreover, the goal of the ProActive Resourcing tool is precisely to provide computational resources to the ProActive Scheduling tool.
Supervision Module
The Supervision Module provides monitoring capabilities within the TEFIS environment. With this in mind, we have identified three main stakeholders:
i. The experimenter, who is looking for feedback on resource requirements for their experiment as well as any application-specific output;
ii. The TEFIS platform operator, who needs to be kept aware of the status of the platform itself; and
iii. The TEFIS testbeds, who wish to keep track of resource usage at their facility.
2 http://www.fire-teagle.org/
These and similar requirements are common to a number of different projects federating or connecting to multiple technologies and services. In common with BonFIRE4, we have selected ZABBIX5 as the basis for the Supervision Module and the associated monitoring capabilities. ZABBIX offers an open source monitoring platform which can be readily integrated into the TEFIS environment. The ZABBIX Server will run locally as part of the TEFIS Core Services, coordinating data gathering and aggregation in response to monitoring configurations defined by the experimenter. Such configurations, using the ZABBIX template concept, can be readily opened up to the testbed providers to identify their metrics and capabilities as part of testbed registration. The ZABBIX Server will also respond to requests from other TEFIS components for monitoring information and set-up; a RESTful interface will be developed as an abstraction layer for TEFIS above and in connection with the ZABBIX API. In conjunction with the server, a ZABBIX Agent takes monitoring input from remote resources to hand on to the server. Agents may run locally at the testbed site, within a deployed VM running a test, or from the TEFIS domain, communicating with the remote test facilities via the TEFIS Connector and the TCI.
The ZABBIX Agent is responsible for returning monitoring data back to the server to query and collate for the experimenter. A MySQL database is used by the server to this end. In the context of TEFIS, although initial experimenter interaction may be run from the Experimental Data Interface to the ZABBIX-provided MySQL database, all data will be transferred to and maintained within the TEFIS Experiments Data Manager (see Section 1.1.4) for curation and provenance tracking.
In summary, to support the Supervision Module and associated experiment and platform monitoring requirements, the ZABBIX open source monitoring platform has been selected in common with BonFIRE. In the TEFIS context, the base server will be made available to other TEFIS components via a TEFIS-specific RESTful interface; a ZABBIX Agent will be deployed to interact with the testbed native monitoring capabilities locally (running on the testbed) or remotely (running within TEFIS and communicating via the TCI and TEFIS Connector); and the data collected and managed initially by ZABBIX will be transferred to and maintained long-term with all other experimental data by the TEFIS Data Services, exploiting existing data interaction capabilities within TEFIS.
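For illustration, the calls that the TEFIS abstraction layer would wrap against the ZABBIX JSON-RPC API look roughly as follows: a login followed by a history query. Parameter names vary between ZABBIX versions, and the server URL and item id are placeholders.

    # Indicative only: ZABBIX JSON-RPC parameter names differ across versions,
    # and the server URL and item id are placeholders.
    import requests

    ZABBIX = "https://zabbix.tefis.example/api_jsonrpc.php"

    def rpc(method, params, auth=None, _id=[0]):
        """Minimal JSON-RPC 2.0 call against the ZABBIX API."""
        _id[0] += 1
        body = {"jsonrpc": "2.0", "method": method, "params": params,
                "auth": auth, "id": _id[0]}
        r = requests.post(ZABBIX, json=body)
        r.raise_for_status()
        return r.json()["result"]

    token = rpc("user.login", {"user": "tefis", "password": "secret"})
    # retrieve values collected for a monitored item
    history = rpc("history.get", {"itemids": ["10001"], "limit": 10}, auth=token)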
1.2.3. TEFIS connector
The main challenge for the TEFIS connector is to handle the heterogeneity of testbeds, which resides in their capabilities, technologies, interfaces, and interaction and communication protocols. The TEFIS connector offers functionalities for resource management, data I/O management, monitoring capabilities, script execution, and identity management.
An interesting solution for the TEFIS connector implementation is the SFA (Slice Federation Architecture) coming from GENI and FIRE projects (such as OneLab2, OpenLab). SFA offers a solution to establish a trust chain between the different providers and users, implementing the federation concepts that have emerged in recent years. It also allows providers to expose their resources in the format they like using RSpec. In this architecture, testbed wrappers offer an XML-RPC based interface to handle identity management and expose resources. With regard to TEFIS needs, SFA could potentially cover the identity management need. However, it does not currently cover the other requirements. SFA implementations
4 http://www.bonfire-‐project.eu/ 5 http://www.zabbix.com/
for resource management mainly consider PlanetLab slices and computational resources; support for the heterogeneous resources proposed by TEFIS would have to be added.
As the coverage in terms of functionality is low today, we have decided for now to specify and implement our own connector, but we will follow the developments on SFA. Convergence points will probably be identified in the future to perform joint developments.
The Slice Federation Architecture (SFA) has nevertheless been selected in the design of the TEFIS connector for PlanetLab. The first reason is that, by contrast to the PLC (PlanetLab Central) API, it gives a unified interface to allocate and access resources from testbeds other than PLC. For example, a PLC user may access resources from PLE (PlanetLab Europe) or PLJ (PlanetLab Japan) seamlessly, as all testbeds can be fully federated together. Federation with SFA is reciprocal: each testbed advertises its resources to the other testbeds and, similarly, users from one of the testbeds may benefit from resources from another testbed. The second reason is that initial work has already been done to federate PlanetLab resources using Teagle.
2. Challenges
2.1. Identity and account management
Identity and account management relates to how users are identified and authorized across the TEFIS platform, its internal systems, and the connected testbeds. It covers issues such as:
• how users are provided with an identity on the TEFIS platform, its internal systems, and the available testbeds,
• how this identity can be used,
• how the TEFIS identity can be associated with the other required identities, and
• how the identity can be protected.
The identity management module is thus responsible for managing information required for authentication. It represents one challenging aspect for the TEFIS portal. Indeed, the following aspects have to be taken into account when considering this challenge:
The different TEFIS internal systems (Teagle, iRODS, Zabbix, and ProActive6) currently use separate credentials, but it is foreseen to unify account management for these components.
Each testbed (currently Botnia, ETICS, IMS, KyaTera, PACA Grid, and PlanetLab) uses its own credentials.
Some TEFIS users may already have existing accounts on different testbeds or frameworks while others need to create those accounts. The TEFIS portal should then be able to deal with both cases. For this purpose, below we suggest a way to delegate account creation for both TEFIS internal systems and testbeds to the TEFIS portal.
6 See Figure XX for where these technologies are used in the TEFIS platform.
2.1.1. TEFIS portal account creation
To use the TEFIS system, each user must first create an account. Following the “register” link on the TEFIS portal, the user is taken to the interface shown in Figure 2. This interface asks the user for personal information (first and last names, organization, and email address) and allows him/her to choose a username/password pairing, which constitutes the credentials to access the TEFIS platform.
This username/password pairing is required by the portal each time the user needs to access the TEFIS platform, where he/she can list his/her activity and get access to the TEFIS system.
The user’s connection to the TEFIS portal may grant him/her direct access to internal systems and the different testbeds. Such a mechanism, however, may present some security issues, which we address in the rest of this section.
After the registration process is completed the user account is not automatically activated. Instead, an administrator has to review and approve each individual application. When this has happened, the account details are transferred from a temporary database to the Teagle user repository where they are available for the different TEFIS components.
Figure 2: The TEFIS account creation interface
2.1.2. Account management of TEFIS internal systems
In this section, we discuss account credential management for the TEFIS internal systems. We address each internal system according to how it operates.
Teagle: Accounts created on the TEFIS portal through the process described above are stored in the Teagle user repository. Passwords are not stored in cleartext; rather, an MD5 hash of the user’s password is calculated and stored.
The Teagle user repository exposes a RESTful interface that the other TEFIS components can query to perform authentication.
Currently, this user database is not used by all components within the TEFIS architecture. Only the Teagle components and the TEFIS Experiment Manager use it. In the future, a harmonization of the credentials
management across the core components is planned. In this regard, there is also an investigation under way to identify any other interfaces for user accounts that should be provided, e.g. LDAP.
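To illustrate the intended use, a TEFIS component could authenticate a user against the repository along the following lines. The endpoint path and response fields are assumptions; only the MD5-hashed password storage is taken from the description above.

    # Sketch only: the endpoint path and response fields are hypothetical;
    # the MD5 hashing of stored passwords is as described in the text.
    import hashlib
    import requests

    def authenticate(username: str, password: str) -> bool:
        md5 = hashlib.md5(password.encode("utf-8")).hexdigest()
        r = requests.get(
            f"https://teagle.tefis.example/repository/users/{username}")
        if r.status_code != 200:
            return False  # unknown user
        return r.json().get("passwordHash") == md5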
iRODS: iRODS provides a traditional authentication method: a user account is protected via username and password. There are two types of user: administrators and regular users. An administrator may use any of the iRODS client-side tools to access the data management system and carry out standard administrative tasks, such as creating new users. Passwords are encrypted prior to being saved to disk.
Users with an iRODS account may make use of iRODS’ own built-in access control, thereby protecting their own data or allowing other users to access their data or the data structures (folder system) generated for that user.
These iRODS capabilities, specifically user authentication and access permission control, provide the base functions required by the TEFIS stakeholders:
• the TEFIS administrator can create and control user accounts;
• the TEFIS end-users (experimenters) can access the data services folders using their authentication details (username and password); they can also provide and manage access to their data and data areas;
• the TEFIS testbed providers can gain appropriate access to user data areas with the agreement of the TEFIS end-user.
From release 1.1, iRODS also provides support for GSI (Grid Security Infrastructure7) as an authentication method in addition to the default secure password authentication method. In addition, it supports the use of Kerberos8.
Zabbix: Zabbix supports a number of different authentication mechanisms, including:
• a standard username/password pairing, with user credentials stored internally within the Zabbix database;
• LDAP authentication, which is useful if other components already use LDAP and therefore users are already defined as part of an LDAP structure; and
• Web server authentication (via HTTP): Zabbix can support any authentication method provided by the web server.
This all means that there is significant flexibility in Zabbix for integration into the TEFIS platform. The choice will probably come down to the role of the various stakeholders (TEFIS experimenter, TEFIS administrator and TEFIS testbed providers) when accessing monitoring facilities. There are in addition three types of user supported, with increasing powers over the Zabbix system:
7 http://www.globus.org/security/overview.html 8 https://www.irods.org/index.php/Kerberos
1. the standard Zabbix user, with limited access to monitoring facilities, and without authority to add or change monitoring templates;
2. the Zabbix administrator, with the added capability to access configuration, allowing the administrator (though not the Zabbix user, that is, the TEFIS experimenter) to add or remove resources to be monitored; and
3. the Zabbix super user, with access to all Zabbix utilities and functions, including account management.
In the context of the roles supported in Zabbix, and the authentication methods offered, it would be possible to integrate Zabbix into the TEFIS ID management schema, mapping TEFIS roles to the Zabbix roles and exploiting custom extensions to the Zabbix API. In collaboration with the ID management service, the Experiment Manager could, for instance, request a Zabbix user account to be created as a standard Zabbix user. This user could then be given read-only access to predefined monitoring configurations, based on those defined by the testbed providers during registration.
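Schematically, and reusing the rpc() helper and token from the earlier ZABBIX sketch in Section 1.2.2, such an account-creation request could look as follows; the group id and parameter names are assumptions that vary across ZABBIX versions.

    # Schematic only, reusing rpc() and token from the earlier ZABBIX sketch;
    # the group id and parameter names are assumptions.
    standard_group = "7"  # hypothetical "TEFIS experimenters" user group
    rpc("user.create",
        {"alias": "experimenter42",
         "passwd": "generated-secret",
         "usrgrps": [{"usrgrpid": standard_group}]},
        auth=token)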
ProActive: ProActive, and in particular the ProActive Scheduling tool and the ProActive Resourcing tool, are client/server applications. In both, regardless of which method is actually used to perform the authentication, credentials need to be passed from the client to the server side of the Scheduling or Resourcing tool through the network. The data will be encrypted with an AES symmetric secret key to allow unlimited credential size, and the AES key itself will be encrypted with an RSA keypair.
The server side of the Scheduling and Resourcing tools owns a public key that a client can request so that it can encrypt its credentials to perform authentication. This method does not require the administrator of the Scheduling or Resourcing tool to manually propagate public keys to all the users. Users can encrypt their credentials with the create-cred[.bat] script distributed with the ProActive Scheduling and Resourcing server and client tools.
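The hybrid scheme just described (a symmetric AES key for the payload, wrapped with the server's RSA public key) is a standard pattern. The sketch below reproduces the idea in Python with the cryptography library; it is an illustration of the pattern, not ProActive's actual create-cred implementation.

    # Standard hybrid encryption pattern as described above; an illustration,
    # not ProActive's actual create-cred implementation.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes

    server_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    server_public = server_private.public_key()    # published by the server

    credentials = b"user:password"                 # arbitrary-size payload
    aes_key = AESGCM.generate_key(bit_length=256)  # symmetric key for the payload
    nonce = os.urandom(12)
    ciphertext = AESGCM(aes_key).encrypt(nonce, credentials, None)

    # only the small AES key is RSA-encrypted, allowing unlimited credential size
    wrapped_key = server_public.encrypt(
        aes_key,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None))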
Concerning the authentication and authorization method, ProActive (both the Scheduling and Resourcing tools) can store user accounts in two ways: in files, or via LDAP. The property “pa.scheduler.core.authentication.loginMethod” in the file “config/scheduler/settings.ini” specifies which kind of authentication will be used in the ProActive Scheduling tool. There is an equivalent property, “pa.rm.authentication.loginMethod”, in the file “config/rm/settings.ini” to specify the kind of authentication used by the ProActive Resourcing tool.
By default, the authentication method used by the ProActive Scheduling and Resourcing tools is file-based (e.g., in the case of the ProActive Scheduling tool, the value of the property “pa.scheduler.core.authentication.loginMethod” is “SchedulerFileLoginMethod”). If the user wants to use LDAP-based authentication, (s)he has to replace the “SchedulerFileLoginMethod” value with “SchedulerLDAPLoginMethod”.
When ProActive uses file-based authentication, user accounts, passwords, and group memberships are stored in two files: “config/authentication/login.cfg” and “config/authentication/group.cfg”. In the first, each line follows the format “user:password”, while in the second a line looks like “user:group”, where “group” can be one of two values: “user” or “admin”. The path of these two files can be
changed by specifying their respective absolute paths as the values of the properties “pa.scheduler.core.defaultloginfilename” and “pa.scheduler.core.defaultgroupfilename” in the file “config/scheduler/settings.ini” for the ProActive Scheduling tool, and of the properties “pa.rm.defaultloginfilename” and “pa.rm.defaultgroupfilename” in the file “config/rm/settings.ini” for the ProActive Resourcing tool.
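For clarity, two illustrative file fragments in the format just described (user names and passwords invented):

    config/authentication/login.cfg:
        alice:secret1
        bob:secret2

    config/authentication/group.cfg:
        alice:admin
        bob:user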
The ProActive Scheduling and Resourcing tools are able to connect to an existing LDAP to check user login/password and verify user group membership. To use LDAP there are several points to configure: the path in LDAP where the ProActive Scheduling/Resourcing tool user and admin entries are stored, the LDAP groups that define user and admin group membership, the URL of the LDAP server, the LDAP binding method used by the connection, and the configuration of SSL/TLS if a secured connection between the ProActive Scheduling/Resourcing tool and LDAP is needed. All LDAP connection parameters are set in the file “config/authentication/ldap.cfg”. The files “config/scheduler/settings.ini” and “config/rm/settings.ini” contain the properties that define the path to the configuration file in which all LDAP connection and authentication properties are stored, for the ProActive Scheduling and Resourcing tools respectively. The default path identifies the default LDAP configuration file: “config/authentication/ldap.cfg”. More details about the configuration of LDAP authentication/authorization for the ProActive Scheduling and Resourcing tools can be found in the ProActive documentation, accessible or downloadable as an HTML or PDF document from “http://proactive.inria.fr/index.php?page=manual_proactive”.
File-based and LDAP authentication/authorization can be used simultaneously. This means that if LDAP-based authentication fails, ProActive can check the user’s password and group membership in the login and group files, as in the file-based method. To allow this there are two rules (the corresponding configuration entries are sketched after the list):
• If LDAP group membership checking fails, fall back to group membership checking with the group file. To activate this behaviour, the “pa.ldap.group.membership.fallback” property has to be set to true in the LDAP configuration file;
• If a user is not found in LDAP, fall back to authentication and group membership checking with the login and group files. To activate this behaviour, the “pa.ldap.authentication.fallback” property has to be set to true in the LDAP configuration file.
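Assuming the usual key=value property syntax, the corresponding entries in “config/authentication/ldap.cfg” would then read:

    pa.ldap.group.membership.fallback=true
    pa.ldap.authentication.fallback=true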
2.1.3. Testbed account management
By contrast to the internal systems, which are only used by the TEFIS platform, credentials are also required to access the different testbeds connected to TEFIS. Access to these testbeds is also possible directly, without going through the TEFIS portal.
PlanetLab
Creating an account on the PlanetLab testbed requires the user to fill in the form shown in Figure 3. Once this form has been filled in, an email is sent to the user in order to confirm account creation. Finally, a PlanetLab administrator validates the subscription, which allows the user to access the PlanetLab facilities.
In order to access resources on the PlanetLab testbed, the user also needs to generate an SSH key pair, which will grant him/her access to any PlanetLab node, including those on his/her own site. All the experiments run by the user will require this SSH key pair.
Figure 3: The PlanetLab account creation form
PACA Grid
Access to the external PACA Grid is secured using the SSH protocol. In order to gain access to the external Grid, PACA labs and SMEs must ask the grid administrator for SSH credentials. The creation of a user account is an offline process (i.e., there is no registration form on PACAGrid). The user can request that the PACAGrid administrator create their account by sending an email in which the following information is specified:
• The user’s first and last name;
• The email address at which the user wishes to receive the private and public key pair used to access PACAGrid.
After the request has been approved and the account has been created by the administrator of PACAGrid, the user receives the private and public key pair that he can use to authenticate himself on PACAGrid.
ETICS
The ETICS authentication mechanism is based on X509 digital certificates. The prerequisite to register on the system is to have a trusted digital certificate installed in the browser. The account creation procedure is as follows:
1. the user goes to the ETICS Portal registration form with the certificate installed in the browser;
2. the user fills in the required fields and submits the request;
3. an email is sent to the ETICS administrator to activate the user;
4. an email is sent to the user as confirmation of the registration.
Once registered, the user can access the ETICS Portal simply by using the same certificate, installed in the browser, that was used during the registration process.
The ETICS authorization mechanism is based on a set of seven pre-defined roles, detailed in Table 1. Administrators (or any other users with appropriate rights) can assign users one or more roles in one or more ETICS modules. User permissions can be added and removed directly in the ETICS Portal through an ad-hoc interface shown in Figure 4.
Table 1: Pre-defined roles on which the authorization mechanism of ETICS is based
Role | Description
Administrator | Administrator of the system. This role grants access to all system functionalities.
Developer | Allows read/write access to the configurations of the granted modules. It also allows remote builds and remote tests to be submitted, and artefacts (packages and reports) to be registered.
Guest | Allows read-only access only.
Integrator | Allows remote builds to be submitted, artefacts (packages and reports) to be registered in the repository and volatile areas to be used.
ModuleAdministrator | Allows read/write access to child modules and configurations for the granted modules. It also allows permissions for the granted modules to be viewed/granted/deleted.
ReleaseManager | Can tag artefacts and releases and upload them to the repository.
Tester | Allows remote tests to be submitted, test artefacts (packages and reports) to be registered in the repository and volatile areas to be used.
Figure 4: The ETICS interface
It is worth mentioning that user permissions are taken into consideration only for write operations, while read operations do not require any authentication/authorization. Therefore, all data relating to ETICS projects is freely browsable by any user, but only authorized users can change project data and submit jobs.
User and role management can also be handled programmatically via the ETICS Web Service Interface. This interface provides methods to create, modify and delete users, check user existence, list users in the system, list user roles, activate users, and revoke user permissions. User authentication and authorization, by contrast, are performed internally by the ETICS web service implementation, which checks, for each request, the identity of the caller using the DN in the digital certificate contained in the call (made over HTTPS) and finds out (by interrogating the ETICS database) whether the user is allowed to execute the requested operation. This means that the authentication and authorization processes can neither be intercepted nor overridden without changing the system implementation.
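Schematically, a write operation against the web service therefore carries the caller's certificate with the request; in Python this could look as follows. The endpoint URL and payload are placeholders, not the actual ETICS interface; only the client-certificate mechanism reflects the design described above.

    # Placeholder endpoint and payload: only the client-certificate mechanism
    # reflects the ETICS design described above.
    import requests

    r = requests.post("https://etics.example.org/service/users",
                      json={"operation": "createUser", "login": "newuser"},
                      cert=("usercert.pem", "userkey.pem"))  # X509 identity (DN)
    r.raise_for_status()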
IMS testbed
Access to the SQS IMS testbed and the different tools it provides is gained through a secure VPN connection. The access protocol for the IMS core is SIP. For this, the experimenter uses, during the experiment execution, the username and password stored in the IMS core and provided by the testbed administrator.
KyaTera
In order to acquire a KyaTera account, the user needs to access the KyaTera website9.
From the site the user accesses the "registration" tab. Here, the user has a series of boxes to fill in:
• Full name: The full name of the user/researcher
9 A new address will be provided separately from the present document.
• Access name: The user name to log on to the site or to use the network
• Password: The desired password to log on to the site or to use the network
• Confirm password: the password is repeated to check the spelling
• E-mail: The e-mail address used to confirm the registration
• Confirm e-mail: the e-mail address repeated to check the spelling
• Company name: The name of the company, institution or university of the user/researcher
After the user fills in all the boxes (all fields are mandatory) the user hits the registration button.
Once a confirmation e-mail has been received, the user will need to follow the steps it contains, clicking on the confirmation link to start.
If the registration is not accepted by KyaTera, the user will receive the reasons and can contact the KyaTera team by e-mail.
After the user is properly registered in the system, access to the network is via the KyaTera website, where the user needs to enter the registered username and password. The KyaTera website uses a REST interface to access the entire set of network features, so those features are accessible to other applications, not just the website.
In this way, after the user is registered, access can be provided directly by the TEFIS portal, linking the TEFIS account with the KyaTera site.
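Since the site address is still to be announced (see footnote 9), the following sketch uses a placeholder base URL and invented paths; it only illustrates how a linked TEFIS account could be logged in through the REST interface.

    # Entirely illustrative: base URL and paths are placeholders (the real
    # address will be provided separately, see footnote 9).
    import requests

    BASE = "https://kyatera.example/rest"  # placeholder
    session = requests.Session()
    session.post(f"{BASE}/login",
                 data={"user": "alice", "password": "secret"}).raise_for_status()
    features = session.get(f"{BASE}/network/features").json()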
Botnia
Access to the Botnia Living Lab resources and the Living Lab facility itself is managed manually. The resources available are mainly:
User-involvement expertise
These resources consist of research expertise in the field of user-centred design and evaluation, and they support experimenters in setting up and running user involvement activities.
Access mechanism: The experimenter gets access to these resources by signing a paper-based agreement which controls access.
Methodology for user involvement
For user involvement, one resource provided by the Botnia Living Lab is the Form-IT methodology.
This is an iterative and interactive process in several steps for user engagement in all phases of the development of an IT-based service/product, from need finding to beta trial and pre-market launch. Different methods and tools are used for the professional support of user involvement. Qualitative and quantitative methods are often combined for the best results, e.g. web-based questionnaires to
investigate specific areas across a bigger user group, and observations and interviews to go into greater depth around specific issues and to get answers on why and how. For user involvement, it is very important to recruit the right users for the purposes of the experiment.
Access mechanism: The experimenter gets access to these resources by signing a paper-based agreement which regulates access.
The Botnia Living Lab Users Database
This is a database of 6000 creative end-users (individuals) aged 18 and older in Sweden, with access to end-users around the world via third parties. The Botnia user database is currently implemented as a MySQL database in which basic end-user data for end-user involvement are stored.
Access mechanism: The experimenter can only get access to this resource via a local Living Lab host. No registration is possible.
2.1.4. Requirements
The general requirement is to enable TEFIS users to access both testbeds and TEFIS internal resources as transparently as possible, while not impacting the identity management mechanisms in place on every testbed. The latter represents the major constraint when trying to identify a unified ID management solution for the whole TEFIS system, especially when the testbed does not allow any modification/extension to the existing mechanisms (either in the account management or in the authentication process).
In more detail, the following basic requirements for TEFIS ID management have been identified:
1. It should enable the creation of accounts on the different testbeds/TEFIS services and link them to the TEFIS account in the TEFIS portal;
2. It should enable a TEFIS user to access all services/testbeds with his/her own privileges by logging in only once at the TEFIS portal;
3. Any identity information must be protected within the TEFIS system.
Furthermore, the following constraints must be considered. First, it is not possible to modify and/or add external interfaces (for authentication) to the testbeds:
• Every testbed should get as input a request with the specific credentials (e.g. an X509 certificate for ETICS) of the user who generated the resource access request. Generally, these credentials and those of TEFIS are not the same.
Second, the requests to the different testbeds in a workflow are not always interactive. Every test is a task in a workflow: in the general use case, a user logs in to the TEFIS Portal, defines a test sequence and starts it. During test execution there is no interaction with the user, who has probably logged out from the portal and will return only to read the results. This makes it impossible to interact with the user during test execution; for example, it is impossible to ask for the user's testbed credentials.
2.1.5. Linking internal and external system accounts to TEFIS account The requirements set out above in Section 2.1.4 refer to the provision of access to both internal and external resources, preferably via single sign-on authentication. Specific single sign-on technologies are discussed in the next section. Internal agreement between the TEFIS components should present no specific obstacle, beyond any technical hurdles outlined in Section 2.1.2 for the individual TEFIS components and the technologies used to support them. The process would be roughly as follows:
Initial Registration
TEFIS portal: the user requests an account to be created. Once validated, the user is able to select a username and password.
Once the user is registered and the account approved by the TEFIS administrator, corresponding user accounts are created in the relevant TEFIS components: the Experiment Manager, the Supervision Manager and the Data Services.
Subsequent sign-on
TEFIS portal: the user logs into the TEFIS portal using the credentials previously created and approved by the TEFIS administrator.
The ID manager is then responsible for presenting the user's credentials to any TEFIS component accessed by the user (such as the Experiment Manager, to create or modify an existing experiment, or the Data Services, to review experimental data) so that the user's account details can be identified in the appropriate component.
Within the TEFIS platform, therefore, the process of ID management is relatively simple and straightforward: the experimenter is provided with a single set of credentials and thereby gains access to any TEFIS-internal sub-system. Credentials for each of those sub-systems are specific to the individual component and will have been created during user registration, on the back of the TEFIS account creation.
The issue is slightly more complicated for the TEFIS testbeds: external resources which may not reside within the TEFIS domain and which may impose their own authentication and access requirements. A number of different options are possible:
1. The TEFIS portal maintains a single account on the testbed and allows users to use this account as required. This may not, however, be allowed by all testbed providers, and it may make accounting problematic, in that it may not be easy to match usage of the “TEFIS portal” account on a testbed to individual TEFIS experimenters. Further, the TEFIS experimenter would have to delegate all activity to TEFIS, including access permissions to those parts of their work performed on the testbeds.
2. The TEFIS experimenter may entrust their testbed user credentials to the TEFIS platform. This would mean that TEFIS interacts with the testbed(s) effectively as the user. However, this may be deemed impersonation and is open to potential misuse. Users may not be willing to extend this level of trust to TEFIS.
3. External resource access and exploitation may be enabled using an object capabilities model. This is based on the issuance of an unforgeable and unguessable resource handle (or URL) by the resource administrator, allowing the requester (in this case the TEFIS experimenter) to access the resource for a specified period and purpose. This would leave control of the user credentials with the TEFIS experimenter, allowing them to delegate, if desired, authority to other TEFIS components for a limited time and purpose.
At the present time, the latter solution, based on the object capabilities model, seems the most attractive. The final choice and implementation, however, are the responsibility of the Connectors and the TEFIS Connector Interface (TCI).
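As an illustration of the object capabilities model, a capability handle can be generated from a cryptographically strong random token and bound to a resource, a holder and an expiry time. The sketch below, in Python, uses hypothetical class and field names that are not part of the TEFIS specification:

import secrets
import time

class CapabilityIssuer:
    """Issues unguessable, time-limited resource handles (object capabilities)."""

    def __init__(self, base_url):
        self.base_url = base_url
        self._grants = {}  # token -> (resource_id, holder, expiry)

    def issue(self, resource_id, holder, ttl_seconds):
        # 32 bytes of randomness make the handle practically unguessable.
        token = secrets.token_urlsafe(32)
        self._grants[token] = (resource_id, holder, time.time() + ttl_seconds)
        return f"{self.base_url}/capability/{token}"

    def redeem(self, token):
        grant = self._grants.get(token)
        if grant is None or time.time() > grant[2]:
            return None  # unknown, revoked or expired capability
        return grant[0]  # the resource the holder may access

# Example: a testbed administrator grants an experimenter one hour of access.
issuer = CapabilityIssuer("https://testbed.example.org")
url = issuer.issue("vm-cluster-07", holder="experimenter-42", ttl_seconds=3600)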
In this section, the issue of TEFIS-internal and TEFIS-external (testbed) access has been discussed and a number of different scenarios and options presented. In the following section, individual technologies to satisfy these requirements are outlined.
2.1.6. Draft Analysis and Proposed approach This section reports some considerations resulting from a first analysis of the requirements and issues introduced above and, based on them, proposes an approach as the basis for the identification of an ID management solution for the whole TEFIS system.
A more detailed analysis of the identity management requirements and of the current ID management on the testbeds and TEFIS internal components will be performed in the next phase of the project; based on the results of this study, a global identity management solution for the whole TEFIS system will be provided.
The goal of that analysis will be to understand whether identity federation can be applied in the TEFIS system. Identity federation, or federated identity, is intended as a set of capabilities enabling the portability of a user's identity information across different security domains (TEFIS testbeds/internal components), enabling users of one domain to securely access resources (data or services) of another domain in a seamless and transparent way, without the need to perform redundant authentication procedures (this is also referred to as Single Sign-On).
From a first analysis of the ID management mechanisms currently in force on the testbeds and TEFIS internal components, it can be deduced that some of the testbeds are black boxes that can be neither modified nor extended in any part and, among other things, cannot trust the authentication performed by TEFIS. This automatically excludes the possibility of a unified federated approach for the whole TEFIS system. Solutions like Shibboleth and OpenID (introduced below) require that every service in the federation (both the TEFIS internal components and the testbeds) trust the user authentication performed by TEFIS. To achieve this, it could be necessary to install some elements (e.g. a Shibboleth Service Provider, if the Shibboleth identity federation implementation is chosen, see Section 2.1.8) that grant access rights on behalf of the testbed's or TEFIS internal service's ID module. This is not feasible at connector level and is not possible if the testbed or service is treated as a black box.
Given these assumptions, a possible solution based on the following hybrid approach is foreseen:
• implement federation for those components/testbeds which either “natively” allow bypassing their authentication process by trusting another Identity Provider, or can be extended to accomplish this;
• for those components/testbeds which do not allow federation, implement an identity binding solution.
2.1.7. Outline of ID management functionalities For ID management, the following macro-functionalities have been identified:
• Creation of the account and synchronization
• Identity and/or credential propagation
Both of these items should be managed in a flexible way, in order to handle, as efficiently as possible, the heterogeneity of the user management systems of the services and testbeds.
Accounts creation and synchronization
This macro-functionality deals with the registration of a new TEFIS user and the creation of accounts on the other services and testbeds. Two main classes of external components are considered:
• components that expose user creation interfaces
• components that do not expose any automatic user creation method
In the first case, synchronous CRUD (Create, Read, Update, Delete) operations on user databases should be provided. This means that when a TEFIS user is created (i.e. a new user is added to the TEFIS user database), an account bound to the original TEFIS user account is automatically created in the TEFIS services and testbeds as well, i.e. new user accounts are added to the databases of the services or testbeds.
This approach applies only to those external components (testbeds) which already expose, or will expose, a user management interface accessible from TEFIS. This goal could be reached using existing or innovative federation and user synchronization technologies (discussed later in this section); a minimal sketch of the idea follows.
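As a sketch of the synchronisation case, account creation in TEFIS could fan out to every external component that exposes a user management interface; the endpoints and payload below are hypothetical, assuming a RESTful user creation interface per testbed:

import requests

# Hypothetical registry of testbeds/services exposing a user creation API.
USER_API_ENDPOINTS = {
    "etics": "https://etics.example.org/api/users",
    "pacagrid": "https://pacagrid.example.org/api/users",
}

def on_tefis_user_created(tefis_user_id, profile):
    """Propagate a newly created TEFIS account to all synchronizable testbeds."""
    bindings = {}
    for testbed, endpoint in USER_API_ENDPOINTS.items():
        response = requests.post(endpoint, json={
            "external_id": tefis_user_id,   # bind to the original TEFIS account
            "name": profile["name"],
            "email": profile["email"],
        }, timeout=10)
        response.raise_for_status()
        # Remember the testbed-local account id for later identity propagation.
        bindings[testbed] = response.json()["account_id"]
    return bindings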
Whenever the first approach is not applicable, a user association (identity binding) module should be provided. With this module it should be possible to associate, from the TEFIS portal, a TEFIS user account with user accounts already existing on external services or testbeds. This implies that the user's credentials (that is, all information necessary for authentication, whether login/password pairs, SSH private keys, or certificates) must be stored in a repository inside the TEFIS system. This repository must be secured with strong security mechanisms (strong encryption), and access to credentials should be protected in order to prevent a malicious person who unlawfully gains root privileges on the TEFIS server from impersonating a registered user.
Identity and/or credential propagation
This macro-functionality deals with mechanisms for securing access to the TEFIS services and testbeds.
As for account management, two different classes of services or testbeds are considered here as well. For those components which support identity federation, i.e. which are able to trust users of another (federated) domain, the user identity can be propagated from the TEFIS system. For the other cases, the user credentials associated with the specific service or testbed can be fetched from the internal credential repository and then used to perform the required operations.
Finally, when the federation approach is applicable, a mechanism for role mapping between the heterogeneous TEFIS systems should also be included, along with a distributed authorization service that provides consistent role-based authorization decisions throughout the TEFIS system (allowing policies to be defined and access requests to be managed).
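A minimal sketch of such a role mapping, with illustrative role names on both sides (none of which are defined by TEFIS at this stage):

# Hypothetical mapping from TEFIS roles to testbed-local roles.
ROLE_MAP = {
    "etics": {"experimenter": "builder", "admin": "project-admin"},
    "pacagrid": {"experimenter": "job-submitter", "admin": "operator"},
}

def authorize(testbed, tefis_role, requested_action, allowed_actions):
    """Map a TEFIS role onto a testbed role and take a consistent decision."""
    local_role = ROLE_MAP.get(testbed, {}).get(tefis_role)
    if local_role is None:
        return False  # no mapping defined: deny by default
    return requested_action in allowed_actions.get(local_role, set())

# Example policy: on PACAGrid, job submitters may only submit and monitor jobs.
policy = {"job-submitter": {"submit_job", "get_status"}}
assert authorize("pacagrid", "experimenter", "submit_job", policy)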
2.1.8. Proposed Identity management and federation solutions Below is a brief summary of some solutions for the implementation of identity management and federation.
Single Sign On solutions A number of different single sign-on authentication options are available, as presented here:
OpenID
An open standard allowing for the distributed authentication of users against traditional or other credentials, now available from many different providers such as Google, the BBC, IBM and PayPal, amongst others.
Applied to TEFIS: would provide a simple mechanism for TEFIS-internal single sign-on access. It may also be extensible to TEFIS-external resources if they support the standard. It may readily be used in association with other mechanisms such as object capabilities.
Requirements addressed: 1. single sign-on; 2. TEFIS internal access; 3. credential storage (less clear for TEFIS external resources).

Shibboleth®
A standards-based, open source software package allowing web-based single sign-on inside and beyond individual organisations. Extended security features allow users to maintain control over what is shared with individual applications.
Applied to TEFIS: as above. There may be some added advantages for the user with respect to controlling what information is passed and to whom. May also provide a simple means of handling TEFIS-internal and -external access.
Requirements addressed: 1. single sign-on; 2. TEFIS internal access; 3. TEFIS external access (if supported); 4. credential storage.

LDAP
Provides identity validation and access control to data and applications within an organisation.
Applied to TEFIS: would provide straightforward support for access control within the TEFIS domain. This is less clear for TEFIS-external resources. In addition, access control to data and structures is already managed by iRODS for the data services in TEFIS.
Requirements addressed: 1. single sign-on; 2. TEFIS internal access; 3. credential storage (not clear for TEFIS external resources).

Eduroam
Authentication of a user is performed at the home domain and authorisation is performed at the guest institution. A user is identified by a username and a realm name, which identifies the user's home institution. The authentication infrastructure is implemented by a pool of RADIUS servers.
Applied to TEFIS: would allow single sign-on access for TEFIS-internal systems. If TEFIS-external systems support it, it may be extensible.
Requirements addressed: 1. single sign-on; 2. TEFIS internal access; 3. TEFIS external access (if supported).
There are many different technologies available which may suit the TEFIS platform requirements. From the summary above, the outstanding issue in most cases is access to the TEFIS-external testbed resources. At the very least, the selected standard or technology must be supported by the testbed.
Within the TEFIS domain, therefore, a number of different solutions are available which support a single access mechanism allowing the TEFIS experimenter to use the TEFIS-internal components. It is less clear how access to external resources, such as the testbeds, can be managed within a single solution. The following sections outline some of the related issues.
Credential storage on TEFIS servers The simplest solution is to store all the credentials for the various systems on the TEFIS servers. Thus, the information necessary for authentication, whether login/password pairs, SSH private keys, or certificates, is stored on the server separately for each user. When a user logs onto the TEFIS portal, his/her credentials are used to establish connections with the different systems and thus give access to all available resources.
The main drawback of this solution is that it allows a malicious person who has unlawfully gained root privileges on the TEFIS server to impersonate any registered user. In that case, the malicious person can delete and alter the data of all users on the different systems and, more importantly, use the resources for malevolent acts (e.g. worm and denial-of-service attacks).
Credential Vault Service
This solution enables the secure storage of authentication information on TEFIS servers; the information is stored separately and encrypted. It can be decrypted only when the user enters his/her login/password pair on the TEFIS portal, since decryption is performed using that information. The authentication information, whether SSH key, certificate, or login and password, is stored in files or in a database, but is unusable unless the user has presented his/her TEFIS credentials.
Although this solution is not effective if the experimenter's login and password are stolen, it prevents impersonation by someone who has managed to gain root privileges on the TEFIS server.
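One way to realise such a vault, sketched here under the assumption that the Python cryptography library is used, is to derive the encryption key from the user's TEFIS password with PBKDF2, so that credentials at rest cannot be decrypted without it. All names are illustrative, not part of the TEFIS design:

import base64, os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.hashes import SHA256
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def derive_key(password: str, salt: bytes) -> bytes:
    """Derive a symmetric key from the user's TEFIS password."""
    kdf = PBKDF2HMAC(algorithm=SHA256(), length=32, salt=salt, iterations=480_000)
    return base64.urlsafe_b64encode(kdf.derive(password.encode()))

def store_credential(password: str, testbed_secret: bytes):
    """Encrypt a testbed credential; only the salt and ciphertext are persisted."""
    salt = os.urandom(16)
    token = Fernet(derive_key(password, salt)).encrypt(testbed_secret)
    return salt, token  # unusable server-side without the user's password

def fetch_credential(password: str, salt: bytes, token: bytes) -> bytes:
    """Decrypt at login time, when the user supplies the TEFIS password."""
    return Fernet(derive_key(password, salt)).decrypt(token)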
2.2. Monitoring
2.2.1. Reasons for Monitoring Experiments on the TEFIS platform must be instrumented to provide the measurements necessary to prove or disprove the test hypothesis. Figure 5 presents the key elements of a monitoring process and their relations in the context of TEFIS.
In TEFIS, experimental tests are executed upon a Virtual Customer Testbed (VCT), consisting of resources across administrative domains or testbeds. These experiment-specific resources range from physical hardware resources and virtual machines to operational software components and applications. Each resource has certain features and characteristics that an experimenter believes to be of interest and to need observation. Such features of interest, and any related behaviour, are regarded as the basis for the evaluation of a test run. Monitoring data are obtained by applying some measurement procedure to the features of interest at some point in time or over some time period. Metrics are labels associated with monitoring data, denoting which feature of interest they refer to and, if appropriate, by which measurement procedure they are obtained. Constraints define bounds on the values that monitoring data should take, and also refer to metrics so that it is clear to which data they pertain. A test is therefore defined by a set of constraints and metrics which have a significant, observable effect on the resource, the resource features and any related behaviour. It is the responsibility of the testbed facility to apply the metrics and constraints to the resource features and to measure quantifiable aspects of behaviour. Finally, such measurable effects need to be made accessible to the experimenter, either as a simple set of log files containing the monitoring data or in some collated and processed form.
Figure 5: Key elements and relations of the monitoring process.
2.2.2. Requirements The design of the whole monitoring process is based on the following assumptions or prerequisites:
• It is assumed that a testbed provider will publish resource offerings along with related monitoring capabilities.
• It is assumed that an experimenter will define desired monitoring configurations by aggregation of or derivation from monitoring capabilities supplied by testbed providers.
• It is assumed that testbed facilities will measure monitoring data during the runtime of an experiment and make them available to the TEFIS platform.
Based on these assumptions, the functional requirements for offering monitoring capabilities throughout the experiment lifecycle are identified as follows.
• Monitoring configuration management: TEFIS monitoring solutions must provide management facilities that allow testbed providers and experimenters to create, view, update, and revoke monitoring configurations related to shared resource types or experiments.
• Monitoring data collection: TEFIS monitoring solutions should be able to gather monitoring data and apply predefined monitoring configurations during an experiment execution.
• Monitoring data storage: Once monitoring data have been gathered from testbeds, these data need to be stored in a secure manner for later analysis.
• Monitoring data reporting: Monitoring data should be reportable to an experimenter for analysis. An experimenter can define the desired reporting strategies, for example reporting only events of interest by comparing values with thresholds (a sketch of this follows the list). Reported data, whether raw monitoring data or filtered events, can also be presented using visualisation tools, for example those supplied by the TEFIS portal.
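As an illustration of such a reporting strategy, threshold-based event filtering over gathered samples can be expressed in a few lines of Python; the record layout is illustrative, and the metric and threshold match the VM example given later in this section:

def filter_events(samples, metric="cpu.load.avg", high=80.0):
    """Keep only samples where the metric exceeds its alert threshold."""
    return [s for s in samples if s["metric"] == metric and s["value"] > high]

samples = [
    {"metric": "cpu.load.avg", "value": 95.0, "t": 0},
    {"metric": "cpu.load.avg", "value": 42.0, "t": 1},
]
events_of_interest = filter_events(samples)  # -> only the 95.0 sample is reported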
Table 2 summarises these requirements in relation to the stages of the experiment lifecycle affected and the core services responsible, as defined within the TEFIS platform. In the context of a monitoring process, the experiment lifecycle spans three stages: the design stage, the execution stage and the analysis stage. Further details of the individual experiment stages and data models are given in Section 2.2.3.
Table 2: List of requirements with the stages of the experiment lifecycle affected and the entities responsible
Requirement | Stage(s) | Responsibility
Monitoring configuration management | Experiment design | Supervision module
Monitoring data gathering | Experiment execution | Supervision module, Connector
Monitoring data storage | Experiment execution | Supervision module, RPRS
Monitoring data reporting | Experiment execution and experiment analysis | TEFIS Portal (especially the Experimental Data Interface), RPRS
2.2.3. Data Models This section discusses conceptual models of the proposed monitoring process throughout the experiment lifecycle. The data model in Figure 6 is colour-coded: all filled shapes relate to entities which are relevant at design time, whereas framed, unfilled entities relate to execution time. Further, all orange entities, filled as well as framed, refer to resource management, whereas all blue entities relate to execution management.
[Figure 6 shows the data model as a diagram. Design-time entities: Testbed (supplied by the Testbed Provider), ResourceSpec with monitoring capabilities, Metric, TriggerEvent, Constraint, ResMonTemplate, Experiment, TestPlan, Task and TaskMonTemplate (monitoring requirements defined by the Experimenter). Execution-time entities: VCT, Testrun, Execution, Resource instances, input/output data spaces, Measurement Procedure, Monitoring Data and the monitoring data repository. Analysis-stage entity: Monitoring Report, generated from the monitoring data.]
Figure 6: Data model of experiment design and execution stages with monitoring capabilities
Experiment Design At the experiment design phase, an experimenter defines an execution requirement and a resourcing requirement. The execution requirement is represented as a testplan: a workflow consisting of a sequence of tasks that contribute to the overall test goal. Currently a task is defined as “an atomic step of an experiment test plan” [2], which means that a task can only be scheduled for execution on a single testbed. The current implementation of the user tools [1] also prohibits a user from defining a task resourced across multiple testbeds. It is worth noting that discussions are ongoing within the consortium to redefine tasks as user-focused rather than testbed-focused, because the workflow will be executed by the experimenter, who is concerned with logical steps rather than with how they are executed. The monitoring solutions proposed in this section are based on the current definition and implementation: that is, a single task can only be executed on a single testbed. The proposed monitoring solutions can be updated accordingly at some later stage. During the design phase, an experimenter also needs to specify resourcing requirements by choosing appropriate resource types. Again, one task can only be run on resources from the same testbed.
Here we propose two types of monitoring configuration within the experiment design stage: the resource monitoring template, defined by testbed providers, and the task monitoring template, defined by experimenters.
As shown in Figure 6, the resource monitoring templates are provided by a testbed provider and are attached to a particular resource type or resource specification. A resource monitoring template specifies monitoring capabilities, including the set of features of interest, presented as metric definitions, and events of interest, presented as triggering events. As the examples given in Table 3 and Table 4 show, a VM resource monitoring configuration defines two metrics, the average CPU load and the average memory load, whose data values are to be gathered every 100 milliseconds, and three trigger events that send alerts to the experimenter if the average CPU load is either too high (above 80%) or too low (below 30%) continuously for the previous three minutes, or if memory usage is abnormally high, i.e. continuously over 90% for the previous five minutes.
Table 3: Example metric and constraint definitions
Resource Type | Metric | Unit | Data Type | Constraint (Delay)
VM | cpu.load.avg | % | float | 100 milliseconds
VM | mem.load.avg | % | float | 100 milliseconds
Table 4: Example trigger event and constraint definitions
Resource Type | Trigger Event | Low | High | Alert Period
VM | The VM is overloaded | - | 80% | 3 minutes
VM | The VM is under-utilised | 30% | - | 3 minutes
VM | The memory usage is abnormally high | - | 90% | 5 minutes
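To make this concrete, the two tables above could be serialised into a resource monitoring template along the following lines; the field names are illustrative, as the concrete ResMonTemplate format is not fixed by this document:

# Hypothetical serialisation of the VM resource monitoring template,
# combining the metrics of Table 3 and the trigger events of Table 4.
vm_res_mon_template = {
    "resource_type": "VM",
    "metrics": [
        {"name": "cpu.load.avg", "unit": "%", "type": "float", "delay_ms": 100},
        {"name": "mem.load.avg", "unit": "%", "type": "float", "delay_ms": 100},
    ],
    "trigger_events": [
        {"name": "VM overloaded", "metric": "cpu.load.avg",
         "high": 80.0, "alert_period_min": 3},
        {"name": "VM under-utilised", "metric": "cpu.load.avg",
         "low": 30.0, "alert_period_min": 3},
        {"name": "Memory abnormally high", "metric": "mem.load.avg",
         "high": 90.0, "alert_period_min": 5},
    ],
}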
Based on the resource monitoring templates, experimenters can define monitoring configurations for their tasks. As illustrated in Figure 6, each task has a single task monitoring template that aggregates the full set, or a subset, of the metrics, trigger events and constraints defined in the resource monitoring templates of the resources on which the task is to be executed. A task monitoring template can also define custom metrics derivable from those defined within the resource monitoring templates, as well as custom triggering events and constraints. It is worth noting that a task monitoring template can only aggregate metrics, triggering events and constraints from the resource monitoring templates of the same testbed, since a task is an atomic schedulable entity in TEFIS. A task may have more than one monitoring template for different monitoring purposes, but only one monitoring template is active at a time. These active task monitoring templates become the monitoring configuration of a test plan.
Experiment Execution After defining execution and resourcing requirements, a testplan can be submitted to start a testrun, i.e. a particular run of a testplan. In order to acquire testbed resources for a testrun, a resource procurement process is triggered to book resource instances according to the resourcing requirements specified by the experimenter at design time. The output of the resource procurement process is the VCT: an agreed resource booking consisting of resource instances and their connections. After that, a testrun can be orchestrated and scheduled to execute upon the VCT. A testrun is also allocated a space for input and output data. For each of the tasks defined within a testplan there is a corresponding execution instance, which takes the input data specified at the design stage from the input space and writes any output data to the allocated output space.
During the runtime of an execution instance, measurement procedures take effect and generate monitoring data, metric values and triggering events according to the constraints specified within the task monitoring configurations. These monitoring data are gathered and cached in a centralised monitoring data repository.
Experiment Analysis On completion of an execution instance, the output data for that instance, along with raw monitoring data values and/or registered events of interest, are saved to the relevant output space. These monitoring data can be viewed and used for analysis by an experimenter to evaluate the testrun. An experimenter may also download monitoring data through the portal and import them into third-party analysis tools.
2.2.4. Monitoring Process The supervision module is the key enabling component for monitoring processes in TEFIS. The main design objectives of the supervision module are to:
• Provide resource and task monitoring configuration management facilities for experimenters and testbed providers at design phase;
• Enable the monitoring process at runtime, including applying task monitoring configurations when an execution starts, gathering registered features and events of interest and saving them at runtime, and reporting monitoring data values to the experimenter on completion of an execution instance.
Monitoring Configuration Management In order to enable monitoring while running a test, an experimenter needs to define task monitoring configurations for a testplan at the experiment design phase. Figure 7 illustrates the flow for the definition of task monitoring configurations, and highlights the fact that their definition depends on the availability of resource monitoring configurations for the resource types available on the TEFIS platform.
[Figure 7 shows a sequence diagram between the Testbed Provider, Experimenter, Portal, Experiment Manager and Supervision Module: the Testbed Provider uploads a resource monitoring template ({ResMonTemplate}) for a resource spec via HTTP RESTful POST; the Experimenter then views the monitoring configurations of each resource spec per defined task (HTTP RESTful GET), creates a monitoring configuration space for the testrun, customises the ResMonTemplate by selecting/deselecting metrics and trigger events and defining constraints, and saves the result as a new TaskMonTemplate (HTTP RESTful POST), looping over each resource spec per defined task.]
Figure 7: Monitoring configuration management at experiment design phase
Both testbed providers and experimenters can create resource/task monitoring configurations through the TEFIS portal. A testbed provider should be able to register a resource type along with its monitoring configurations as an offering of monitoring capabilities. An experimenter can then view the resource monitoring capabilities of the resource types to which a task is to be contracted, and define a task monitoring template by aggregating monitoring configurations for the target resource types. A task may have multiple task monitoring templates associated with it for different evaluation purposes, but only one template is active at any one time.
Therefore the supervision module should provide manageability interfaces for monitoring configuration management, in particular to create, update, view, list and delete resource/task monitoring templates. The monitoring configuration management features can be implemented in many ways. In the example workflow illustrated in Figure 7, the supervision module exposes RESTful manageability interfaces which operate on a conceptual repository of resource/task monitoring templates. Security is an important non-functional feature, yet to be implemented, to protect task monitoring configurations from access by experimenters other than the owner. A sketch of such an interface follows.
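As a sketch, such RESTful manageability interfaces might be organised as follows, mirroring the interactions of Figure 7; the paths are illustrative and not a finalised API:

POST   /rest/resmon/{resSpecName}          upload a resource monitoring template
GET    /rest/resmon/{resSpecName}          view the monitoring capabilities of a resource spec
POST   /rest/taskmon/{testrunId}           create a monitoring configuration space for a testrun
POST   /rest/taskmon/{testrunId}/{taskId}  save a new task monitoring template
GET    /rest/taskmon/{testrunId}/{taskId}  list/view task monitoring templates
DELETE /rest/taskmon/{testrunId}/{taskId}  delete a task monitoring template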
Monitoring processing at experiment runtime The diagram in Figure 8 gives an idea of the main interactions between the core services components and the other TEFIS components involved in the monitoring process at experiment execution runtime.
When a testrun starts, the workflow scheduler orchestrates and spawns execution instances, on a per-task basis, on the procured resource instances across testbeds. During execution, the TEFIS Experiment and Workflow Scheduler needs to determine when to start the monitoring process. For example, a monitoring process can be started immediately when scheduling an execution instance if the execution is short-lived and has no preceding execution instance, while a monitoring process can be delayed while waiting for the completion of a preceding execution instance. Once a monitoring process is triggered, it is the responsibility of the testbed monitoring facilities to measure monitoring data. The main responsibility of the Supervision Module is then to apply task-specific monitoring configurations and gather monitoring data across testbeds through the generic connector APIs. Runtime monitoring data and triggering events are cached by the Supervision Module.
On completion of a testrun, the monitoring process needs to be revoked. The workflow scheduler keeps monitoring the state of each execution instance. When the “completed” state is observed, it contacts the supervision module to stop the monitoring process. The supervision module then makes the monitoring data available to the experimenter, reporting either the full set of raw monitoring data or only the registered events of interest to the execution-specific output spaces, depending on the reporting strategy defined in the active task monitoring template.
[Figure 8 shows a sequence diagram between the Experiment Manager, Workflow Scheduler, Supervision Module, RPRS and a testbed connector (Connector-TBx/TBx): an execution is submitted for a testplan, workspaces are created in the RPRS for the testrun-id and each execution-id, the Workflow Scheduler starts monitoring for each execution (applying the task monitoring template through the connector), optionally polls monitoring data in a loop while the execution runs, checks the execution state, and on completion stops monitoring and sends the monitoring data to the output space, looping for each task.]
Figure 8: Monitoring process and interactions at the experiment runtime phase
2.2.5. Monitoring for the eTravel Use Case The following table summarises the eTravel use case step by step, together with the associated monitoring activity.
Table 5: Proposed monitoring solutions for e-Travel use case
Step Description Comments Monitoring
1 Software developer creates a TEFIS account
Various TEFIS components are involved with the initial creation of a user account.
The Supervision Module collaborates with the other TEFIS components to associate the user account credentials with appropriate user credentials in the monitoring sub-‐system.
2 Specification of the experiment by the user (portal); there are two steps:
1. non-regression tests using ETICS,
2. stress test using PacaGrid and/or PlanetLab.
Description of the desired SLA and stress factors (number of resources and input load – bigger log files or several within the same time frame)
A new experiment is created within the TEFIS portal, using the Experiment Manager. The Experiment Manager creates the necessary structures on behalf of the experimenter as part of this process.
In conjunction with any SLA-management facility within TEFIS, the Supervision Module maintains a reference to the appropriate SLA as directed by the Experiment Manager. [10]
3 Store experiment description in TEFIS portal
[10] This is the platform monitoring scenario for SLA terms agreed between experimenters and the TEFIS platform provider. Further details are given in Section 2.2.6.
4 Initial configuration of ETICS is done via TEFIS, the rest using the WS-API; reports are fed back to the portal
These steps are related to TEFIS workflow definition, testbed configuration and experiment submission to the TEFIS platform. We first need to define the set of resources we would like to use among the available testbeds and the order/conditions in which we want to use them, and to configure the testbeds directly from our TEFIS workspace:
1. We first want to use ETICS to build the application directly from the SVN repository and run one or several test suites. We fill in the configuration of ETICS. The configuration contains all the dependencies needed to build the application; we either specify where to get them or upload them. We also fill in the Ant command to launch the build/test processes and the SVN command to check out the project. During this step, only Java JARs/application dependencies are needed; no native applications are required.
2. Depending on the result of the build and test suite processes, we deploy the application on the PACA Grid testbed. The configuration of the application is very dynamic. As depicted in D2.1.1, the e-Travel application is a multi-tier application which involves several separate application containers and a database that exchange messages directly, i.e. on well-known URLs, ports, etc. Hence we need to identify the configuration files which need
As resources are selected to support the testrun, any monitoring capabilities registered for the resources are presented to the experimenter. The experimenter thus defines the task monitoring configurations on a per-test-step basis, in association with the resources he/she has defined. The configuration would typically include the metrics to be collected and any local processing to be applied to them.
5 Provide application + software and non-regression tests to ETICS (configuration files – SVN repository, dependencies...)
N/A
6 Prepare a workflow to conduct the test (tasks and number of nodes)
7 Send a job, which will be executed when resources become available
to be modified to make the deployment succeed, and we need to be able to insert the relevant pieces of information once they become available (such as the machine hosting the given services).
Two points of view for this feature:
• TEFIS implements its own pattern matching engine with its own properties and characteristics; this implies that a separate branch of the SVN repository needs to be created and synchronized so that it can be directly checked out and run by TEFIS.
• TEFIS uses a plug-in mechanism to provide users with the capability to implement their own pattern matching mechanism to generate the appropriate configuration files once the deployment of the resources has been done.
The components which need a dynamic configuration are:
• an Intalio server, hosted in a Tomcat container
• the Parallel Services engine, also hosted in a Tomcat container
• a MySQL database (we could also use a native Java/XML database if a MySQL database cannot be deployed on nodes)
• and several worker nodes (ProActive nodes) directly connected to the MySQL database.
The supervision module then creates a monitoring workspace for caching incoming monitoring data from the execution instance. Once an execution instance (the build job) starts, the workflow scheduler instructs the supervision module to start a monitoring process, which continues to gather monitoring data and events of interest from ETICS.
8 Get the source code package from ETICS if not currently available locally
Steps 8-12 are TEFIS-internal only; we do not want to configure things manually while the application is deploying or already deployed. All required software, configuration files, etc. need to be dynamically modified and provisioned on the target nodes. In detail this means:
1. ETICS build
If the build succeeds, book the required nodes on PACA Grid; identify which nodes will host which services, in order to run the pattern matching mechanism and update the content of the configuration files. Afterwards, upload the relevant content (software + configuration) to each node and start the services (likely Intalio and Parallel Services on separate nodes). At this point the e-Travel deployment ends and the entire application is finally started. At the end of the test, everything is cleaned up.
The workflow scheduler stops the monitoring process, and the Supervision module will report monitoring data to the output space within the RPRS.
9 Select PlanetLab and PACAGrid nodes
As above, the Supervision Module creates monitoring workspaces for the task to be scheduled on PlanetLab and PACAGrid. Active task monitoring templates will be applied.
10 Prepare virtual machines (install required software: database, libraries, and so forth)
11 Deploy the code to the PlanetLab and PacaGRID nodes
12 Execute the tests
The Workflow Scheduler instructs the Supervision Module to start the monitoring process once it determines executions have started. The Supervision Module will keep gathering and caching monitoring data and triggering events from PACAGrid and PlanetLab by applying constraints defined within task monitoring templates.
13 Collect the data manually [11] or from CoMon (network monitoring service – load, client ids, ...)
From steps 13 to 16 we are notified that some metrics are available. Depending on the results, we can decide interactively to run step 2 of the experiment again (only the PACA Grid deployment) or to exit the experiment. If we want to run the application again, the experiment flow goes back to steps 4 to 7, where we can change the input of the experiment and refine the collected values.
14 The user is notified of the end of the experiment
When execution completes, the workflow scheduler stops the monitoring process through the supervision module’s APIs. Monitoring data are transferred by the supervision module to the output space within RPRS.
15 The experimenter can decide to re-run the experiment depending on the data collected. He can launch the test again and decide to include new worker nodes in the experiment, change the experiment input, and refine the data set to be collected. At this point, he can also exit the experiment.
As above. New monitoring configuration settings would have to be generated via the Experiment Manager during resource definition, if required.
[11] It may be possible for the TEFIS monitoring agent to contact CoMon directly to retrieve the monitoring output, such that no manual process will be required.
16 The experiment restarts from step 7.
As above. Note that the data for individual testruns would be retained locally or remotely until the experimenter releases them or transfers them elsewhere. The TEFIS platform would automatically clean up such data in accordance with the terms of any contract between the platform and the experimenter.
2.2.6. Types of Monitoring In TEFIS, we have identified three types of monitoring scenario:
Resource monitoring The proposed monitoring solutions discussed so far focus on the resource monitoring scenario, where an experimenter utilises the native monitoring capabilities supplied by the participating testbeds in TEFIS. An experimenter can only define custom constraints, events of interest and custom metrics by deriving them from the existing monitoring capabilities supported by the testbed resources.
Application instrumentation and monitoring In addition to the underlying testbed resources and their native monitoring capabilities, we must consider the application or test itself. The same conceptual operational architecture is assumed: the application will have been developed with suitable monitoring capabilities and will use some form of local storage to keep a record or log file of the application's output, either in operational terms or as the result of user interaction. It is likely that TEFIS will need to offer some guidance or templates for experimenters to provide suitable instrumentation, which may then be integrated with TIDS in a similar way to the testbed monitoring instruments. For now, it is important solely to be aware of the assumptions that both testbeds and applications will be instrumented to provide monitoring data, and that there will be some integration with persistent storage using a Testbed Infrastructure Data Service (TIDS).
TEFIS platform monitoring As a service provider, the TEFIS platform aims to provide a virtual experiment environment with guaranteed Quality of Service (QoS). In the TEFIS platform monitoring scenario, a set of Key Performance Indicators (KPIs) will be defined to reflect overall TEFIS system performance. These KPIs form the basis of the service-level agreements (SLAs) contracted with the experimenter. A number of recommended KPI metrics are discussed in [3]. A continuous monitoring process is therefore required to track how the overall TEFIS system performs and to provide information to TEFIS platform administrators for performance improvement. As the contracting party, the experimenter may also need to agree on service obligations in addition to the service commitments offered by the TEFIS platform. Obligations such as maximum usage and number of testruns per day are useful for usage policing and accounting purposes. Therefore the TEFIS platform monitoring process also needs to measure the usage metrics of experimenters' activities.
For now, we should focus on the resource monitoring scenario, although efforts continue towards enabling TEFIS platform monitoring.
Living Lab resource monitoring No monitoring of the usage of Living Lab assets is currently needed. This may change in the future depending on the implementation of a service-based business model, but this is not in place for now.
2.3. Connector Improvements This section details the challenges facing the connector specification and implementation that will be addressed during the second year of the project. In particular we plan to focus on two areas:
• the realization of a backend web interface for connectors to allow connector administrators to control their functions;
• improvements in the submission of executions to testbeds through the connectors, by realizing a better exchange of information between the connectors and the TEFIS Portal.
These improvements aim to make it easier for testbed owners to implement, run and manage the connector for their testbed.
2.3.1. Connector’s Backend Interface
2.3.1.1. Motivations The development of the connectors for the TEFIS testbeds during the first year of the project highlighted that, for some connectors, it is difficult to implement some of the functionalities expected by the connector interface, because they are unable to interact programmatically with the testbed, mainly because the latter does not support the automatic handling of some functionalities. This happens mainly in the areas of data management, identity management and execution management. For instance, it may be impossible to automatically trigger the execution of an experiment on a testbed because the testbed has been designed to be accessed and run manually by human agents. An exemplary case is the BOTNIA Living Lab testbed, where most activities are carried out manually, through interaction between humans and the exchange of documents.
In such cases, since the connector cannot communicate directly with the testbed, the implementation of the testbed connector should be based not on interaction with the testbed but on a back-end interface that simulates testbed interaction. The BOTNIA connector design and implementation adopts precisely this approach.
Such an implementation would not make any assumptions about the underlying testbed and would potentially work in the same way for any testbed. Providing such an implementation publicly within the project would give significant support to many other testbed administrators starting to connect to the TEFIS system. Moreover, a backend web interface could also be useful for those connectors that have a more standard implementation of the TCI, mainly for debugging and testing purposes.
2.3.2. Backend Interface Functionality For the design and implementation of the backend interface, the Botnia Connector will be taken as the starting point, both for collecting feature requirements and for the implementation. Through the (web-based) backend interface, the people at Botnia (who effectively are the testbed) can manually update the testbed status in order to let TEFIS know the status of resources and executions on the testbed. In the opposite direction, TEFIS calls the connector to request actions, and the Botnia team is notified by the backend interface (e.g. a new experiment has been requested, or the execution of a given task has been submitted and should be started).
For instance, the two connector operations execute and get_execution_status allow TEFIS to submit a task for execution on the testbed and to check its status. In the case of Botnia, the task consists of manual tests carried out by humans; consequently, it can neither be started programmatically by the connector nor have its status checked programmatically. These operations will therefore be implemented by the connector as follows: execute stores all details of the submission in a backend database. The Botnia team can then, through the backend web interface, see from the database which executions have been requested and “run” them. The web interface also allows users to update the execution status (stored in the database). The get_execution_status operation reads the status from the database. The same mechanism can be (and will be) used to implement other operations in the connector as well. A sketch of this pattern follows.
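A minimal sketch of this pattern in Python, assuming a local SQLite store; the table layout and status values are illustrative and not the actual Botnia connector schema:

import json, sqlite3, uuid

db = sqlite3.connect("botnia_backend.db")
db.execute("""CREATE TABLE IF NOT EXISTS executions
              (id TEXT PRIMARY KEY, details TEXT, status TEXT)""")

def execute(resources, execution_context):
    """Connector operation: record the submission for the Botnia team."""
    execution_id = str(uuid.uuid4())
    details = json.dumps({"resources": resources, "context": execution_context})
    db.execute("INSERT INTO executions VALUES (?, ?, 'SUBMITTED')",
               (execution_id, details))
    db.commit()
    return execution_id

def get_execution_status(execution_id):
    """Connector operation: read back the status last set via the web interface."""
    row = db.execute("SELECT status FROM executions WHERE id = ?",
                     (execution_id,)).fetchone()
    return row[0] if row else "UNKNOWN"

# The backend web interface would update the row, e.g. to 'RUNNING' or
# 'COMPLETED', as the Botnia team carries out the manual tests.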
In the example above, a backend web interface is used by the Botnia Connector to implement the connector's execution API. In the generic backend interface we plan to deliver for connectors, the same will be done for the following connector functions:
• add_resource(), get_resource(), set_configuration(), list_resource(), delete_resource(): these methods implement the resource management part of the connector. The backend implementation will store all information about resources created through the connector in a database that is readable and updatable through the backend web interface, giving users the possibility of updating all information about resources in order to reflect the status of the testbed;
• execute() and get_execution_status(): the implementation will be the one presented in the Botnia example above: executions are stored in a database and their status is read from the database. The database can be manually updated using the backend web interface;
• get_input_file(): for a given execution, the web interface will present the list of input files and the user will be able to download them. Internally, it will interact with the TEFIS data management system;
• put_output_file(): for a given execution, the web interface will present a form where the user can upload a file that will be stored in the TEFIS data management system in the output folder.
Depending on the outcomes of the identity management study, connectors may provide a set of user and authorization management operations. In that case a backend web interface will also be provided for those operations. If this backend interface proves to be useful, mature and functional, it will also be included in the Connector Framework as a generic implementation that developers can customize and use as a starting point for their own connectors.
2.3.3. Task templates
Execution parameters Looking at the definition of execution in each of the testbeds, we can see that the meaning of execution varies across testbeds. For example, on PACAGrid execution means running a script (or Java class) in a virtual machine; for ETICS it means running the compilation of a software project. In general, each type of execution is characterized by a different set of parameters that the user (in our case the TEFIS experimenter) must provide to the testbed in order to execute the experiment appropriately. In the case of PACAGrid, for instance, the testbed needs the script to be executed, while ETICS needs the name of the software project to be compiled.
The parameters required by each execution are set by the experimenter in the TEFIS Portal (namely in the Experiment Manager module) and then passed by the latter to the different testbeds to run the experiment. Technically, the parameters are passed from the TEFIS Portal to the testbed when the Portal calls the execute operation of the connector. The execute operation has the following signature:
execute(resources: array, execution_context: dictionary):execution_id
The connector expects the execution_context parameter to be filled with the list of all required parameters for the execution, in the form of a dictionary where the keys are the parameter names and the values are the parameter values. An illustrative call is sketched below.
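For illustration, the execution_context for a PACAGrid-style execution might be filled as follows; the parameter names are hypothetical, since each connector defines its own expected set:

# Hypothetical parameter set for a PACAGrid-style execution.
execution_context = {
    "script_url": "http://experimenter.example.org/run_stress_test.sh",
    "vm_image": "debian-6-java",
    "worker_nodes": 8,
}
# The TEFIS Portal would then call, on the connector:
#   execution_id = execute(resources=["pacagrid-vm"], execution_context=execution_context)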
The issue that has yet to be resolved in the current implementation is how to make the TEFIS Portal aware of which set of execution parameters needs to be passed to a given connector when requesting an execution. At present, the experimenter must know exactly (perhaps by asking the connector administrator) which parameters he/she must define before starting the experiment.
Task templates model
In the current implementation of TEFIS, the experimenter is not helped in any way by the system when setting execution parameters. (S)he has to know in advance which parameters are required by a testbed to run a given execution and set them in the TEFIS Experiment Manager module of the TEFIS Portal, specifying the name and the value of each new parameter (as depicted by the two screenshots in Figure 9 below).

Figure 9: Entering task parameters in the TEFIS Experiment Manager
The solution to this issue designed and expected in the next TEFIS Portal release is to allow connectors to publish this information (the parameters required by the execution). This information can be read by the Experiment Manager and used to present an appropriate view to the experimenter (for instance, a web form where a value must be inserted for each of the expected parameters).
We call a task template a data structure that defines the name and the type of parameters needed by a given execution. Furthermore, a task template also defines resources that the experimenter must specify to support the execution of that task. In general a connector could define multiple task templates meaning that the connector will accept different sets of parameters that could be translated into different types of executions on the testbed. It is up to the connector developers, depending on the capabilities of the testbed, at design time to define the task templates that the connector will support.
In order to make the connector capable of publishing task templates, some changes and extensions have had to be made to the connector interface (TEFIS Connector Interface – TCI). In particular:
• a new operation, get_task_templates(), will be added to publish the task templates supported by the connector. It takes no parameters and returns all the task templates supported by the connector in the format specified in the section Task Template XML description below.
• the execute() operation will accept two more parameters: task_template and execution_parameters. The first will be a reference to one of the task templates published by the connector, while the second will be filled with the parameters specified by the selected task template.
The new operation get_task_templates() will be available as part of the connector REST interface, by issuing a GET request to the following URL:
http://connector-host.org/rest/execution/get-task-templates
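For illustration, retrieving the templates over this endpoint could look as follows, using the third-party requests library; the host name is the placeholder used above, and the connector client call shown in the comment is hypothetical.

import requests

# Sketch: fetch the task templates published by a connector over its REST
# interface; the response is an XML <task-templates> document (parsed in a
# later sketch).
resp = requests.get("http://connector-host.org/rest/execution/get-task-templates")
templates_xml = resp.text

# With the extended TCI, an execution request would then name the chosen
# template and supply values for its parameters (client object hypothetical):
# execution_id = connector.execute(resources, "ttempl1", {"pname1": "x", "pname2": 42})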
Task Template XML description
The get_task_templates() operation returns an XML document describing the templates supported by the connector. The format of the returned document is:
<task-templates connector-id="http://connector-host.org:80/">
  <task-template name="ttempl1">
    …
  </task-template>
  <task-template name="ttempl2">
    …
  </task-template>
  …
</task-templates>
Each <task-template> element has the following structure:
<task-template name="ttempl1">
  <resource type="/resource-type"/>
  <resource type="/resource-type"/>
  <parameter name="pname1" type="string" optional="yes"/>
  <parameter name="pname2" type="int" optional="no"/>
  <parameter name="pname3" type="boolean" optional="yes"/>
  …
</task-template>
The full DTD for task template XML files (using the same element names as the examples above) is:

<!ELEMENT task-templates ( task-template+ ) >
<!ATTLIST task-templates connector-id CDATA #REQUIRED >
<!ELEMENT task-template ( resource+, parameter* ) >
<!ATTLIST task-template name CDATA #REQUIRED >
<!ELEMENT resource EMPTY >
<!ATTLIST resource type CDATA #REQUIRED >
<!ELEMENT parameter EMPTY >
<!ATTLIST parameter name CDATA #REQUIRED >
<!ATTLIST parameter type CDATA #REQUIRED >
<!ATTLIST parameter optional (yes|no) #REQUIRED >
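As a companion sketch, the hypothetical helper below shows how the Experiment Manager could parse such a document into plain data structures from which a web form can be generated; it assumes the element names defined above.

import xml.etree.ElementTree as ET

# Sketch: turn a task-templates document (as defined by the DTD above) into
# Python structures listing, per template, its resources and parameters.
def parse_task_templates(xml_text):
    root = ET.fromstring(xml_text)  # <task-templates connector-id="...">
    templates = {}
    for tt in root.findall("task-template"):
        templates[tt.get("name")] = {
            "resources": [r.get("type") for r in tt.findall("resource")],
            "parameters": [
                {
                    "name": p.get("name"),
                    "type": p.get("type"),
                    "optional": p.get("optional") == "yes",
                }
                for p in tt.findall("parameter")
            ],
        }
    return templates

example = """<task-templates connector-id="http://connector-host.org:80/">
  <task-template name="ttempl1">
    <resource type="/resource-type"/>
    <parameter name="pname1" type="string" optional="yes"/>
    <parameter name="pname2" type="int" optional="no"/>
  </task-template>
</task-templates>"""
print(parse_task_templates(example))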
2.4. Living lab integration
2.4.1. Typical flow
A typical workflow of a Living Lab experiment can be described as follows:
Experimenter creates a TEFIS user-account
Experimenter task: Experimenter registers with the TEFIS platform to gain access to facilities and support for the experimental lifecycle.
TEFIS components involved:
• TEFIS portal
• TEFIS Core services
Planning
Experimenter task: Experimenter makes a request for the Botnia Living Lab through the TEFIS portal.
TEFIS components involved:
• TEFIS portal
• TEFIS Core services
• TEFIS Testbed Connector(Botnia)
Configuration
Experimenter task: Experimenter specifies the resources to be used and designs the user-evaluation tasks.
TEFIS components involved:
• TEFIS portal
• TEFIS Core services
• TEFIS Testbed Connector(Botnia)
• TEFIS Experiment Data manager
Execution
Experimenter task: User evaluation tasks are performed using Botnia Living Lab resources. The experimenter follows up on the status of the Living Lab tasks.
TEFIS components involved:
• TEFIS portal
• TEFIS Testbed Connector(Botnia)
• TEFIS Experiment Data manager
Reporting
Experimenter task: User evaluation is finalized and the experimenter gets access to the results.
TEFIS components involved:
• TEFIS portal
• TEFIS Testbed Connector(Botnia)
• TEFIS Experiment Data manager
Knowledge sharing
Experimenter task: Experimenter wants to share experience from this experiment with other experimenters.
TEFIS components involved:
• TEFIS portal
• TEFIS Experiment Data manager
2.4.2. File sharing platform
Data associated with experiments run at Botnia tend to be stored as files. Providing access to these files for the experimenter is an issue which needs to be considered. We could, for instance, investigate the possibility of file sharing, with files stored in one place (the Drupal™ platform at Botnia) and made accessible to experimenters via the TEFIS portal. This would avoid storing files in several instances while still keeping control of access rights in the hands of the Living Lab staff. This particular solution is not implemented yet, but how to improve the Drupal testing portal of Botnia and give experimenters a single access point for results is currently under discussion.
2.4.3. Communication channel
As a communication channel between the experimenter and Living Lab staff, we have initially implemented a simple e-mail alert system. In the long term there may be more scalable solutions, such as the incident management systems used by help desks, where all interaction between test service providers and experimenters is managed.
2.4.4. Connector features
The Botnia connector initially has to manage three different features:
Resource Management
In the Botnia-specific case, three kinds of resources exist. The resource types to be booked from Botnia are:
• User Database: a database of end-users for Future Internet service development and evaluation;
• User Evaluation Expertise: expertise for human-centric design processes;
• User Involvement Methodology: the Living Lab methodology for service innovation processes through user involvement in all phases, from need finding to pre-market launch.
Resource management is based on a database created in the Living Lab. This database keeps all the information related to the Living Lab resources and any instance booking or execution query.
The Resource Management Database has two tables, one for managing the resources and the other one for managing the number of instances.
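A purely illustrative sketch of such a schema is given below; the actual table and column names used by the Botnia Living Lab database are not specified in this document.

import sqlite3

# Hypothetical two-table schema: one table for the resources themselves,
# one for the booked instances of each resource.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE resource (
    resource_id INTEGER PRIMARY KEY,
    name        TEXT NOT NULL,   -- e.g. 'User Database'
    description TEXT
);
CREATE TABLE resource_instance (
    instance_id   INTEGER PRIMARY KEY,
    resource_id   INTEGER REFERENCES resource(resource_id),
    experiment_id TEXT,           -- TEFIS experiment the booking belongs to
    state         TEXT            -- e.g. 'booked', 'executing', 'done'
);
""")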
Resource Execution
Executing a resource on the Botnia Living Lab currently amounts to warning the people in charge of the testbed, who then organize the execution of the previously created and booked resource; this notification is sent by e-mail.

To support this, a database table has been created in which the execution state is saved, along with the corresponding folders holding the data of that execution for a given experiment.
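The sketch below illustrates the kind of e-mail notification described here; the SMTP host, addresses and wording are placeholders rather than the actual Botnia configuration.

import smtplib
from email.message import EmailMessage

# Sketch: warn the people in charge of the testbed that a booked resource
# should now be executed for a given experiment.
def notify_testbed_staff(experiment_id, resource_name):
    msg = EmailMessage()
    msg["Subject"] = f"TEFIS execution request: {resource_name}"
    msg["From"] = "tefis-connector@example.org"      # placeholder sender
    msg["To"] = "botnia-staff@example.org"           # placeholder recipient
    msg.set_content(
        f"Experiment {experiment_id} has requested execution of the booked "
        f"resource '{resource_name}'. Please organise the user-evaluation tasks."
    )
    with smtplib.SMTP("smtp.example.org") as smtp:   # placeholder SMTP host
        smtp.send_message(msg)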
Data Management
For Living Lab supported experiments, most of the data generated during the experiment will remain in the data store of the Living Lab environment; there will be no direct access by TEFIS or an experimenter, except for metadata describing an experiment. This metadata should be generated by the TEFIS Experiment Manager when experimenters are planning their experiments. The same data can then be exported to the Botnia Drupal portal when Botnia Living Lab staff set up a new test there.
2.5. Parallel task execution
The TEFIS platform is about enabling complex testing which requires resources across multiple, heterogeneous test facilities. The implicit assumption is that TEFIS must be able to support different sections of an individual test at different facilities. This in turn suggests that individual testruns can be subdivided into steps, or tasks; there therefore needs to be a way to map such tasks to the appropriate test resources12.
12 How to model these different aspects of tasks – on the one hand, an individual step within a testrun workflow, and on the other, the execution of that step on a test resource – is taken up in Chapter 4 of this deliverable.
Figure 10: Mapping TEFIS Tests to Test Facilities
The actual instance of an experiment is called a testrun (Figure 10; discussed further in Chapter 4 of this deliverable), and comprises a number of individual tasks as shown. A TEFIS experimenter is guided through the definition of individual tasks by the TEFIS Experiment Manager. Currently, this involves identifying the resources. The issue for TEFIS is how to manage resource requests when:
1. A single task will be run on multiple resources;
2. Multiple tasks will be run on a single resource; and
3. One task is dependent on the output from one or more other tasks.
In the following subsections, the different generic cases are discussed in relation to the initial TEFIS test cases.
2.5.1. Tasks running independently on separate testbeds
Consider the eTravel use case as shown in Figure 11. There are a number of different tasks to be executed: first, the application logs must be queried, and the appropriate modules optimised and compiled; this is done on ETICS. In addition, the eTravel use case involves testing the performance of the application. This can be done on PACA Grid from a purely performance point of view; at the same time, it may be performed on PlanetLab if the focus is more on networking and network performance or characterisation.
Figure 11: TEFIS Tasks running independently
Logically, there is no reason why all three tasks (compilation and optimisation, performance testing, and network operation) should not be tested independently on the different testing facilities available through TEFIS. As such, experiment definition is relatively straightforward: the experimenter defines three generic tasks and associates each with a separate test facility.
There is a complication, however. The operational profiling runs on PACA Grid and PlanetLab are indeed independent, in the sense that one does not depend on the other: the same executables can be used in each case, and different results are produced by each, since the goal of the task differs. However, the executables used for the operational profiling tasks were generated during the ETICS phase, and must therefore be available before profiling can start.
2.5.2. Tasks with dependencies
The situation is thus more akin to that shown in Figure 12. It is clear that the output of the ETICS compilation and optimisation task (the executables generated as a result of interrogating the application logs and combining appropriate SOA service components) is required as input to run both of the operational profiling tasks, which are themselves independent of each other.
Figure 12: Tasks with Dependencies
For TEFIS, the front-end and core service components involved in the definition and execution of the workplan for a given testrun need to be able to capture and act on these temporal dependencies: task sequencing is important. In addition, TEFIS must be able to provide some mechanism for transferring output from one task to another; in this case, the executables are passed from ETICS to PACA Grid and/or PlanetLab. Currently, the job.xml file created by the Experiment Manager allows these sorts of dependency to be captured. In addition, the TEFIS Data Services provide explicit support for input and output data for each individual task or workplan step. TEFIS can therefore cater both for sequencing dependencies and for data transfer dependencies.
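Since the job.xml format itself is specified elsewhere, the generic sketch below merely illustrates how a scheduler could derive a valid execution order from such dependencies; the task names refer to the eTravel example, where both profiling tasks need the executables produced by ETICS.

from graphlib import TopologicalSorter  # Python 3.9+

# Each task maps to the set of tasks it depends on.
dependencies = {
    "etics_compile": set(),                    # no prerequisites
    "pacagrid_performance": {"etics_compile"},
    "planetlab_network": {"etics_compile"},
}

order = list(TopologicalSorter(dependencies).static_order())
print(order)  # e.g. ['etics_compile', 'pacagrid_performance', 'planetlab_network']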
The issue here, though, is whether the operational profiling tasks (performance testing on PACA Grid and network operability on PlanetLab) are regarded as individual and separate tasks, or whether they should be seen as a single task with two separate and independent instances. This needs some further consideration12.
2.5.3. Tasks running in parallel with dependencies
Figure 13: Parallel and mutually dependent tasks
A different set of requirements occurs for the IMS multi-media services testcase. As shown in Figure 13, the application itself will be run on the SQS testbed13. Once running and available, users from the BOTNIA Living Lab environment can begin to interact with the application, providing input to specific application steps and content for the multi-media side of the service, as well as requesting that content be downloaded to them; the living lab users are exercising the features and functions of the services offered in the IMS application.

In consequence, there are complex dependencies between the tasks running on the two testbeds. The application must be running on the SQS testbed before people from the BOTNIA Living Lab can use it; and they need to interact with the application running on SQS, otherwise it will not generate any results. Such operation has implications across the TEFIS platform, and needs validation for the TEFIS components responsible for test planning and definition, as well as data management.

13 For the purposes of this discussion, only the service evaluation part of the testcase is presented.
3. Use case integration
3.1. eTravel: current status
The eTravel use case is based on a commercial application which operates in a large-scale SOA environment: different services are loosely integrated on demand as the customer's travel enquiries and bookings proceed. As such, the application requires database, financial and other tools to run, the combination of those services being determined at runtime in accordance with the behaviour of the customer. During operation, the application generates very large log files. Should an error or fault occur, these log files are interrogated as part of defect resolution; additionally, during normal operation, the log files may be reviewed in an attempt to predict behaviours and introduce appropriate fault-avoiding procedures, as well as to introduce new and modified services.
The on-going maintenance and improvement of such a service therefore requires significant test resources across different test environments, targeted at different aspects of the service. To summarise the testing requirements, the eTravel application needs:
• Review and evaluation of log information to compile different “versions” of the application using different SOA components and services. For testing purposes, this requires software compilation and quality evaluation for different sets of modules.
• The correctness of application behaviour, especially across different hardware configurations, ranging from small to medium and very large deployments. For testing purposes, this needs an environment which can be changed to mimic different sizes of deployment.
• Application performance measurements under different conditions. This requires system-‐test type facilities.
• Usability, for which user interaction or simulated interaction is required.
The TEFIS platform and the associated testbeds are ideally suited to offer support for this kind of multi-‐faceted testing. The testbeds offer different facilities for large-‐scale deployments, software compilation and quality services, and end-‐user evaluation. In collaboration with ActiveEon, who own the use case and developed the original application, this test has been run across different testbeds to test compilation, quality and performance.
ETICS: provides advanced compilation and software quality assessment.
The eTravel application, log files and individual components were loaded and different compilations completed and evaluated. Optimal compilations were used for performance testing.
PACA Grid: provides large-scale configurations, supporting different operational environments and able to simulate incoming network traffic.
Compiled versions of the applications were presented within a typical operational environment (with respect to database, operating system, and so forth), and successfully run. Performance measures were gathered and presented to the experimenters for evaluation.
PlanetLab: now that the initial run has been demonstrated for performance testing on PACA Grid, the use case will also be run on PlanetLab to explore other load and environmental parameters.
The successful running of the use case involved all components within TEFIS. The eTravel experimenter first had to create a user account via the TEFIS portal and define the experiment using the TEFIS Experiment Manager. Connectors were developed in line with the TCI specification for ETICS and PACA Grid. The experiment as defined in the Experiment Manager was then processed via the Core Services and submitted to the appropriate connectors: ETICS and PACA Grid in sequence. The input and output data for each stage were presented to and retrieved from the TEFIS Data Services, using the RPRS and TIDS.
The use case will continue to be exercised, with potential modification and extension to the PlanetLab facilities, both in support of the eTravel case as a baseline engagement for TEFIS and as a useful regression test as the underlying components are modified, to validate that the TEFIS platform continues to provide appropriate use case handling.
3.2. eHealth
3.2.1. Initial Experiment Template
The eHealth case, in its updated form, may be summarised as follows:
• create, collect and update a patient medical record; and
• simulate a live video stream.
3.2.2. Experiment Overview
The experiment will focus on an eHealth case, specifically on the scenario of collecting patient medical record data including family medical history information.
Many of the problems with patient medical records today are related to the poor collection of patient data; in a large number of cases, patients cannot be identified later. This leads to problems with their historical medical data, and makes it impossible to cross-reference this medical data for any statistical survey.

Besides a significant increase in long-distance assistance and even surgeries, network demand is increasing in medical applications, along with the need to ensure QoS in those applications.
Additionally, this case will help to shape the TEFIS architecture, by providing requirements for a particular experiment to be run on the platform. In the context of TEFIS this experiment will help to validate both the platform, as well as the test-‐beds that are used by the eHealth case.
3.2.3. Hypothesis Definition
In this new version of the experiment, multimedia files such as videos and images have been added to the patient medical record (PMR).
To create the patient medical database, the necessary information is collected by a paramedic at the patient’s house (in which case the application is run on a mobile device) or by a receptionist at a hospital (when the application is run at a desktop station).
This record can be modified and updated by a doctor or a nurse to reflect the current status of the patient.
This application covers three main aspects of the PMR: patient personal data, patient medical history, and the results of any tests, from simple text documents to large multimedia files; it does not, however, cover live activities such as long-distance surgeries.

In consequence, a separate program will be run alongside this application to handle issues associated with live activities.
3.2.4. Experimental Method and Procedure
The experiment will be tested using the test-beds available in the TEFIS platform, following the data flow in the experiment. In this new version, described in this document, the scenario will focus on two testbeds; in the future, more testbeds may be added to the experiment to test other features of the application.
For the first version of this case, the ETICS system was designated to test the quality of the smart-phone software and of the database where the medical data are stored. However, this has been removed from this version to simplify the case and to concentrate more specifically on TEFIS validation.
PACA-Grid is now responsible for the stress test of the system. The application is being prepared to receive external inputs so that it integrates more closely with the operational environment on this testbed.
KyaTera is a special kind of testbed, providing an environment to evaluate network performance. It is particularly beneficial when evaluating network features and capabilities, such as required bandwidth, jitter, delay, and the extent of packet loss.
Figure 14 below shows the experiment workflow, in which the experiment phases can be viewed as well as the relationship and dataflow between the testbeds.
Figure 14: The eHealth experiment workflow.
• Phase 1: Specification of the experiment;
• Phase 2: PACA-‐Grid and KyaTera configuration;
• Phase 3: Network evaluation loop;
• Phase 4: Display the final results.
3.2.5. Variables
Independent variables:
• Network up and down time;
• Number of patient records in the data base;
• Time to transmit a form to the data base;
• Amount of packet loss;
• Total amount of data generated.
Dependent variables:
• System response time;
• Transmission bandwidth needed for operation;
• Percentage of network uptime;
• Percentage of the server used;
• Latency.
Control variables:
• Max network bandwidth;
• Jitter;
• Max delay.
3.2.6. Metrics and Measurements
Independent variables:

Network up and down time: measured as the time the network connection is up or down. This is important for establishing the reliability of the network; the unit of measurement is seconds (up and down).

Number of patient records in the database: shows the number of records in the database.

Time to transmit a form to the database: measures the time, in milliseconds, for a form to arrive at the database. This measure will help to gauge system response time in the case of form sending.

Number of packets lost: measures the number of internet packets lost during transmission across the network.

Total amount of data generated: the amount of data (medical records) generated by the PACA-Grid testbed. It gives the size in bytes of the input data for the experiment; changes to this value will affect the values of the dependent variables.
Dependent variables:
System response time: measured as the time the system takes to respond to an action. It is also important to estimate how this time is perceived, and provide feedback to the user of the system.
Transmission bandwidth needed for operation: measured by the sum of all bandwidths needed for sending the forms to the database. This will be an estimate based on the average bandwidth required for a standard form times the average number of forms sent per time period. This value is important to estimate the max network bandwidth, mainly the incoming network into the database centre.
Percentage of network uptime: measured by dividing uptime by the total time of network operation. The value is expressed as a percentage and represents an estimate of the daily uptime.
Percentage of the server used: reflects the percentage of the server used by the number of PMRs. This measure could be used to estimate the average size of a PMR if taken in conjunction with the numbers of patient records in system.
Latency: measured in milliseconds, and is meant to show the time taken for a packet to transfer to a node in the network and back again.
Control variables:

Max network bandwidth: the maximum network bandwidth available in the region; this value cannot be less than the transmission bandwidth needed for operation.

Jitter: the delay variation. In multimedia applications, jitter needs to be constant so that a smoother transmission can be achieved. This variable is measured in milliseconds.

Max delay: the maximum time allowed between two packets. It is important for measuring the quality of the system in a deterministic manner. It is measured in milliseconds.
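For illustration only, the snippet below shows how some of these metrics could be computed from raw measurements; all input values are invented for the example.

# Percentage of network uptime from measured up/down times (seconds).
uptime_s, downtime_s = 86000, 400
uptime_pct = uptime_s / (uptime_s + downtime_s) * 100

# Estimated transmission bandwidth: average bandwidth of a standard form
# times the average number of forms sent per second (assumed values).
avg_form_kbps = 64
forms_per_s = 10
needed_bandwidth_kbps = avg_form_kbps * forms_per_s

# Jitter as delay variation: mean absolute difference between successive
# latency samples (milliseconds).
latencies_ms = [42.0, 44.5, 41.8, 43.1]
jitter_ms = sum(abs(b - a) for a, b in zip(latencies_ms, latencies_ms[1:])) / (
    len(latencies_ms) - 1
)

print(f"uptime {uptime_pct:.2f}%, bandwidth {needed_bandwidth_kbps} kbit/s, "
      f"jitter {jitter_ms:.2f} ms")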
3.2.7. Initial Data and Domain Knowledge
The data comes mainly from the medical domain. It contains types of disease and diagnoses. In addition, it contains geographic information, some personal information like sex and date of birth, and family medical history.
For the experiment, simulated data will be generated, because the use of real data from real patients could lead to a breach of privacy. A hypothetical patient will therefore be created, with generic data, to populate the experiment.
3.2.8. Technical Feasibility and Testbed Resourcing/Configuration
Hardware requirements/specification
There are three devices with some computational power. These are:
• A mobile device or a desktop PC to run the data collection software, for instance a smart-phone with the capability to connect to the web (3G, 4G or any other wireless internet connection);
• A server to receive the data from the devices; this server can be placed, physically, at one of the KyaTera entry points (Escola Politécnica);
• A second server, in another location, to store all the data collected (simulating the hospital server). This server is connected to the KyaTera network and will receive the data over it.
Software requirements/specification
• Apache Tomcat 6.0 as application server
• Java virtual machine
• MySQL 5.0
3.2.9. Steps
1. The software developer creates a TEFIS account;
2. Specification of the experiment (portal): stress test and image processing using PACA-Grid, booking of the KyaTera physical network and network tests;
3. Create the description of the desired SLA and stressing factors (number of resources and input load – bigger log files or several in a similar time frame);
4. Store the experiment description in the TEFIS portal;
5. Select the available PACA-Grid resources;
6. Create and upload the Job file to PACA-Grid;
7. Configure PACA-Grid to create the data input (PMR generator);
8. User selects the desired measurements in KyaTera from a list of available tests (such as total bandwidth in use, latency, and jitter);
9. User receives the available machines;
10. User configures KyaTera with the desired parameters;
11. KyaTera is set up to receive the data packages;
12. Prepare a workflow to conduct the test (tasks and number of nodes);
13. Execute the test workflow;
14. Collect the data generated by KyaTera; and
15. Collect the data generated by PACA-Grid.
3.3. IMS-‐Botnia
This section describes the experiment to be evaluated. The experiment involves three main issues for today's mobile applications.

• First, the experiment will check TEFIS as a tool to evaluate user data, for example user feedback indicating whether the application is acceptable to users.
• In the second step we will check TEFIS as a validation tool, for example whether this application can be validated.
• In the third step we will check TEFIS as a business model and user experience validation tool; for example, we would like to know what the correct business model for operator applications would be.
These three steps together form an overall use case that should exercise the entire TEFIS platform. In addition, for the SQS testbed as a validation tool we will use Q-Mobile, a mobile application certification tool normally used by SQS.
On the one hand, Q-Mobile certification includes standard industry testing criteria (Java, Symbian, Windows Mobile applications), de facto standards and regulation standards. On the other hand, it is conceived and designed as a flexible process: SQS can design customized tests based on specific client goals, using either internally developed standards or guidelines formally accepted by the industry.
Q-Mobile is one of the tools of the SQS testbed: a set of test cases divided into different test sets (usability tests, functionality tests, integration tests, etc.); these test cases will be executed manually over IMS.
3.3.1. Experiment overview
The experiment will focus on a mobile application over IMS, and is divided into three different phases.
This experiment will be performed via the TEFIS platform, using the TEFIS portal, the call service, the SQS test-beds and a living lab, and will also use all the resources from TEFIS. In the TEFIS context, this experiment will help validate and test the platform, as well as the test-beds that are used in this experiment.
Hypothesis definition
The experiment stems from the idea of developing an application for content sharing over IMS. For example, Company A (an SME) has created a concept for a new application for content sharing over IMS. The experiment will be divided into three stages. In the first, before starting development and showing the idea to an Operator, Company A wants to evaluate the concept by gathering feedback on the idea from potential users.

If the feedback is good, they will continue to the second phase: Company A will develop the application and perform system acceptance testing (including functional and non-functional aspects) on it together with the Operator. Once the application has been accepted, and before roll-out, they proceed to the third and final phase, in which the Operator needs to select the best business model for the application, again using feedback from potential users on the business models selected.
Figure 15: Overview experimental procedure. (The figure shows the three phases run on the TEFIS platform: Phase 1, TEFIS as a concept evaluation tool, using the Living Lab; Phase 2, TEFIS as a validation and evaluation tool, using the Living Lab and the SQS testbed; Phase 3, TEFIS as a business validation tool, using the Living Lab and the SQS testbed.)
Experimental Method and Procedure
Phase 1: Evaluation
A Service Provider Company has an idea for a new application for content sharing over IMS. They want to gather feedback from users about this application.
For this evaluation, Botnia resources will be used. Initially, a video with five different application demos will be sent for evaluation to twenty users, who will fill in a questionnaire with their opinions, as a basis for a User Involvement Methodology document.
In addition, the video will be sent to a Business Model Creation and Evaluation Expert for his/her professional evaluation and conclusions.
REQUEST TO TEFIS: Concept evaluation
INPUT TO TEFIS: Video explaining the features of the new application
REPORT FROM TEFIS: Feedback on the usability and usefulness of the application from users and an expert.
Specification of the experiment to perform:
• Testing type: concept evaluation
• Technology: IMS
• Resources: User Involvement Methodology, User Database and User Evaluation Expertise
Configuration of the environment:
• Decide purpose of user involvement
• Decide who to involve
• Prepare involvement methodology
• Plan user involvement in detail
• Prepare user involvement activities
• Recruit users
Execution of the experiment:
• TEFIS distributes the video and evaluation frameworks to the users and the User Evaluation Expert.
• Run user-‐involvement activities
Report:
• Get analysis of the data received from the users and expert.
Phase 2: Prototype Validation
With a positive response from the users and the expert, the Service Provider Company offers the application to the Operator. Once the Operator accepts, development is undertaken. When the application is delivered to the Operator, they decide to perform functional acceptance testing on it. For that, the user will select from the Q-Mobile test set what he wants to do (usability test, functionality test, integration test, etc.); in this case we choose a functionality test for the experiment.
Specification of the experiment to perform:
• Testing type: functional and usability testing
• Technology: IMS
• SQS Resources: Emulated IMS core, Network monitoring Testing Tools, Customized Test plan, Q-‐Mobile, Human Team
• Botnia Resources : User Involvement Methodology, User Database and User Evaluation Expert
Configuration of the environment:
• Send the application to the SQS testbed, configure the IMS core, and select the Q-Mobile test set for validation of the application.
• Expert evaluation of usability
Execution of the experiment:
• Send core configuration (DNS, users, application services)
• Receive connection information
• Execute Q-Mobile test cases
Report:
• Monitoring files
• Validation test report
• Usability test report.
Phase 3: Business model selection
The application has successfully passed the functional testing phase. The Operator decides to roll out the application, but has some concerns about which business model to use, and wants final feedback on user experience in a real-life setting. They have two charging models in mind:
1. Flat rate
2. A rate for 2 GB, after which bandwidth is throttled
The Operator, jointly with the Service Provider, needs to evaluate the two business models in order to select the more appropriate one. To this end, the Operator will monitor selected users as potential customers and determine the better business model in terms of:
1. Network usage (bit rate for each model)
2. User experience feedback : Preferences of the potential users
Specification of the experiment to perform:
• Testing type: network usage, user feedback
• Technology: IMS
• SQS Resources: Emulated IMS core, Network monitoring Testing Tools, Customized Test plan, Q-‐Mobile, Human Team
• Botnia Resources : User Involvement Methodology, User Database and User Evaluation Expertise, Business Model Expertise.
Configuration of the environment:
• Request the connection between the living lab and the SQS testbed; configure the living lab.
• Decide purpose of user involvement
• Decide who to involve
• Plan user involvement in detail
• Prepare user involvement activities
• Recruit users
• Run user-‐involvement activities
Execution of the experiment:
• Send application under test to TEFIS
• Monitor all the users’ feedback
Report
• Receive monitoring files on each business model and user experience feedback
• Get user data analysis on each business model and user experience feedback
Variables
These are the variables for the experiment:
• Independent variables:
o Number of users on the Living Lab;
o User profile;
o Q-‐Mobile Test sets;
o Mobile devices
• Dependent variables:
o Functional test passed;
o Max network bandwidth used by user;
o Utility;
o Usefulness;
o Usability
o User experience
• Control variable:
o Max network bandwidth
• User feedback variables
o System acceptability:
o Social acceptability:
o Practical acceptability:
o Usefulness:
o Utility:
o Usability:
§ Learnability;
§ Efficiency,
§ Memorability;
§ Errors;
§ Satisfaction;
o Attitude (likability):
o User experience:
§ Enjoyable, Fun
§ Engaging
§ Pleasurable
§ Motivating
§ Helpful
§ Exciting
§ Boring
§ Provocative
§ Rewarding
Metrics and Measurements
Number of users on the Living Lab:
• The number of users of the Living Lab, measured numerically; the measurement takes place from the time an application starts the three-phase process.
User profile:
• It is very important to know what kind of users are providing feedback for the application.
Q-‐Mobile Test sets:
• Q-Mobile test sets are selected when the application is presented to the SQS testbed. These test sets are provided as one of the tools available on the SQS testbed. Selecting one or other of the test sets will be an important variable in the validation. Currently, we only plan to use functional test sets to check functional behaviour. This variable will be measured as the time spent executing those test cases.
Mobile devices:
• As we are going to check a mobile application, we need to control the number of mobile devices for each application. This will be measured as the number of mobiles used to check the application.
Functional test passed:

• The validation of the application depends on whether or not the application passed the functional test.
Max network bandwidth used by user:
• In the last phase it will be important to decide on the best business model; therefore, we need to know how much bandwidth has been used by the users during a certain period of time.
Utility:
• Utility is regarded as an important facet of the user's evaluation.
Usefulness:
• Usefulness, like utility, is one of the user-data variables analysed when evaluating the application.
Usability
• To check usability, the usability test set from Q-Mobile will be used; it will be measured as whether the application passed or failed validation.
User experience:
• The user experience variable will be measured as a function of the user experience under the different business models.
Max network bandwidth:
• Max network bandwidth: the maximum network bandwidth available in the region; this value may not be less than the transmission bandwidth needed for operation.
System acceptability:
• The ability of the system to meet all needs and requirements of all stakeholders, from direct users to customers, etc.
Social acceptability:
• The correspondence of the system to the social rules and norms that apply in a given context
Practical acceptability:
• The acceptability of the system as regards cost, reliability etc.
Usefulness:
• The ability of the system to achieve a desired goal. This can be broken down into utility and usability
Utility:
• The ability of the system to do what is needed
Usability:
• The practical usability needed by the user of system functionality. This can be broken down into five categories
Learnability:
• How easy it is for users to accomplish basic tasks the first time they use the design
Efficiency:
• Refers to how quickly a user can perform their tasks once they have learned the system
Memorability:
• Refers to how easily the users can re-‐establish proficiency after not using the design for a while
Errors:
• Means how many errors users make, how easily they can recover from the errors and how severe the errors are
Satisfaction:
• How pleasant the user thinks it is to use the design
Attitude (likability):
• Attitude refers to the user’s perceptions, feelings and opinions of the product. This is usually captured in both written and oral interrogation.
3.3.2. Initial Data and Domain Knowledge
The initial data will be a real mobile application selected for this specific purpose. We will also have the video to be used in phase 1 as input to TEFIS.
3.3.3. Technical Feasibility and Testbed Resourcing/Configuration
The feasibility of the experiment, considering the system under test and the available testbed resources, is high. It is likely that for some experiments changes will be required to both the system under test and the testbed infrastructure.
4. New Experiments
4.1. Typology of Experiments
When describing an experiment to TEFIS, there are various factors which should be specified:

• Experiment title
• Coordinator: name and organisation of the experiment's coordinator
• Summary: a brief description of what is going to be done, and why
• Experimental period: in what timeframe will the experiment be performed?
• Hypothesis definition/Purpose of the experiment: what is the real point of the experiment? What are you trying to find out with your experiment?
• Involved testbeds: which TEFIS testbeds are concerned by the experiment?
• Involved internal systems: which TEFIS internal systems are to be used by the experiment?
• Experimental method and procedure: what steps are involved in running the experiment?
• Actors involved: who is going to take part in the experiment? Will there be end-users testing out some application, and what will they be doing? Or is it more about system set-up and performance?
• Variables: what factors are of interest for the investigation? How will they be changed or controlled?
• Metrics and Measurements: what will you be measuring and how?
• Technical Requirements: what (type of) system or systems are needed to support your experiment?
• Any specific details
4.2. QUEENS (Dynamic Quality User Experience ENabling Mobile Multimedia Services)
Experiment title Dynamic Quality User Experience ENabling Mobile Multimedia Services (Queens)
Coordinator Prof. Symeon Papavassiliou
Institute of Communication and Computer Systems (ICCS)
Summary QUEENS aims at creating and prototyping a novel framework to extend Quality of Service (QoS) to Quality of Experience (QoE) in the case of mobile on-‐demand multimedia applications. QoE is usually considered as an off-‐line subjective user opinion on the service quality, unrelated to specific networking properties. QUEENS provides an opportunity to consider QoE provisioning as a dynamic process which allows users to express their feelings in real time.
Experimental period 12 months
Hypothesis definition/Purpose of the experiment
How will users react to multimedia service performance variations, and what is the best way (UI) of dynamically collecting the above preferences? Fundamental UI options for reflecting dynamic QoE will be investigated, in terms of a) a requested-action feasibility indicator, b) single-step / scalable / multi-optional performance indicators of user preferences, and c) a cost-of-request indicator in terms of pricing or alternative approaches (e.g., coupon collection, etc.).
How can users’ dynamic preferences be correlated to networking QoS metrics? To correlate QoS and QoE proficiently in a quantitative and pragmatic manner, service-‐aware dynamically adaptive utilities will be created that will correlate user’s dynamic reactions-‐expectations to network performance metrics (e.g., server-‐bit rate, video resolution, frames, etc.). These utilities will be used as feedback to the RRM mechanism of the wireless access network to optimize system performance (a novel top-‐down cross-‐layering).
Involved testbeds ETICS, PlanetLab, Botnia, IMS
Involved internal systems All
Experimental method and procedure
• Laying the foundations and the overall concepts of the proposed Dynamic QoE Provisioning Mechanism, with the help of Botnia.
• Testing of the top-down cross-layering QoE-aware resource allocation/management mechanism, through ETICS and PlanetLab.
• Prototyping and validation, through IMS, of the mobile QoE-aware multimedia application.
• Enabling the IMS-aware application prototype over Botnia, in order to evaluate the proposed QoE-aware mechanism.
Actors involved Up to 100 real users with multimedia mobile phone devices in the age range 18–40 years.
Variables • Codec types
• Sender bit rate
• Sender frame rate
• Resolution, based on user feedback
Metrics and Measurements • Users’ dynamic QoE interactions
• Peak Signal to Noise Ratio
• Moving Pictures Quality Metric
• Off-line measurements of users' experience will be made at the end of every session.
Technical Requirements • A specific client-‐server installation.
• Matlab-based emulated servers (for emulating CDMA cell operation).
• IMS mobile client capable of consuming video services and of being extended to support the proposed dynamic QoE scheme (alternatively a set of test cases will be implemented if the client cannot be properly altered, emulating end-‐user behaviour).
• Interconnection of the augmented video server with IMS emulated environment for functional testing (Refer to 1 of Metrics & Measurements of IMS Testbed description).
• Interconnection of the augmented video server with IMS testbed for performance testing scenarios (Refer to 2 and 3 of Metrics & Measurements of IMS Testbed description).
• Deployment of monitoring infrastructure for the collection of metrics in the IMS framework.
Any specific details None
4.3. Experiment 2 (TEFPOL)
Experiment title AugmenTed rEality collaborative workspace using Future Internet videoconferencing Platform fOr remote education and Learning (TEFPOL)
Coordinator Krzysztof Kurowski
Instytut Chemii Bioorganicznej PAN
Summary TEFPOL aims at enhancing the videoconferencing experience using advanced visualization and real-time technologies. TEFPOL will extend existing videoconferencing capabilities to offer an integrated large-scale platform for online education and training purposes. The experiment will validate, thanks to the TEFIS platform and facilities, various hypotheses concerning networking, future videoconferencing, and real-time CPU/GPU processing. It will also provide knowledge of how to stimulate learning and discovery processes through a new, more cognitive virtual collaborative experience.
Experimental period 12 months
Hypothesis definition/Purpose of the experiment
The goal of Tefpol is to integrate, deploy and test innovative videoconferencing, advanced visualization and real-‐time service-‐oriented computing technologies.
Based on the distributed TEFIS facilities, Tefpol plans to extend existing capabilities provided by videoconferencing and remote visualization solutions to offer an integrated large-‐scale platform for online education and training purposes.
Involved testbeds PacaGrid, Botnia, IMS, ETICS
Involved internal systems ProActive
Experimental method and procedure
• Adaptation of the Vitrall visualization system for PACAGrid.
• Adaptation of the Vitrall visualization system communication modules in order to make them compatible with ProActive communication schemes.
• Execution of the Vitrall visualization system on PACAGrid.
• Testing and validation, through IMS, of a communication protocol between high-resolution videoconferencing and visualization services, to establish real-time high-resolution sessions among all clients and remotely shared virtual 3D models.
• ETICS will help us to perform testing and software integration activities to improve the quality of the visualization services.
• BOTNIA will help in presenting preliminary experiment outcomes, by adding new web user interfaces and example education applications using high-resolution streaming and visualization to the TEFIS portal.
Actors involved About 50 end-users, split into groups of 2 to 4 users.
Variables • Application layout, motion tracking interface functionality and sensitivity
• Audio/Video stream parameters (codec, bit rate, frame rate, resolution).
• Number of users simultaneously participating
• Level of detail of the rendered virtual object
• Network connections
Metrics and Measurements • Latency of audio, video, and virtual object streams, in ms.
• Responsiveness of the motion tracking interface, in ms.
• Maximum number of users that can simultaneously use the application and participate in a given collaboration session.
• List of IMS services the application is compatible with.
• Number of PACA-Grid resources (both CPU and GPU) required for the parallel rendering process to achieve the real-time ratio.
• Maximum number of images.
• Maximum resolution and level of detail of the rendered model.
• Maximum number of different client types connected to the engine.
• Maximum delay that end users accept during a videoconferencing session enhanced with augmented reality.
Technical Requirements • GPU based computing resources.
• Several machines in the grid environment.
• OpenGL library.
• CUDA software installed on computing resources.
• Access to about 50 end-users, each with access to a PC with an HD camera and HD monitor.
• End-‐users need to have access to IMS infrastructure.
Any specific details None
4.4. Experimenting with Quagga Open API and Cross-‐layer Coordinated Networks
Experiment title Experimenting with Quagga Open API and Cross-‐layer Coordinated Networks
Coordinator Marcelo Yannuzzi
Universitat Politecnica de Catalunya
Summary
Experimental validation of an open API for Quagga (an open-source software routing suite), in order to evaluate new open and programmable network infrastructures which will allow researchers and network operators to experiment with network and traffic management paradigms across multiple layers.
Experimental period 12 months
Hypothesis definition/Purpose of the experiment
This experiment will study, under different constraints and topologies, the application of different offloading strategies, along with the application of the Open API over a large set of PlanetLab nodes, in order to assess the scalability and the usefulness of the solution.
Involved testbeds PlanetLab, Kyatera
Involved internal systems Probably: TEFIS Data Services, Supervision Module for monitoring
Experimental method and procedure
We will guarantee the correctness of the tests by performing repetitions under the same conditions, or by fixing all parameters except the one under analysis in the different evaluations we perform. In all tests evaluating the performance of offloading, the reference data will be taken from the case where no offloading is applied.
With reference to the evaluation of the Open API, we will centre the study on latencies and signalling traffic for the different tests.
Actors involved Universitat Politecnica de Catalunya and Technische Universitaet Braunschweig
No humans are involved directly in the experiment.
Variables • Performance of the Open API in Quagga.
• The algorithmic core and coordinated control process that runs as a third-party application, supplied by the FP7 ONE project.14
Metrics and Measurements • Latency of offloading triggering process
• Required time for network convergence after applying the offloading
• Packet losses during the process
• Throughput increase when using offloading
Technical Requirements • 250 nodes on the PlanetLab testbed.
• 25 nodes on KyaTera
• Administrative privileges to run Quagga upgraded with Open API.
Any specific details None
14 E. Griliches. “The Junos SDK: A Market Update on Innovation”. Juniper Whitepaper
4.5. Experiment 4 (Smart Ski)
Experiment title Smart Ski Resorts
Coordinator Jean-‐Marc Seigneur
University of Geneva
Summary
The smart ski resort experiment is intended to help improve some aspects of a mobile application dedicated to providing new services to the Megève ski station; this application has frustrated its customers due to connectivity issues (expensive for foreign subscribers, infrastructure overload, etc.).
Experimental period 6 months
Hypothesis definition/Purpose of the experiment
The mobile application will be connected to:
• PlanetLab: to measure, monitor, and analyse how it can be improved, from a network point of view.
• IMS: to validate and test the ski piste video streaming application over IMS, and
• Botnia: to study how to regain user satisfaction by better defining with them features for the next release.
Involved testbeds PlanetLab, IMS, Botnia
Involved internal systems Probably: TEFIS Data Services, Supervision Module for monitoring
Experimental method and procedure
• Instrumentation of the mobile application and the outdoor wireless network, connected to the PlanetLab testbed, to measure and analyse the best network strategies for optimising skiers' on-site access and user experience.
• Improvement of the current network performance
• Experiment with some volunteers in order to collect network data.
• Analysis of the gathered data
Actors involved • University of Geneva
• Eyeski Media Ltd
• Commune de Megève
• Tefis PlanetLab support members
• End-‐users that have agreed to test the application.
Variables • Application settings
• Mesh network deployment
• Network strategies
Metrics and Measurements • Network QoS
• Streaming performance
• Multimedia and video QoE
Technical Requirements • Instrumentation of the mobile application and the outdoor wireless network to the PlanetLab testbed
• Instrumentation of the mobile application and the outdoor wireless network to the IMS testbed
• Instrumentation of the mobile application may be needed for user feedback aligned with the Botnia Living Lab methodologies.
Any specific details None
5. Conclusion
This deliverable D2.1.2 "Global architecture and overall design" extends deliverable D2.1.1, which defined the overall architecture of the system and its high-level design. It refines the first architecture by providing the following improvements:

• It addresses the architectural design of core components (e.g. identity management) by setting out their requirements and potential issues, and by offering a set of possible solutions;
• It extracts monitoring prerequisites, based on stated assumptions, in order to design the whole monitoring process;
• It describes a revision of the connector design which will allow easier integration of living lab testbeds;
• It revises the experiment manager in order to allow parallel task execution;
• It details the integration of the planned testbeds; and
• It introduces the open call experiments and their requirements.
6. Glossary
API Application Programming Interface
CPU Central Processing Unit
GSI Grid Security Infrastructure
HTTP Hypertext Transfer Protocol
iRODS Integrated Rule-Oriented Data System
KPI Key Performance Indicator
LDAP Lightweight Directory Access Protocol
MD5 Message-Digest (algorithm)
QoS Quality of Service
RPRS Research Platform Repository Service
RSA Rivest, Shamir and Adleman15
SIP Session Initiation Protocol
SLA Service Level Agreement
SME Small to Medium Enterprise
SOA Service Oriented Architecture
SSH Secure Shell
TCI TEFIS Connector Interface
TIDS Testbed Infrastructure Data Service
VCT Virtual Customer Testbed
VM Virtual Machine
VPN Virtual Private Network

15 The first authors to describe the associated public-key cryptography
Table of figures:
Figure 1: The TEFIS Software Architecture ................................................................................................... 6
Figure 2: The TEFIS account creation interface ............................................................................ 16
Figure 3: The PlanetLab account creation form ........................................................................... 20
Figure 4: The ETICS interface ...................................................................................................................... 22
Figure 5: Key elements relations of monitoring process. ........................................................................... 32
Figure 6: Data model of experiment design and execution stages with monitoring capabilities ............... 34
Figure 7: Monitoring configuration management at experiment design phase ......................................... 37
Figure 8: Monitoring process and interactions at the experiment runtime phase ..................................... 39
Figure 9: Entering task parameters in the TEFIS Experiment Manager ...................................................... 49
Figure 10: Mapping TEFIS Tests to Test Facilities ........................................................................ 55
Figure 11: TEFIS Tasks running independently ........................................................................................... 56
Figure 12: Tasks with Dependencies ........................................................................................................... 57
Figure 13: Parallel and mutually dependent tasks ...................................................................................... 58
Figure 14: The eHealth experiment workflow ............................................................................ 62
Figure 15: Overview experimental procedure ............................................................................................ 68