Vincenzo Ferme, Ana Ivanchikj, Prof. Cesare Pautasso
Faculty of Informatics, University of Lugano (USI), Switzerland
A CONTAINER-CENTRIC METHODOLOGY FOR
BENCHMARKING WORKFLOW MANAGEMENT SYSTEMS
Marigianna Skouradaki, Prof. Frank Leymann
Institute of Architecture of Application Systems, University of Stuttgart, Germany
The BenchFlow Project
Context » Benchmarking Requirements » Methodology Overview » Methodology Details » Advantage of Containers » 1st Application » Future Work
“Design and implement the first benchmark to assess and compare the performance of WfMSs that are compliant with the Business Process Model and Notation 2.0 standard.”
What is a Workflow Management System?
[Figure: a WfMS deployed on an Application Server, with an Instance Database managed by a DBMS; Users and Applications interact with the WfMS, which executes process models (tasks A, B, C, D) and invokes Web Services]
Many Vendors of BPMN 2.0 WfMSs
https://en.wikipedia.org/wiki/List_of_BPMN_2.0_engines

Aug 2009: BPMN 2.0 (beta) · Jan 2011: BPMN 2.0 · Jan 2014: BPMN 2.0.2 (ISO/IEC 19510)

Year of the first version supporting BPMN 2.0 | New WfMSs | Cumulative
2009 | 1 | 1
2010 | 5 | 6
2011 | 4 | 10
2012 | 1 | 11
2013 | 8 | 19
2014 | 2 | 21
2015 | 2 | 23
2016 | 0 | 23
Grand total: 23

[Figure: number of BPMN 2.0 WfMSs by year of the first version supporting BPMN 2.0, 2009 to 2016]
Benchmarking Requirements
• Relevant
• Representative
• Portable
• Scalable
• Simple
• Repeatable
• Vendor-neutral
• Accessible
• Efficient
• Affordable
• K. Huppler, The Art of Building a Good Benchmark, 2009
• J. Gray, The Benchmark Handbook for Database and Transaction Systems, 1993
• S. E. Sim, S. Easterbrook et al., Using Benchmarking to Advance Research: A Challenge to Software Engineering, 2003
Why a new Methodology?
• Already standard benchmarks
• Research: no interaction with vendors
No available methodology involves vendors in both defining a standard benchmark and benchmarking production systems.
Why a Container-Centric Methodology?
Containers are an emerging technology, lightweight, and with negligible performance impact.
• Relevant
• Representative
• Portable
• Scalable
• Simple
• Repeatable
• Vendor-neutral
• Accessible
• Efficient
• Affordable
What about the other Requirements?
• Relevant
• Representative
• Portable
• Scalable
• Simple
• Repeatable
• Vendor-neutral
• Accessible
• Efficient
• Affordable
Benchmarking Choreography
Figure 4: Benchmarking Methodology Choreography
1. BenchFlow → Vendor: provide the benchmarking methodology (Benchmarking Methodology Agreement Proposal).
2. Vendor ↔ BenchFlow: agree on adding the Vendor's WfMS to the benchmark (Signed Agreement).
3. BenchFlow → Vendor: Containerized WfMS Request; Vendor → BenchFlow: provide the containerized distribution of the WfMS (Containerized WfMS).
4. BenchFlow → Vendor: provide draft benchmark results (Results Verification Request, Draft Benchmark Results).
5. Vendor → BenchFlow: verify the benchmark results (Results Verification Outcome). Are the results valid? If not, the verification is repeated; if valid, BenchFlow publishes the Verified Benchmark Results to the Community.
[…] depending on the WfMS's architecture. The DBMS Container can refer to an existing publicly available Container distribution. The containerized WfMS should be publicly available (e.g., at the Docker Hub registry, https://hub.docker.com/), or the BenchFlow team should be granted access to a private registry used by the vendor. The same applies to the Containers' definition file, i.e., the Dockerfile (Turnbull, 2014, ch. 4). While private registries are a solution that can work with vendors of closed-source systems, they impact the reproducibility of the results. For each WfMS version to be included in the benchmark, there must be a default configuration, i.e., the configuration in which the WfMS Containers start without modifying any configuration parameters, except the ones required to correctly start the WfMS, e.g., the database connection. However, if vendors want to benchmark the performance of their WfMS with different configurations, for example, the configuration provided to users as a "getting started" configuration, or production-grade configurations for real-life usage, they can also provide different configurations. To do that, the Containers must allow issuing the WfMS configurations through additional environment variables (https://docs.docker.com/reference/run/#env-environment-variables) and/or by specifying the volumes to be exposed (https://docs.docker.com/userguide/dockervolumes/#mount-a-host-directory-as-a-data-volume) in order to access the configuration files. Exposing the volumes allows access, on the host operating system, to the files defined inside the Containers. Precisely, the WES Container has to, at least, enable the configuration of: 1) the used DB, i.e., DB driver, URL, username and password for the connection; 2) the WES itself; and 3) the logging level of the WES and of the application stack layers on which it may depend. Alternatively, instead of providing configuration details, the vendors can provide different Containers for different configurations. However, even in that case enabling the DB configuration is required.

In order to access relevant data, all WfMS Containers have to specify the volumes to access the WfMS log files, and to access all the data useful to set up the system. For instance, if the WfMS defines examples as part of the deployment, we may want to remove those examples by overriding the volume containing them. Moreover, the WES Container (or the DBMS Container) has to create the WfMS's DB and populate it with the data required to run the WfMS. The vendors need to provide authentication configuration details for the WfMS components (e.g., the user with admin privileges for the DBMS). Alternatively, the vendor may simply provide the files needed to create and populate the DB.
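The configuration contract described above can be sketched as a `docker run` invocation. This is an illustrative sketch only: the image name, environment variable names, container paths, and link alias are hypothetical, not taken from any vendor's actual distribution.

```shell
# Sketch: starting a vendor's WES Container with the minimum required
# configurability: DB connection and logging level via environment
# variables, plus volumes exposing the configuration files and logs
# on the host. All names below are hypothetical.
docker run -d --name wfms-wes \
  -e DB_DRIVER=com.mysql.jdbc.Driver \
  -e DB_URL=jdbc:mysql://dbms:3306/wfms \
  -e DB_USER=benchflow \
  -e DB_PASSWORD=benchflow \
  -e LOG_LEVEL=WARN \
  -v "$PWD/config":/opt/wfms/conf \
  -v "$PWD/logs":/opt/wfms/logs \
  --link wfms-dbms:dbms \
  vendor/wfms:1.0.0
```

With such a contract, BenchFlow can switch between the default, "getting started", and production-grade configurations by changing only environment variables or mounted files, without rebuilding the vendor's image.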
Benchmarking Methodology: provide the methodology
The Benchmarking Process (Input/Process/Output model)
Derive: Workload Model · WfMS Configurations · Test Types · Metrics · KPIs
Workload Model: Workload Mix (e.g., 80% of a process with tasks A, B, C; 20% of a process D), Test Data, Load Functions
Execution: Containers
Output: Performance Data, e.g., Workflow Instance Duration and Throughput
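A workload mix like the one above (80% of instances of one process, 20% of another) can be realised in a load driver by a weighted random draw. A minimal sketch, with illustrative process names and a fixed seed for repeatability:

```python
import random

def choose_process(mix, rng=random):
    """Pick the next process to instantiate according to a workload mix,
    given as {process_name: fraction}; the fractions should sum to 1."""
    names = list(mix)
    return rng.choices(names, weights=[mix[n] for n in names], k=1)[0]

# 80% / 20% mix, as on the slide; names are hypothetical.
mix = {"process_C": 0.8, "process_D": 0.2}
rng = random.Random(42)  # fixed seed so the benchmark run is repeatable
sample = [choose_process(mix, rng) for _ in range(1000)]
share_c = sample.count("process_C") / len(sample)  # close to 0.8
```

Fixing the seed matters for the Repeatable requirement: two runs of the same test draw the same instance sequence.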
Benchmarking Methodology: agreement with vendors
Main Agreement Points:
• Production Stable Release
• Provide defined APIs
• Share Containerised WfMS
• Authorise Publishing of Results
Benchmarking Methodology: requirements from the WfMS (WfMS Configurations)
CORE
• Initialisation APIs: Deploy Process, Start Process Instance
NON-CORE
• User APIs: Create User, Create Group, Pending User Tasks, Claim Task, Complete Task
• Event APIs: Pending Event Tasks, Issue Event
• Web Service APIs: Invoke WS
Table 1: Summary of Core and Non-core APIs to be implemented by the WfMS
(Functionality → minimal response data)

Core APIs
  Initialisation APIs:
  • Deploy a process → Deployed process ID
  • Start a process instance → Process instance ID
Non-core APIs
  User APIs:
  • Create a user → User ID
  • Create a group of users → User group ID
  • Access pending tasks → Pending tasks IDs
  • Claim a task*
  • Complete a task
  Event APIs:
  • Access pending events → Pending events IDs
  • Issue events
  Web service APIs:
  • Map tasks to Web service endpoints

*Optional depending on the WfMS implementation
2.1.2 System Under Test

The SUT refers to the system that is being tested for performance. In our case it is a WfMS which, according to the WfMC (Hollingsworth, 1995), includes the Workflow Enactment Service (WES) and any environments and systems required for its proper functioning, e.g., application servers, virtual machines (e.g., the Java Virtual Machine) and DBMSs. The WES is a complex middleware component which, in addition to handling the execution of the process models, interacts with users, external applications, and Web services. This kind of SUT offers a large set of configuration options and deployment alternatives. The DBMS configuration as well as the WES configuration (e.g., the granularity level of history logging, the DB connection options, the order of asynchronous job acquisition) may affect its performance. Moreover, WfMSs often offer a wide range of deployment alternatives (e.g., standalone, in a clustered deployment, behind a load balancer in an elastic cloud infrastructure) and target different versions of application server stacks (e.g., Apache Tomcat, JBoss).
To add a WfMS to the benchmark, we require the availability of certain Core and Non-core APIs. A summary is presented in Table 1.

The Core APIs are Initialisation APIs, necessary for automatically issuing the simplest workload to the SUT in order to test the SUT's performance. They include: 1) deploy a process and return as a response an identifier of the deployed process (PdID); and 2) start a process instance by using the PdID and return as a response the new instance identifier (PiID).
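The Core API contract (deploy returns a PdID, start returns a PiID) can be written down as an interface. The in-memory stub below is purely illustrative; it stands in for whatever vendor-specific endpoint actually implements the contract:

```python
import itertools
from abc import ABC, abstractmethod

class CoreAPI(ABC):
    """Initialisation APIs that every benchmarked WfMS must expose."""

    @abstractmethod
    def deploy_process(self, model: bytes) -> str:
        """Deploy a process model; return the deployed-process ID (PdID)."""

    @abstractmethod
    def start_instance(self, pd_id: str) -> str:
        """Start an instance of a deployed process; return its PiID."""

class InMemoryWfMS(CoreAPI):
    """Hypothetical stand-in used to exercise the contract without a real WfMS."""

    def __init__(self):
        self._ids = itertools.count(1)
        self._deployed = {}

    def deploy_process(self, model: bytes) -> str:
        pd_id = f"pd-{next(self._ids)}"
        self._deployed[pd_id] = model
        return pd_id

    def start_instance(self, pd_id: str) -> str:
        if pd_id not in self._deployed:
            raise KeyError(f"unknown process: {pd_id}")
        return f"pi-{next(self._ids)}"

wfms = InMemoryWfMS()
pd = wfms.deploy_process(b"<bpmn .../>")  # PdID of the deployed process
pi = wfms.start_instance(pd)              # PiID of the new instance
```

Because the driver only depends on the `CoreAPI` interface, the same simplest workload can be issued against any WfMS that supplies these two operations.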
Depending on the execution language of the WfMS and the constructs that it supports, other Non-core APIs might be necessary for testing more complex workloads. For instance, if we are targeting BPMN 2.0 WfMSs we might also require the following APIs.

For applying workloads involving human tasks, the following User APIs are necessary: 1) create a user and return the identifier of the created user (UsID); 2) create a group of users, return the created group identifier (UgID), and enable adding users by using their UsIDs; 3) pending user/manual tasks: access all the pending user/manual task instances of a given user/manual task identified by its id (Jordan and Evdemon, 2011, sec. 8.2.1) as specified in the model serialization. We want to obtain all the pending tasks with the given id of all the process instances (PiIDs) of a given deployed process (PdID). The API has to respond with data enabling the creation of a collection that maps the process instances to the list of their pending tasks <PiID, UtIDs> and <PiID, MtIDs>; 4) claim a user/manual task identified by UtID/MtID, if tasks are not automatically assigned by the WfMS; and 5) complete a user/manual task identified by UtID/MtID by submitting the data required to complete the task.
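The collection described in point 3 maps each process instance to its pending user-task instances. A sketch of building the <PiID, UtIDs> mapping from a flat API response; the response shape (a list of records with `piID` and `utID` fields) is an assumption made for illustration:

```python
from collections import defaultdict

def pending_tasks_by_instance(response):
    """Group a flat list of pending-task records, e.g.
    [{"piID": ..., "utID": ...}, ...], into {PiID: [UtID, ...]}."""
    mapping = defaultdict(list)
    for record in response:
        mapping[record["piID"]].append(record["utID"])
    return dict(mapping)

# Hypothetical response for one user task across several instances.
response = [
    {"piID": "pi-1", "utID": "ut-10"},
    {"piID": "pi-1", "utID": "ut-11"},
    {"piID": "pi-2", "utID": "ut-12"},
]
pending = pending_tasks_by_instance(response)
```

The driver can then iterate over `pending` to claim and complete each instance's tasks (points 4 and 5).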
To issue a workload containing process models with catching external events, the following Event APIs are necessary: 1) pending catching events/receive tasks: access the pending catching event/receive task instances of a given event/task identified by its id (Jordan and Evdemon, 2011, sec. 8.2.1) specified in the model serialization. We want to obtain all the pending catching events/receive tasks with the given id of all the process instances (PiIDs) of a given deployed process (PdID). The API has to respond with data enabling the creation of a collection that maps the process instances to the list of their pending catching events/receive tasks <PiID, CeIDs> and <PiID, RtIDs>; and 2) issue an event to a pending catching event/receive task identified by using CeID/RtID. We require the APIs to accept the data necessary to correlate the issued event to the correct process instance, e.g., a correlation key.
Finally, to be able to issue a workload defining interaction with Web services and/or containing throwing events, the WfMS has to support a binding mechanism to map each Web service task/throwing event to the corresponding Web service/throwing event endpoint. The WfMS should preferably allow specifying the mapping in the serialized version of the model, so that the binding can be added before deploying the process.
Since many WfMSs are offered as a service, it is safe to assume that many WfMSs expose what we call the Core APIs. In our experience with the systems we have evaluated so far (e.g., Activiti, Bonita BPM, Camunda, Imixs Workflow, jBPM), they support not only the Core APIs, but also the Non-core APIs.
Benchmarking Methodology: containerised WfMSs
• At least two Containers: WfMS and DBMS
• The DBMS Container can refer to an existing publicly available one
• Provide a ready-to-use default configuration (at least)
• Configurability of DBMS, WfMS, and logging level (at least)
Benchmarking Methodology
Vendor
BenchFlow
Provide Benchmarking Methodology
Vendor
BenchFlow
Agree on adding Vendor's WfMS to the
Benchmark
Benchmarking Methodology Agreement Proposal
Signed Agreement
Vendor
BenchFlow
Provide Containerized Distribution of WfMS
Containerized WfMS Request
Containerized WfMS
Vendor
BenchFlow
Verify the Benchmark Results
Results Verification Outcome
Results Verification Request
Vendor
BenchFlow
Provide Draft Benchmark Results
Draft Benchmark ResultsVerified Benchmark Results
Not Valid
Are the Results Valid?
Community
BenchFlow
Publish Benchmark Results Valid
Figure 4: Benchmarking Methodology Choreography
pending on the WfMS’s architecture. The DBMSContainer can refer to an existing publicly availableContainer distributions. The containerized WfMSshould be publicly available (e.g., at the Docker Hubregistry8), or the Benchflow team should be grantedaccess to a private registry used by the vendor. Thesame applies to the Containers’ definition file, i.e., theDockerfile (Turnbull, 2014, ch. 4). While private reg-istries are a solution that can work with vendors ofclosed-source systems, they impact the reproducibil-ity of the results. For each WfMS version to be in-cluded in the benchmark, there must be a default con-figuration, i.e., the configuration in which the WfMSContainers start without modifying any configurationparameters, except the ones required in order to cor-rectly start the WfMS, e.g., the database connection.However, if vendors want to benchmark the perfor-mance of their WfMS with different configurations,for example, the configuration provided to users asa “getting started” configuration, or production-gradeconfigurations for real-life usage, they can also pro-vide different configurations. To do that, the Con-tainers must allow to issue the WfMS configurationsthrough additional environment variables9, and/or byspecifying the volumes to be exposed10 in order to
[8] https://hub.docker.com/
[9] https://docs.docker.com/reference/run/#env-environment-variables
[10] https://docs.docker.com/userguide/dockervolumes/#mount-a-host-directory-as-a-data-volume
access the configuration files. Exposing the volumes makes the files defined inside the Containers accessible on the host operating system. Precisely, the WES Container has to, at least, enable the configuration of: 1) the used DB, i.e., the DB driver, URL, username, and password for the connection; 2) the WES itself; and 3) the logging level of the WES and of the application stack layers on which it may depend. Alternatively, instead of providing configuration details, the vendors can provide different Containers for different configurations. However, even in that case, enabling the DB configuration is required.
In order to access relevant data, all WfMS Containers have to specify the volumes to access the WfMS log files, and to access all the data useful to set up the system. For instance, if the WfMS defines examples as part of the deployment, we may want to remove those examples by overriding the volume containing them. Moreover, the WES Container (or the DBMS Container) has to create the WfMS's DB and populate it with data required to run the WfMS. The vendors need to provide authentication configuration details of the WfMS components (e.g., the user with admin privileges for the DBMS). Alternatively, the vendor may simply provide the files needed to create and populate the DB.
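The configuration and data-access requirements above (environment variables for the DB connection, exposed volumes for configuration and log files, a populated DB with known credentials) could be sketched as a Docker Compose file. This is an illustrative sketch only: the image names, environment variable names, and paths below are assumptions, not part of the BenchFlow specification or of any vendor's actual Containers.

```yaml
# Hypothetical containerized WfMS deployment; image names,
# variable names, and paths are illustrative only.
version: '2'
services:
  wfms:
    image: vendor/wfms:7.0.1          # assumed vendor-provided WES Container
    environment:
      DB_DRIVER: com.mysql.jdbc.Driver  # 1) DB configuration
      DB_URL: jdbc:mysql://db:3306/wfms
      DB_USER: benchflow
      DB_PASSWORD: benchflow
      LOG_LEVEL: INFO                   # 3) logging level of the WES
    volumes:
      - ./config:/opt/wfms/conf         # expose the configuration files
      - ./logs:/opt/wfms/logs           # expose the log files for data collection
    depends_on:
      - db
  db:
    image: mysql:5.6                    # publicly available DBMS Container
    environment:
      MYSQL_DATABASE: wfms              # create the WfMS's DB at startup
      MYSQL_USER: benchflow
      MYSQL_PASSWORD: benchflow
      MYSQL_ROOT_PASSWORD: root         # admin privileges for the DBMS
```

Starting this stack with `docker-compose up` would bring up both Containers in the default configuration; a vendor's "getting started" or production-grade configurations would correspond to overriding the environment section or mounting different volumes.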
executing the benchmark and providing results
Vincenzo Ferme
23
BenchFlow Framework Architecture
[Architecture diagram: the Faban harness with Faban Drivers, the WfMS and the DBMS (with its Instance Database) deployed as Containers on Servers, a Web Service, Minio, and Data Transformers and Analysers that derive the Performance Metrics and Performance KPIs used in Test Execution Analyses.]
[BPM ’15] [ICPE ’16]
Vincenzo Ferme
24
Performance Metrics and KPIs
[Diagram: a Load Driver simulates the WfMS Users; the WfMS (Application Server, DBMS with Instance Database) executes a process with tasks A, B, C, D and interacts with a Web Service.]
Metrics and KPIs:
• Engine Level
• Process Level
• Feature Level
• Interactions
• Environments
Vincenzo Ferme
25
Executing the Benchmark: minimal data requirements
Accessibility of the Data (DBMS, WfMS)
Availability of Timing Data, for each Workflow & Construct:
• Start Time
• End Time
• [Duration]
Availability of Execution State: state of the workflow execution, e.g., running, completed, error
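Given this minimal timing and state data, per-instance durations and state breakdowns follow by straightforward aggregation. A minimal sketch: the record layout and function names below are illustrative, not BenchFlow's actual schema (which is vendor-specific).

```python
from statistics import mean

# Illustrative record layout: (instance_id, start_ms, end_ms, state).
# A still-running instance has no end time yet.
instances = [
    ("i1", 0,  120,  "completed"),
    ("i2", 10, 90,   "completed"),
    ("i3", 20, None, "running"),
    ("i4", 30, 95,   "error"),
]

def duration_stats(rows):
    """Duration (end - start) statistics over completed instances only."""
    durations = [end - start for _, start, end, state in rows
                 if state == "completed" and end is not None]
    return {"n": len(durations), "min": min(durations),
            "max": max(durations), "avg": mean(durations)}

def state_counts(rows):
    """How many instances ended up in each execution state."""
    counts = {}
    for _, _, _, state in rows:
        counts[state] = counts.get(state, 0) + 1
    return counts

print(duration_stats(instances))  # durations 120 and 80 -> avg 100
print(state_counts(instances))
```

The optional [Duration] field on the slide would let the driver skip the subtraction when the engine already records it.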
Vincenzo Ferme
26
Benchmarking Methodology: mock example of benchmark results
Workload Model
+ WfMS A v7.0.1 (App. Server: Apache Tomcat 7.0.62; MySQL: Community Server 5.6.26; J.V.M.: Oracle Serv. 7u79; O.S.: Ubuntu 14.04.01)
+ Hardware Configuration
+ Metrics and KPIs
Vincenzo Ferme
28
Benchmarking Methodology: publish benchmark results
[Zoom on the final part of the choreography in Figure 4: after the Signed Agreement and the Results Verification, the gateway "Are the Results Valid?" leads, when Valid, to Publish Benchmark Results (BenchFlow, Community) with the Verified Benchmark Results.]
Vincenzo Ferme
29
Advantages of using Containers
• Accomplish some Benchmarking Requirements: Portability, Repeatability, Accessibility, Efficiency
• Common way to deploy systems provided by different vendors: Docker Compose, Docker Swarm
• Standard APIs to access Environment Metrics
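The "standard APIs" point refers to interfaces such as Docker's per-container stats endpoint, which reports resource usage in the same JSON shape regardless of which vendor's stack runs inside the Container. As an illustration, CPU utilization can be derived from two consecutive stats samples with the usual delta formula; the JSON layout mirrors Docker's stats API, but the numeric values below are invented for the example.

```python
# Deriving container CPU utilization from two Docker-style stats samples:
# container CPU delta over system CPU delta, scaled by the CPU count.
def cpu_percent(prev, cur):
    cpu_delta = (cur["cpu_stats"]["cpu_usage"]["total_usage"]
                 - prev["cpu_stats"]["cpu_usage"]["total_usage"])
    sys_delta = (cur["cpu_stats"]["system_cpu_usage"]
                 - prev["cpu_stats"]["system_cpu_usage"])
    ncpus = cur["cpu_stats"].get("online_cpus", 1)
    if sys_delta <= 0:
        return 0.0
    return (cpu_delta / sys_delta) * ncpus * 100.0

prev = {"cpu_stats": {"cpu_usage": {"total_usage": 1_000_000},
                      "system_cpu_usage": 10_000_000, "online_cpus": 4}}
cur = {"cpu_stats": {"cpu_usage": {"total_usage": 1_500_000},
                     "system_cpu_usage": 20_000_000, "online_cpus": 4}}
print(cpu_percent(prev, cur))  # 0.5e6 / 1e7 * 4 * 100 = 20.0
```

Because the same endpoint works for every Container, the environment metrics of the WfMS, the DBMS, and the load driver can all be collected with one code path.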
Vincenzo Ferme
30
First Application of the Methodology
[CAiSE ’16] M. Skouradaki, V. Ferme, C. Pautasso, F. Leymann, A. van Hoorn. Micro-Benchmarking BPMN 2.0 Workflow Management Systems with Workflow Patterns. In Proc. of CAiSE ’16, June 2016.
Workload: sequence pattern (Empty Script 1 → Empty Script 2 → …) [load chart: 0 to 1,500 on the y axis over 0:00, 4:00, 10:00 on the x axis]
System under test: 3 WfMSs, each with MySQL
Metrics:
• Engine Level
• Process Level
• Environment
Results: relevant differences among WfMSs
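Load drivers such as Faban define a run in phases (ramp-up, steady state, ramp-down). Purely as an illustration of the load chart on this slide, assuming (this shape is an assumption, read off the chart's axis ticks, not stated by the authors) a linear ramp from 0 to 1,500 simulated users over the first four minutes followed by a steady phase until minute ten:

```python
def active_users(t_s, ramp_up_s=240, steady_s=360, max_users=1500):
    """Illustrative two-phase load profile: linear ramp-up, then steady
    state; returns the number of simulated users active at time t (seconds)."""
    if t_s < 0:
        return 0
    if t_s < ramp_up_s:
        # linear ramp from 0 to max_users over the ramp-up phase
        return int(max_users * t_s / ramp_up_s)
    if t_s <= ramp_up_s + steady_s:
        return max_users
    return 0  # run finished

# samples at the chart's tick marks
print([active_users(t) for t in (0, 120, 240, 600)])  # [0, 750, 1500, 1500]
```

Only the steady-state window would normally be used when computing the metrics, so that warm-up effects do not skew the comparison.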
Vincenzo Ferme
31
Future Work
• Continue to Apply and Improve the Methodology
• Involve more Vendors and Researchers as part of the Benchmarking Effort
1st International Workshop on Performance and Conformance of Workflow Engines, September 5th, 2016
http://uniba-dsg.github.io/peace-ws/
Vincenzo Ferme
34
Highlights
Benchmarking Requirements (slide 5):
• Relevant • Representative • Portable • Scalable • Simple
• Repeatable • Vendor-neutral • Accessible • Efficient • Affordable
References: K. Huppler, The art of building a good benchmark, 2009; J. Gray, The Benchmark Handbook for Database and Transaction Systems, 1993; S. E. Sim, S. Easterbrook et al., Using benchmarking to advance research: A challenge to software engineering, 2003
Benchmarking Methodology (slide 14): the Vendor/BenchFlow choreography of Figure 4, from the Signed Agreement and the Containerized WfMS to the Verified Benchmark Results published to the Community
Advantages of Containers (slide 29): Portability, Repeatability, Accessibility, Efficiency; a common way to deploy systems provided by different vendors (Docker Compose, Docker Swarm); standard APIs to access Environment Metrics
Future Work (slide 31): continue to apply and improve the methodology; involve more Vendors and Researchers as part of the benchmarking effort
benchflow
http://benchflow.inf.usi.ch
A CONTAINER-CENTRIC METHODOLOGY FOR BENCHMARKING WORKFLOW MANAGEMENT SYSTEMS
Vincenzo Ferme (@VincenzoFerme), Ana Ivanchikj, Prof. Cesare Pautasso
Faculty of Informatics, University of Lugano (USI), Switzerland
Marigianna Skouradaki, Prof. Frank Leymann
Institute of Architecture of Application Systems, University of Stuttgart, Germany
BACKUP SLIDES
Vincenzo Ferme
37
Published Work
[BTW ’15]C. Pautasso, V. Ferme, D. Roller, F. Leymann, and M. Skouradaki. Towards workflow benchmarking: Open research challenges. In Proc. of the 16th conference on Database Systems for Business, Technology, and Web, BTW 2015, pages 331–350, 2015.
[SSP ’14]M. Skouradaki, D. H. Roller, F. Leymann, V. Ferme, and C. Pautasso. Technical open challenges on benchmarking workflow management systems. In Proc. of the 2014 Symposium on Software Performance, SSP 2014, pages 105–112, 2014.
[ICPE ’15] M. Skouradaki, D. H. Roller, F. Leymann, V. Ferme, and C. Pautasso. On the Road to Benchmarking BPMN 2.0 Workflow Engines. In Proc. of the 6th ACM/SPEC International Conference on Performance Engineering, ICPE ’15, pages 301–304, 2015.
Vincenzo Ferme
38
Published Work
[CLOSER ’15]M. Skouradaki, V. Ferme, F. Leymann, C. Pautasso, and D. H. Roller. “BPELanon”: Protect business processes on the cloud. In Proc. of the 5th International Conference on Cloud Computing and Service Science, CLOSER 2015. SciTePress, 2015.
[SOSE ’15]M. Skouradaki, K. Goerlach, M. Hahn, and F. Leymann. Application of Sub-Graph Isomorphism to Extract Reoccurring Structures from BPMN 2.0 Process Models. In Proc. of the 9th International IEEE Symposium on Service-Oriented System Engineering, SOSE 2015, 2015.
[BPM ’15]V. Ferme, A. Ivanchikj, C. Pautasso. A Framework for Benchmarking BPMN 2.0 Workflow Management Systems. In Proc. of the 13th International Conference on Business Process Management, BPM ’15, pages 251-259, 2015.
Vincenzo Ferme
39
Published Work
[BPMD ’15]A. Ivanchikj, V. Ferme, C. Pautasso. BPMeter: Web Service and Application for Static Analysis of BPMN 2.0 Collections. In Proc. of the 13th International Conference on Business Process Management [Demo], BPM ’15, pages 30-34, 2015.
[ICPE ’16]V. Ferme, and C. Pautasso. Integrating Faban with Docker for Performance Benchmarking. In Proc. of the 7th ACM/SPEC International Conference on Performance Engineering, ICPE ’16, 2016.
[CAiSE ’16] M. Skouradaki, V. Ferme, C. Pautasso, F. Leymann, A. van Hoorn. Micro-Benchmarking BPMN 2.0 Workflow Management Systems with Workflow Patterns. In Proc. of the 28th International Conference on Advanced Information Systems Engineering, CAiSE ’16, 2016.
Vincenzo Ferme
40
Docker Performance
[IBM ’14]W. Felter, A. Ferreira, R. Rajamony, and J. Rubio. An updated performance comparison of virtual machines and Linux containers. IBM Research Report, 2014.
“Our results show that containers result in equal or better performance than VMs in almost all cases.”
“Although containers themselves have almost no overhead, Docker is not without performance gotchas. Docker volumes have noticeably better performance than files stored in AUFS. Docker’s NAT also introduces overhead for workloads with high packet rates. These features represent a tradeoff between ease of management and performance and should be considered on a case-by-case basis.”
BenchFlow Configures Docker for Performance by Default