The support of Decision Modeling features and concepts in tooling
Jan Vanthienen [email protected]
Thibaut Bender
Faruk Hasić
Department of Decision Sciences
and Information Management
Leuven Institute for Research on
Information Systems (LIRIS)
17/09/2018 1
Presented by
Jan Vanthienen, KU Leuven, Faculty of Economics and Business, Business Information Systems Group

Research and teaching:
• Business rules, processes and information systems
• Decision models & tables
• Business intelligence & analytics
• Information & knowledge management
• IBM Faculty Award
• Belgian Francqui Chair 2009 at FUNDP
• Bpost bank Research Chair
• Colruyt-Symeta Research Chair Smart Data and Decisions
• IBM Fund Intelligent Business Decision Making
• Microsoft Research Chair on Intelligent Environments
• PricewaterhouseCoopers Chair on E-Business

Email: [email protected]
Outline
1. Research context, problem statement, analysis methodology
2. Tools and tests
3. Results
4. Limitations and conclusions
Research context
• Decision Model and Notation (DMN)
Problem Statement
• To what extent are important Decision Model and Notation (DMN) features and concepts supported by tooling?
• Not a tool comparison
• Not an alternative to the automated DMN Technology Compatibility Kit (TCK, https://github.com/agilepro/dmn-tck)
• Goal:
  • Which DMN elements (decision requirements diagrams, decision logic specifications, the expression language) are commonly present in current decision modeling/execution tools?
  • Which modeling features are considered important by tool vendors?
Analysis methodology
• 13 tool vendors agreed to participate in the research and provided access to their tool (and its documentation). We promised anonymity of the test results.
• We built and executed a number of decision models in each of the tools.
• We manually modeled in all 13 tools:
  • Decision Requirements Diagram (DRD) = graphical model
  • Decision tables
  • Friendly Enough Expression Language (FEEL) = data types and functions
13 out of 19 tools (Decision Management Community):
• Actico
• Alfresco Activiti
• Avola
• BizzDesign
• Blueriq
• Camunda
• DecisionsFirst Modeler
• Drools
• FICO
• FlexRule
• IBM
• IDIOM
• Onedecision
• OpenRules
• RapidGen
• Sapiens
• Signavio
• Sparkling Logic Pencil
• Trisotech
17/09/2018 7
Decision requirements (10/13 tools)
Figure 1: Decision Requirements Diagram support per element (scale 0–100%; average 75%). Elements: decision object, input object, knowledge source, business knowledge object, information requirement, knowledge requirement, authority requirement, (text annotation) + association.
• Results based on the tools supporting DRDs
• The link between objects & requirements is supported at 100%
• Support is relatively good (average 75%)

A tool falls into this category when it supports at least one feature of the DMN standard for modeling decisions.
Decision tables
Figure 2: Decision table hit policies: % support over all DMN tools. Single-hit policies: Unique, Any, First, Priority; multiple-hit policies: Output order, Rule order, Collect.

• No hit policy is uniformly supported at 100%
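The hit-policy semantics tested here can be sketched as follows; the rule representation and the `evaluate` function are illustrative, not taken from any particular tool or from the DMN reference implementation:

```python
# Minimal sketch of DMN decision-table hit policies. Rules are
# (condition, output) pairs, where condition is a predicate over the
# inputs dict — a hypothetical representation for this example.

def evaluate(rules, inputs, hit_policy="UNIQUE"):
    """Return the output(s) of a decision table for the given inputs."""
    hits = [out for cond, out in rules if cond(inputs)]
    if hit_policy == "UNIQUE":
        if len(hits) > 1:
            raise ValueError("UNIQUE violated: multiple rules matched")
        return hits[0] if hits else None
    if hit_policy == "FIRST":
        return hits[0] if hits else None
    if hit_policy == "ANY":
        if len(set(hits)) > 1:
            raise ValueError("ANY violated: matching rules disagree")
        return hits[0] if hits else None
    if hit_policy == "COLLECT":
        return hits  # multiple-hit: all matching outputs, as a list
    raise ValueError(f"unsupported hit policy: {hit_policy}")

rules = [
    (lambda i: i["age"] < 18, "reject"),
    (lambda i: i["age"] >= 18, "accept"),
]
evaluate(rules, {"age": 25})             # -> "accept"
evaluate(rules, {"age": 25}, "COLLECT")  # -> ["accept"]
```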
Decision table features
Support per decision table element (decision logic), with group averages:

Single hit (average 79,17%):
• Unique: 83,33%
• Any: 83,33%
• First: 75,00%
• Priority: 75,00%

Multiple hit (average 73,81%):
• Output order: 75,00%
• Rule order: 83,33%
• Collect list: 58,33%
• Collect + (sum): 75,00%
• Collect < (min): 75,00%
• Collect > (max): 75,00%
• Collect # (count): 75,00%
• Average: 50,00%

Table layout:
• Rules as rows: 100,00%
• Rules as columns: 0,00%
• Crosstab: 0,00%

Other:
• Multiple output: 83,33%
• Standard hit policy table notation: 41,67%
S-FEEL
Figure 3: S-FEEL % adoption per element (scale 0–100%): average, empty symbol (-), number, string, boolean, days & time duration, months & years duration, date.
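In decision tables, these S-FEEL elements appear as unary tests in the input entries. A minimal sketch of a small, assumed subset of that test syntax (the "-" symbol, comparisons, ranges, and numeric literals; the helper name is hypothetical):

```python
# Evaluate a small subset of S-FEEL unary tests against a numeric value.
# Supported forms: "-" (irrelevant), "<= 10", "[18..65)", "42".

def unary_test(expr, value):
    expr = expr.strip()
    if expr == "-":                       # empty symbol: always matches
        return True
    for op in ("<=", ">=", "<", ">"):     # comparison tests, e.g. "<= 10"
        if expr.startswith(op):
            bound = float(expr[len(op):])
            return {"<=": value <= bound, ">=": value >= bound,
                    "<": value < bound, ">": value > bound}[op]
    if expr[0] in "[(" and expr[-1] in "])":   # range, e.g. "[18..65)"
        lo, hi = (float(p) for p in expr[1:-1].split(".."))
        lo_ok = value >= lo if expr[0] == "[" else value > lo
        hi_ok = value <= hi if expr[-1] == "]" else value < hi
        return lo_ok and hi_ok
    return value == float(expr)           # numeric literal, e.g. "42"

unary_test("[18..65)", 18)   # True  (closed start includes 18)
unary_test("[18..65)", 65)   # False (open end excludes 65)
```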
FEEL syntax
Figure 4: FEEL elements and their % support by DMN applications (scale 0–100%). Elements: comparison of ranges; double-value expressions; =; FOR/IF; disjunction "or"; conjunction "and"; comparison; addition +; subtraction -; multiplication *; division /; exponentiation; arithmetic negation; =, <, >, <=, >=; open interval start; closed interval start; open interval end; closed interval end; boolean literal; if expression; "in" expression; date time literal.
FEEL functions
Figure 5: FEEL functions: average support per function category (scale 0–100%): Boolean functions, String functions, List functions, Numeric functions.

Functions tested:
• Boolean: not(negand)
• String: substring(string, start position, length?), string length(string), upper case(string), lower case(string), substring before(string, match), substring after(string, match), replace(input, pattern, replacement, flags?), contains(string, match), starts with(string, match), ends with(string, match), matches(input, pattern, flags?)
• List: list contains(list, element), count(list), min(list), max(list), sum(list), mean(list), and(list), or(list), sublist(list, start position, length?), append(list, item…), concatenate(list…), insert before(list, position, newItem), remove(list, position), reverse(list), index of(list, match), union(list…), distinct values(list), flatten(list)
• Numeric: decimal(n, scale), floor(n), ceiling(n)
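A few of the built-ins above, sketched in Python to illustrate their semantics; note that FEEL string positions are 1-based (and negative positions count from the end), unlike Python's 0-based slicing. These are sketches of the standard's behavior, not any vendor's implementation:

```python
# FEEL substring: 1-based start position; negative start counts from the end.
def substring(string, start_position, length=None):
    idx = (start_position - 1 if start_position > 0
           else len(string) + start_position)
    return string[idx:] if length is None else string[idx:idx + length]

# FEEL distinct values: remove duplicates, preserving first occurrence.
def distinct_values(lst):
    seen, out = set(), []
    for x in lst:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

# FEEL flatten: recursively flatten nested lists into a single list.
def flatten(lst):
    out = []
    for x in lst:
        out.extend(flatten(x) if isinstance(x, list) else [x])
    return out

substring("foobar", 3)          # -> "obar"
substring("foobar", -2)         # -> "ar"
distinct_values([1, 2, 2, 3])   # -> [1, 2, 3]
flatten([[1, 2], [3, [4]]])     # -> [1, 2, 3, 4]
```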
Clustering
Figure 6: Clustering (k=5): support of cluster 2 (modeling) & cluster 3 (decision table) elements (scale 0–100%).
• Cluster 0: most FEEL functions are covered.
• Cluster 1: covers extra functionalities that vendors implemented, and text annotations.
• Cluster 2: mostly focused on the modeling functions of DMN.
• Cluster 3: decision table functionalities, such as most hit policies, are covered in this cluster.
• Cluster 4: implements the data elements specified in the S-FEEL and FEEL standards for basic calculations or representations of intervals.
Overall results
Tool A Tool B Tool C Tool D Tool E Tool F Tool G Tool H Tool I Tool J Tool K Tool L Tool M Average
total 83,4% 84,0% 30,4% 91,0% 47,7% 48,9% 40,3% 92,3% 70,5% 31,0% 46,1% 77,0% 44,4% 60,5%
DRD 100,0% 93,2% 22,7% 100,0% 47,7% 0,0% 0,0% 100,0% 93,2% 70,5% 70,5% 93,2% 0,0% 60,8%
Decision Table 96,8% 77,8% 36,5% 96,8% 42,9% 96,8% 93,7% 96,8% 93,7% 0,0% 11,1% 81,0% 96,8% 70,8%
S-FEEL 72,8% 87,7% 72,8% 100,0% 74,1% 87,7% 87,7% 87,7% 0,0% 0,0% 74,1% 100,0% 87,7% 71,7%
FEEL 18,1% 69,5% 18,1% 51,2% 43,8% 68,9% 22,8% 65,5% 0,0% 0,0% 34,1% 14,4% 41,5% 34,4%
XML 100,0% 100,0% 0,0% 100,0% 0,0% 100,0% 0,0% 100,0% 0,0% 0,0% 0,0% 0,0% 100,0% 46,2%
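The per-tool totals above can be binned into the quartile ranges used on the final slide; a small check, with the values copied from the "total" row:

```python
# Bin the per-tool total scores (the "total" row for Tools A–M) into
# the quartile ranges [0-25%[, [25-50%[, [50-75%[, [75-100%[.
totals = [83.4, 84.0, 30.4, 91.0, 47.7, 48.9, 40.3,
          92.3, 70.5, 31.0, 46.1, 77.0, 44.4]

bins = {"[0-25%[": 0, "[25-50%[": 0, "[50-75%[": 0, "[75-100%[": 0}
for t in totals:
    if t < 25:
        bins["[0-25%["] += 1
    elif t < 50:
        bins["[25-50%["] += 1
    elif t < 75:
        bins["[50-75%["] += 1
    else:
        bins["[75-100%["] += 1

print(bins)  # {'[0-25%[': 0, '[25-50%[': 7, '[50-75%[': 1, '[75-100%[': 5}
```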
Limitations
• Only 13 tools
• Manual testing
Summary
Figure 9: DMN category support by tool (total, DRD, Decision Table, S-FEEL, FEEL, XML; scale 0–100%), for Tools A–M and the average.
Conclusions
• FEEL support is low
• There is still a gap between requirements modeling tools and decision table execution tools
• No hit policy has 100% support
• Vertical (rules-as-columns) and crosstab formats are not supported
• The hit policy is indicated in only 42% of the tools
• 5 tools do well
Final verdict
Distribution of tool scores:
• [0-25%[: 0 tools
• [25-50%[: 7 tools
• [50-75%[: 1 tool
• [75-100%[: 5 tools
Thank you