Data Mining: 6. Association Algorithms
Romi Satria Wahono
romi@romisatriawahono.net
http://romisatriawahono.net/dm
WA/SMS: +6281586220090
Romi Satria Wahono
• SD Sompok Semarang (1987)
• SMPN 8 Semarang (1990)
• SMA Taruna Nusantara Magelang (1993)
• B.Eng, M.Eng and Ph.D in Software Engineering from Saitama University, Japan (1994-2004) and Universiti Teknikal Malaysia Melaka (2014)
• Research Interests: Software Engineering, Machine Learning
• Founder and Coordinator of IlmuKomputer.Com
• Researcher at LIPI (2004-2007)
• Founder and CEO of PT Brainmatics Cipta Informatika
Course Outline
1. Introduction to Data Mining
2. The Data Mining Process
3. Data Preparation
4. Classification Algorithms
5. Clustering Algorithms
6. Association Algorithms
7. Estimation Algorithms
6. Association Algorithms
6.1 Basic Concepts
6.2 Frequent Itemset Mining Methods
6.3 Pattern Evaluation Methods
6.1 Basic Concepts
What Is Frequent Pattern Analysis?
• Frequent pattern: a pattern (a set of items, subsequences, substructures, etc.) that occurs frequently in a data set
• First proposed by Agrawal, Imielinski, and Swami [AIS93] in the context of frequent itemsets and association rule mining
• Motivation: finding inherent regularities in data
  • What products were often purchased together? Beer and diapers?!
  • What are the subsequent purchases after buying a PC?
  • What kinds of DNA are sensitive to this new drug?
  • Can we automatically classify web documents?
• Applications: basket data analysis, cross-marketing, catalog design, sale campaign analysis, Web log (click stream) analysis, and DNA sequence analysis
Why Is Frequent Pattern Mining Important?
• Frequent pattern: an intrinsic and important property of datasets
• Foundation for many essential data mining tasks:
  • Association, correlation, and causality analysis
  • Sequential and structural (e.g., sub-graph) patterns
  • Pattern analysis in spatiotemporal, multimedia, time-series, and stream data
  • Classification: discriminative frequent pattern analysis
  • Cluster analysis: frequent pattern-based clustering
  • Data warehousing: iceberg cube and cube-gradient
  • Semantic data compression: fascicles
• Broad applications
Basic Concepts: Frequent Patterns
• Itemset: a set of one or more items
• k-itemset: X = {x1, …, xk}
• (Absolute) support, or support count, of X: the frequency or number of occurrences of itemset X
• (Relative) support, s: the fraction of transactions that contain X (i.e., the probability that a transaction contains X)
• An itemset X is frequent if X's support is no less than a minsup threshold

Tid   Items bought
10    Beer, Nuts, Diaper
20    Beer, Coffee, Diaper
30    Beer, Diaper, Eggs
40    Nuts, Eggs, Milk
50    Nuts, Coffee, Diaper, Eggs, Milk

(Figure: Venn diagram of customers who buy beer, buy diapers, and buy both.)
Basic Concepts: Association Rules
• Find all the rules X → Y with minimum support and confidence
  • support, s: probability that a transaction contains X ∪ Y
  • confidence, c: conditional probability that a transaction having X also contains Y
• Let minsup = 50%, minconf = 50%

Tid   Items bought
10    Beer, Nuts, Diaper
20    Beer, Coffee, Diaper
30    Beer, Diaper, Eggs
40    Nuts, Eggs, Milk
50    Nuts, Coffee, Diaper, Eggs, Milk

• Frequent patterns: Beer:3, Nuts:3, Diaper:4, Eggs:3, {Beer, Diaper}:3
• Association rules (many more exist):
  • Beer → Diaper (60%, 100%)
  • Diaper → Beer (60%, 75%)
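As a worked check of the two rules above against the five transactions (σ denotes the absolute support count):

$$s(\text{Diaper} \Rightarrow \text{Beer}) = \frac{\sigma(\{\text{Beer}, \text{Diaper}\})}{|DB|} = \frac{3}{5} = 60\%, \qquad c(\text{Diaper} \Rightarrow \text{Beer}) = \frac{\sigma(\{\text{Beer}, \text{Diaper}\})}{\sigma(\{\text{Diaper}\})} = \frac{3}{4} = 75\%$$

For Beer ⇒ Diaper the support is the same 60%, while the confidence is 3/3 = 100%, matching the pair of rules listed above.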
Closed Patterns and Max-Patterns
• A long pattern contains a combinatorial number of sub-patterns; e.g., {a1, …, a100} contains $\binom{100}{1} + \binom{100}{2} + \cdots + \binom{100}{100} = 2^{100} - 1 \approx 1.27 \times 10^{30}$ sub-patterns!
• Solution: mine closed patterns and max-patterns instead
• An itemset X is closed if X is frequent and there exists no super-pattern Y ⊃ X with the same support as X (proposed by Pasquier, et al. @ ICDT'99)
• An itemset X is a max-pattern if X is frequent and there exists no frequent super-pattern Y ⊃ X (proposed by Bayardo @ SIGMOD'98)
• A closed pattern is a lossless compression of frequent patterns: it reduces the number of patterns and rules
Closed Patterns and Max-Patterns
• Exercise: DB = {<a1, …, a100>, <a1, …, a50>}, with min_sup = 1
• What is the set of closed itemsets?
  • <a1, …, a100>: 1
  • <a1, …, a50>: 2
• What is the set of max-patterns?
  • <a1, …, a100>: 1
• What is the set of all patterns?
  • Every nonempty subset of {a1, …, a100} is frequent: 2^100 − 1 patterns, far too many to enumerate!
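A brute-force check on a scaled-down analog of this exercise (5 items instead of 100, min_sup = 1; the code and names are mine, for illustration only):

from itertools import combinations

db = [frozenset(range(1, 6)), frozenset(range(1, 4))]   # <a1..a5>, <a1..a3>
items = sorted(set().union(*db))

def sup(x):
    # absolute support: number of transactions containing x
    return sum(x <= t for t in db)

freq = [frozenset(c) for k in range(1, len(items) + 1)
        for c in combinations(items, k) if sup(frozenset(c)) >= 1]
closed = [x for x in freq if not any(x < y and sup(y) == sup(x) for y in freq)]
maximal = [x for x in freq if not any(x < y for y in freq)]

print(len(freq))                     # 31 = 2^5 - 1 frequent patterns
print([sorted(x) for x in closed])   # [[1, 2, 3], [1, 2, 3, 4, 5]]
print([sorted(x) for x in maximal])  # [[1, 2, 3, 4, 5]]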
Computational Complexity of Frequent Itemset Mining
• How many itemsets may potentially be generated in the worst case?
  • The number of frequent itemsets to be generated is sensitive to the minsup threshold
  • When minsup is low, there may exist an exponentially large number of frequent itemsets
  • The worst case: M^N, where M is the number of distinct items and N is the maximum transaction length
• The worst-case complexity vs. the expected probability:
  • Example: suppose Walmart has 10^4 kinds of products
  • The chance of picking up one particular product: 10^-4
  • The chance of picking up a particular set of 10 products: ~10^-40
  • What is the chance that this particular set of 10 products is frequent 10^3 times in 10^9 transactions? The expected number of occurrences is only 10^9 × 10^-40 = 10^-31, so vanishingly small.
6.2 Frequent Itemset Mining Methods
Scalable Frequent Itemset Mining Methods
• Apriori: a candidate generation-and-test approach
• Improving the efficiency of Apriori
• FPGrowth: a frequent pattern-growth approach
• ECLAT: frequent pattern mining with vertical data format
The Downward Closure Property and Scalable Mining Methods
• The downward closure property of frequent patterns:
  • Any subset of a frequent itemset must be frequent
  • If {beer, diaper, nuts} is frequent, so is {beer, diaper}, i.e., every transaction having {beer, diaper, nuts} also contains {beer, diaper}
• Scalable mining methods: three major approaches
  • Apriori (Agrawal & Srikant @VLDB'94)
  • Frequent pattern growth (FPgrowth: Han, Pei & Yin @SIGMOD'00)
  • Vertical data format approach (Charm: Zaki & Hsiao @SDM'02)
Apriori: A Candidate Generation & Test Approach
• Apriori pruning principle: if there is any itemset which is infrequent, its superset should not be generated/tested! (Agrawal & Srikant @VLDB'94; Mannila, et al. @KDD'94)
• Method:
  1. Initially, scan the DB once to get the frequent 1-itemsets
  2. Generate length-(k+1) candidate itemsets from the length-k frequent itemsets
  3. Test the candidates against the DB
  4. Terminate when no frequent or candidate set can be generated
The Apriori Algorithm: An Example (min_sup = 2)

Database TDB:
Tid   Items
10    A, C, D
20    B, C, E
30    A, B, C, E
40    B, E

1st scan → C1: {A}:2, {B}:3, {C}:3, {D}:1, {E}:3
L1: {A}:2, {B}:3, {C}:3, {E}:3

C2 (generated from L1): {A,B}, {A,C}, {A,E}, {B,C}, {B,E}, {C,E}
2nd scan → C2 counts: {A,B}:1, {A,C}:2, {A,E}:1, {B,C}:2, {B,E}:3, {C,E}:2
L2: {A,C}:2, {B,C}:2, {B,E}:3, {C,E}:2

C3 (generated from L2): {B,C,E}
3rd scan → L3: {B,C,E}:2
The Apriori Algorithm (Pseudo-Code)

Ck: candidate itemsets of size k
Lk: frequent itemsets of size k

L1 = {frequent items};
for (k = 1; Lk != ∅; k++) do begin
    Ck+1 = candidates generated from Lk;
    for each transaction t in database do
        increment the count of all candidates in Ck+1 that are contained in t;
    Lk+1 = candidates in Ck+1 with min_support;
end
return ∪k Lk;
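A minimal runnable rendering of this pseudocode, as a sketch (Python; the function and variable names are mine, not the slides'):

from itertools import combinations
from collections import defaultdict

def apriori(transactions, min_support):
    """Return all frequent itemsets as {frozenset: absolute support}."""
    transactions = [frozenset(t) for t in transactions]

    def count(candidates):
        # one DB scan: count each candidate contained in each transaction
        counts = defaultdict(int)
        for t in transactions:
            for c in candidates:
                if c <= t:
                    counts[c] += 1
        return {c: n for c, n in counts.items() if n >= min_support}

    # L1: frequent 1-itemsets from the first scan
    Lk = count({frozenset([i]) for t in transactions for i in t})
    frequent, k = dict(Lk), 2
    while Lk:
        # self-join: merge pairs of (k-1)-itemsets that differ in one item
        candidates = {a | b for a in Lk for b in Lk if len(a | b) == k}
        # prune: every (k-1)-subset of a candidate must itself be frequent
        candidates = {c for c in candidates
                      if all(frozenset(s) in Lk for s in combinations(c, k - 1))}
        Lk = count(candidates)
        frequent.update(Lk)
        k += 1
    return frequent

# The TDB from the worked example above, min_sup = 2:
tdb = [{"A", "C", "D"}, {"B", "C", "E"}, {"A", "B", "C", "E"}, {"B", "E"}]
print(apriori(tdb, 2))  # includes frozenset({'B', 'C', 'E'}): 2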
Implementation of Apriori
• How to generate candidates?
  • Step 1: self-joining Lk
  • Step 2: pruning
• Example of candidate generation (a code check follows below):
  • L3 = {abc, abd, acd, ace, bcd}
  • Self-joining: L3*L3
    • abcd from abc and abd
    • acde from acd and ace
  • Pruning: acde is removed because ade is not in L3
  • C4 = {abcd}
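A quick check of this example, using the same join-and-prune steps as the apriori() sketch above (itemsets written as frozensets of characters; the looser join used here generates a few extras, e.g. abce, which pruning removes):

from itertools import combinations

L3 = [frozenset(s) for s in ("abc", "abd", "acd", "ace", "bcd")]
cand = {a | b for a in L3 for b in L3 if len(a | b) == 4}      # self-join
C4 = [c for c in cand
      if all(frozenset(s) in L3 for s in combinations(c, 3))]  # prune
print([''.join(sorted(c)) for c in C4])  # ['abcd']; acde is pruned since ade is not in L3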
How to Count Supports of Candidates?
• Why is counting supports of candidates a problem?
  • The total number of candidates can be very large
  • One transaction may contain many candidates
• Method (a simplified sketch follows below):
  • Candidate itemsets are stored in a hash tree
  • A leaf node of the hash tree contains a list of itemsets and counts
  • An interior node contains a hash table
  • Subset function: finds all the candidates contained in a transaction
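The hash tree itself is a refinement; the core idea of the subset function can be sketched with a plain hash set standing in for the tree (my simplification, not the slides' data structure):

from itertools import combinations
from collections import defaultdict

def count_supports(transactions, candidates, k):
    """Count supports of candidate k-itemsets (frozensets of size k)."""
    cand = set(candidates)
    counts = defaultdict(int)
    for t in transactions:
        # "subset function": enumerate the k-subsets of t and look each
        # one up, instead of testing every candidate against every t
        for sub in combinations(sorted(t), k):
            fs = frozenset(sub)
            if fs in cand:
                counts[fs] += 1
    return counts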
Counting Supports of Candidates Using Hash Tree

(Figure: a hash tree over candidate 3-itemsets, with the hash function sending items 1,4,7 / 2,5,8 / 3,6,9 to the three branches; the subset function decomposes transaction {1, 2, 3, 5, 6} into 1 + {2 3 5 6}, 1 2 + {3 5 6}, and 1 3 + {5 6}, visiting only the leaves that could contain its subsets.)
Candidate Generation: An SQL Implementation
• Suppose the items in Lk-1 are listed in an order
• Step 1: self-joining Lk-1

insert into Ck
select p.item1, p.item2, …, p.itemk-1, q.itemk-1
from Lk-1 p, Lk-1 q
where p.item1 = q.item1 and … and p.itemk-2 = q.itemk-2
  and p.itemk-1 < q.itemk-1

• Step 2: pruning

forall itemsets c in Ck do
    forall (k-1)-subsets s of c do
        if (s is not in Lk-1) then delete c from Ck

• Use object-relational extensions like UDFs, BLOBs, and table functions for an efficient implementation (S. Sarawagi, S. Thomas, and R. Agrawal, "Integrating association rule mining with relational database systems: Alternatives and implications", SIGMOD'98)
Pattern-Growth Approach: Mining Frequent Patterns Without Candidate Generation
• Bottlenecks of the Apriori approach:
  • Breadth-first (i.e., level-wise) search
  • Candidate generation and test: often generates a huge number of candidates
• The FPGrowth approach (J. Han, J. Pei, and Y. Yin, SIGMOD'00):
  • Depth-first search
  • Avoids explicit candidate generation
• Major philosophy: grow long patterns from short ones using local frequent items only
  • "abc" is a frequent pattern
  • Get all transactions having "abc", i.e., project the DB on abc: DB|abc
  • "d" is a local frequent item in DB|abc → abcd is a frequent pattern
Construct FP-tree from a Transaction Database (min_support = 3)

TID   Items bought                (Ordered) frequent items
100   {f, a, c, d, g, i, m, p}    {f, c, a, m, p}
200   {a, b, c, f, l, m, o}       {f, c, a, b, m}
300   {b, f, h, j, o, w}          {f, b}
400   {b, c, k, s, p}             {c, b, p}
500   {a, f, c, e, l, p, m, n}    {f, c, a, m, p}

1. Scan the DB once, find the frequent 1-itemsets (single-item patterns)
2. Sort frequent items in frequency-descending order: the f-list
3. Scan the DB again and construct the FP-tree

F-list = f-c-a-b-m-p

Header table (item : frequency, each with a head of node-links): f:4, c:4, a:3, b:3, m:3, p:3

(Figure: the resulting FP-tree. The root {} has children f:4 and c:1; under f:4, the path c:3 → a:3 carries m:2 → p:2 and b:1 → m:1, and f:4 also has a child b:1; the root's c:1 branch carries b:1 → p:1. A construction sketch in code follows below.)
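A compact construction sketch following the three steps above (Python; the class and field names are mine):

from collections import defaultdict

class Node:
    def __init__(self, item, parent):
        self.item, self.parent = item, parent
        self.count = 0
        self.children = {}       # item -> child Node

def build_fp_tree(transactions, min_support):
    # Step 1: one scan for frequent single items
    freq = defaultdict(int)
    for t in transactions:
        for i in t:
            freq[i] += 1
    freq = {i: c for i, c in freq.items() if c >= min_support}
    # Step 2: the f-list, items in descending frequency (ties keep first-seen order)
    flist = sorted(freq, key=lambda i: -freq[i])
    rank = {i: r for r, i in enumerate(flist)}
    # Step 3: second scan, insert each transaction in f-list order
    root = Node(None, None)
    header = defaultdict(list)   # item -> node-links
    for t in transactions:
        node = root
        for i in sorted((x for x in t if x in rank), key=rank.get):
            if i not in node.children:
                node.children[i] = Node(i, node)
                header[i].append(node.children[i])
            node = node.children[i]
            node.count += 1
    return root, header, flist

tdb = ["facdgimp", "abcflmo", "bfhjow", "bcksp", "afcelpmn"]
root, header, flist = build_fp_tree(tdb, 3)
print(flist)  # ['f', 'c', 'a', 'm', 'p', 'b']: the slide's counts; a, b, m, p all tie at 3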
Partition Patterns and Databases
• Frequent patterns can be partitioned into subsets according to the f-list
  • F-list = f-c-a-b-m-p
  • Patterns containing p
  • Patterns having m but no p
  • …
  • Patterns having c but none of a, b, m, p
  • Pattern f
• This partitioning is complete and non-redundant
Find Patterns Having p from p's Conditional Database
• Start at the frequent-item header table of the FP-tree
• Traverse the FP-tree by following the node-links of each frequent item p
• Accumulate all of the transformed prefix paths of item p to form p's conditional pattern base

Conditional pattern bases:
item   conditional pattern base
c      f:3
a      fc:3
b      fca:1, f:1, c:1
m      fca:2, fcab:1
p      fcam:2, cb:1

(The FP-tree and header table are those of the construction slide above.)
From Conditional Pattern Bases to Conditional FP-trees
• For each pattern base:
  • Accumulate the count for each item in the base
  • Construct the FP-tree for the frequent items of the pattern base

m's conditional pattern base: fca:2, fcab:1
m's conditional FP-tree: {} → f:3 → c:3 → a:3 (b is dropped: its local count of 1 is below min_support = 3)
All frequent patterns relating to m: m, fm, cm, am, fcm, fam, cam, fcam
Recursion: Mining Each Conditional FP-tree
• m-conditional FP-tree: {} → f:3 → c:3 → a:3
• Conditional pattern base of "am": (fc:3); am-conditional FP-tree: {} → f:3 → c:3
• Conditional pattern base of "cm": (f:3); cm-conditional FP-tree: {} → f:3
• Conditional pattern base of "cam": (f:3); cam-conditional FP-tree: {} → f:3
A Special Case: Single Prefix Path in FP-tree
• Suppose a (conditional) FP-tree T has a shared single prefix path P
• Mining can be decomposed into two parts:
  • Reduction of the single prefix path into one node
  • Concatenation of the mining results of the two parts

(Figure: an FP-tree whose single prefix path {} → a1:n1 → a2:n2 → a3:n3 branches into b1:m1, C1:k1, C2:k2, C3:k3; it is decomposed as the prefix-path part r1 = {} → a1:n1 → a2:n2 → a3:n3 plus the multi-branch part, written T = r1 + the branching subtree.)
Benefits of the FP-tree Structure
• Completeness
  • Preserves complete information for frequent pattern mining
  • Never breaks a long pattern of any transaction
• Compactness
  • Reduces irrelevant information: infrequent items are gone
  • Items are in frequency-descending order: the more frequently occurring, the more likely to be shared
  • Never larger than the original database (not counting node-links and the count fields)
The Frequent Pattern Growth Mining Method
• Idea: frequent pattern growth
  • Recursively grow frequent patterns by pattern and database partition
• Method (a recursive sketch in code follows below):
  1. For each frequent item, construct its conditional pattern base, and then its conditional FP-tree
  2. Repeat the process on each newly created conditional FP-tree
  3. Until the resulting FP-tree is empty, or it contains only one path: a single path will generate all the combinations of its sub-paths, each of which is a frequent pattern
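Continuing from the build_fp_tree()/Node sketch on the construction slide (assumed in scope here; again my naming, and a plain recursion without the single-path shortcut), the method can be rendered as:

def fp_growth(transactions, min_support, suffix=frozenset()):
    root, header, flist = build_fp_tree(transactions, min_support)
    patterns = {}
    for item in reversed(flist):                 # least frequent first
        new_suffix = suffix | {item}
        patterns[new_suffix] = sum(n.count for n in header[item])
        # conditional pattern base: the prefix path of every node of item
        cond_db = []
        for node in header[item]:
            path, p = [], node.parent
            while p.item is not None:
                path.append(p.item)
                p = p.parent
            cond_db.extend([path] * node.count)
        if cond_db:
            # mine the conditional pattern base as its own small DB
            patterns.update(fp_growth(cond_db, min_support, new_suffix))
    return patterns

pats = fp_growth(["facdgimp", "abcflmo", "bfhjow", "bcksp", "afcelpmn"], 3)
print(sorted(''.join(sorted(p)) for p in pats if 'm' in p))
# ['am', 'cam', 'cm', 'fam', 'fcam', 'fcm', 'fm', 'm'], as on the conditional FP-tree slides

Each conditional pattern base is mined as its own small transaction DB of prefix paths, which is exactly the pattern-and-database partitioning described above.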
Scaling FP-growth by Database Projection
• What if the FP-tree cannot fit in memory? DB projection:
  • First partition the database into a set of projected DBs
  • Then construct and mine an FP-tree for each projected DB
• Parallel projection vs. partition projection:
  • Parallel projection: project the DB in parallel for each frequent item; space-costly, but all the partitions can be processed in parallel
  • Partition projection: partition the DB based on the ordered frequent items, passing the unprocessed parts on to the subsequent partitions
Partition-Based Projection
• Parallel projection needs a lot of disk space; partition projection saves it

Tran. DB:    fcamp, fcabm, fb, cbp, fcamp
p-proj DB:   fcam, cb, fcam
m-proj DB:   fcab, fca, fca
b-proj DB:   f, cb, …
a-proj DB:   fc, …
c-proj DB:   f, …
f-proj DB:   …
am-proj DB:  fc, fc, fc
cm-proj DB:  f, f, f
…
FP-Growth vs. Apriori: Scalability with the Support Threshold

(Figure: run time (sec.) vs. support threshold (%) on data set T25I20D10K; as the threshold decreases toward 0%, the D1 Apriori run time climbs steeply while the D1 FP-growth run time stays low.)
FP-Growth vs. Tree-Projection: Scalability with the Support Threshold

(Figure: run time (sec.) vs. support threshold (%) on data set T25I20D100K; D2 FP-growth outperforms D2 TreeProjection as the threshold decreases.)
Advantages of the Pattern Growth Approach
• Divide-and-conquer:
  • Decompose both the mining task and the DB according to the frequent patterns obtained so far
  • Leads to focused search of smaller databases
• Other factors:
  • No candidate generation, no candidate test
  • Compressed database: the FP-tree structure
  • No repeated scan of the entire database
  • Basic operations: counting local frequent items and building sub-FP-trees; no pattern search and matching
• A good open-source implementation and refinement of FPGrowth: FPGrowth+ (Grahne and J. Zhu, FIMI'03)
Further Improvements of Mining Methods
• AFOPT (Liu, et al. @KDD'03)
  • A "push-right" method for mining condensed frequent pattern (CFP) trees
• Carpenter (Pan, et al. @KDD'03)
  • Mines data sets with few rows but numerous columns
  • Constructs a row-enumeration tree for efficient mining
• FPgrowth+ (Grahne and Zhu, FIMI'03)
  • "Efficiently Using Prefix-Trees in Mining Frequent Itemsets", Proc. ICDM'03 Int. Workshop on Frequent Itemset Mining Implementations (FIMI'03), Melbourne, FL, Nov. 2003
• TD-Close (Liu, et al. @SDM'06)
Extension of Pattern Growth Mining Methodology
• Mining closed frequent itemsets and max-patterns: CLOSET (DMKD'00), FPclose, and FPMax (Grahne & Zhu @FIMI'03)
• Mining sequential patterns: PrefixSpan (ICDE'01), CloSpan (SDM'03), BIDE (ICDE'04)
• Mining graph patterns: gSpan (ICDM'02), CloseGraph (KDD'03)
• Constraint-based mining of frequent patterns: convertible constraints (ICDE'01), gPrune (PAKDD'03)
• Computing iceberg data cubes with complex measures: H-tree, H-cubing, and Star-cubing (SIGMOD'01, VLDB'03)
• Pattern-growth-based clustering: MaPle (Pei, et al. @ICDM'03)
• Pattern-growth-based classification: mining frequent and discriminative patterns (Cheng, et al. @ICDE'07)
Steps of the FP-Growth Algorithm
1. Prepare the dataset
2. Find the frequent itemsets (the items that occur often)
3. Sort the dataset by priority
4. Build the FP-tree from the sorted items
5. Generate the conditional pattern bases
6. Generate the conditional FP-trees
7. Generate the frequent patterns
8. Compute the support
9. Compute the confidence
1. Prepare the Dataset
2. Find the Frequent Itemsets
3. Sort the Dataset by Priority
4. Build the FP-Tree
5. Generate the Conditional Pattern Bases
6. Generate the Conditional FP-Trees
7. Generate the Frequent Patterns
Frequent 2-Itemsets
8. Compute the Support of the 2-Itemsets
9. Compute the Confidence of the 2-Itemsets
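Steps 8 and 9 reduce to simple ratios once the frequent itemsets and their counts are known. A sketch (the frequent dict mimics what the apriori() sketch earlier in this section returns; the counts in the usage line come from the Apriori worked example, since the slides' own dataset is not reproduced in this text):

def rules_from_2_itemsets(frequent, n_transactions, min_conf=0.5):
    """frequent: {frozenset: absolute support count}, as from apriori()."""
    rules = []
    for itemset, cnt in frequent.items():
        if len(itemset) != 2:
            continue
        for a in itemset:
            (b,) = itemset - {a}
            support = cnt / n_transactions                 # step 8
            confidence = cnt / frequent[frozenset([a])]    # step 9
            if confidence >= min_conf:
                rules.append((a, b, support, confidence))
    return rules

# counts taken from the Apriori worked example (4 transactions):
freq = {frozenset("B"): 3, frozenset("E"): 3, frozenset({"B", "E"}): 3}
print(rules_from_2_itemsets(freq, 4))
# B -> E and E -> B, each with support 0.75 and confidence 1.0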
6.3 Pattern Evaluation Methods
Interestingness Measure: Correlations (Lift)
• "play basketball → eat cereal [40%, 66.7%]" is misleading: the overall share of students eating cereal is 75%, higher than 66.7%
• "play basketball → not eat cereal [20%, 33.3%]" is more accurate, although with lower support and confidence
• Measure of dependent/correlated events: lift

$$\mathrm{lift}(A,B) = \frac{P(A \cup B)}{P(A)\,P(B)}$$

            Basketball   Not basketball   Sum (row)
Cereal      2000         1750             3750
Not cereal  1000         250              1250
Sum (col.)  3000         2000             5000

$$\mathrm{lift}(B,C) = \frac{2000/5000}{(3000/5000) \times (3750/5000)} = 0.89$$

$$\mathrm{lift}(B,\lnot C) = \frac{1000/5000}{(3000/5000) \times (1250/5000)} = 1.33$$
Are Lift and χ² Good Measures of Correlation?
• "Buy walnuts → buy milk [1%, 80%]" is misleading if 85% of customers buy milk
• Support and confidence are not good indicators of correlation
• Over 20 interestingness measures have been proposed (see Tan, Kumar, Srivastava @KDD'02)
• Which are the good ones?
Null-Invariant Measures
Comparison of Interestingness Measures
• Null-(transaction) invariance is crucial for correlation analysis
• Lift and χ² are not null-invariant
• 5 null-invariant measures, among them the Kulczynski measure (1927)
• Subtle: the measures disagree on some datasets

            Milk    No Milk   Sum (row)
Coffee      m, c    ~m, c     c
No Coffee   m, ~c   ~m, ~c    ~c
Sum (col.)  m       ~m

• Null-transactions w.r.t. m and c are those containing neither; null-invariant measures are unaffected by how many of them there are
Analysis of DBLP Coauthor Relationships
• Recent DB conferences, removing balanced associations, low support, etc.
• Advisor-advisee relations: Kulc is high, coherence is low, cosine is in the middle
• Tianyi Wu, Yuguo Chen and Jiawei Han, "Association Mining in Large Databases: A Re-Examination of Its Measures", Proc. 2007 Int. Conf. Principles and Practice of Knowledge Discovery in Databases (PKDD'07), Sept. 2007
Which Null-Invariant Measure Is Better?
• IR (Imbalance Ratio): measures the imbalance of two itemsets A and B in rule implications
• Kulczynski and the Imbalance Ratio (IR) together present a clear picture for all three datasets D4 through D6:
  • D4 is balanced and neutral
  • D5 is imbalanced and neutral
  • D6 is very imbalanced and neutral
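For reference, the two measures as defined in Han & Kamber, 3rd ed. (reference 1 below); a Kulc value near 0.5 is neutral, and IR = 0 means the rule pair A → B and B → A is perfectly balanced:

$$\mathrm{Kulc}(A,B) = \frac{1}{2}\Big(P(A \mid B) + P(B \mid A)\Big), \qquad IR(A,B) = \frac{|\,s(A) - s(B)\,|}{s(A) + s(B) - s(A \cup B)}$$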
References
1. Jiawei Han and Micheline Kamber, Data Mining: Concepts and Techniques, Third Edition, Elsevier, 2012
2. Ian H. Witten, Frank Eibe, Mark A. Hall, Data Mining: Practical Machine Learning Tools and Techniques, 3rd Edition, Elsevier, 2011
3. Markus Hofmann and Ralf Klinkenberg, RapidMiner: Data Mining Use Cases and Business Analytics Applications, CRC Press Taylor & Francis Group, 2014
4. Daniel T. Larose, Discovering Knowledge in Data: An Introduction to Data Mining, John Wiley & Sons, 2005
5. Ethem Alpaydin, Introduction to Machine Learning, 3rd ed., MIT Press, 2014
6. Florin Gorunescu, Data Mining: Concepts, Models and Techniques, Springer, 2011
7. Oded Maimon and Lior Rokach, Data Mining and Knowledge Discovery Handbook, Second Edition, Springer, 2010
8. Warren Liao and Evangelos Triantaphyllou (eds.), Recent Advances in Data Mining of Enterprise Data: Algorithms and Applications, World Scientific, 2007