1 Advanced Topics in Computer Systems: Machine Learning and Data Mining Systems Winter 2007 Stan Matwin Professor School of Information Technology and Engineering/ École d’ingénierie et de technologie de l’information University of Ottawa Canada


Page 1

Advanced Topics in Computer Systems: Machine Learning and

Data Mining Systems

Winter 2007

Stan Matwin
Professor

School of Information Technology and Engineering/ École d’ingénierie et de technologie de l’information

University of Ottawa

Canada

Page 2

Goals of this course

• Dual seminar/tutorial structure

• The tutorial part will teach basic concepts of Machine Learning (ML) and Data Mining (DM)

• The seminar part will – introduce interesting areas of current and future research

– Introduce successful applications

• Preparation to enable advanced self-study on ML/DM

Page 3

Course outline

1. Machine Learning/Data Mining: basic terminology.

2. Symbolic learning: Decision Trees;

3. Basic Performance Evaluation

4. Introduction to the WEKA system

5. Probabilistic learning: Bayesian learning.

6. Text classification

Page 4

7. Kernel-based methods: Support Vector Machines

8. Ensemble-based methods: boosting

9. Advanced Performance Evaluation: ROC curves

10. Applications in bioinformatics

11. Data mining concepts and techniques: Association Rules

12. Feature selection and discretization

Page 5

Machine Learning / Data Mining: basic terminology

• Machine Learning:
– given a certain task, and a data set that constitutes the task,
– ML provides algorithms that resolve the task based on the data, and the solution improves with time

• Examples:
– predicting lottery numbers next Saturday

– detecting oil spills on sea surface

– assigning documents to a folder

– identifying people likely to want a new credit card (cross selling)

Page 6

• Data Mining: extracting regularities from a VERY LARGE dataset/database as part of a business/application cycle

• examples: – cell phone fraud detection

– customer churn

– direct mail targeting/ cross sell

– prediction of aircraft component failures

Page 7

Basic ML tasks

• Supervised learning:
– classification/concept learning
– estimation: essentially, extrapolation

• Unsupervised learning:
– clustering: finding groups of “similar” objects
– associations: in a database, finding that some values of attributes go together with some others

Page 8

Concept learning (also known as classification): a definition

• the concept learning problem:

• given
– a set E = {e1, e2, …, en} of training instances of concepts, each labeled with the name of the concept C1, …, Ck to which it belongs

• determine
– definitions of each of C1, …, Ck which correctly cover E. Each definition is a concept description

Page 9

Dimensions of concept learning

• representation
– data
» symbolic
» numeric
– concept description
» attribute-value (propositional logic)
» relational (first-order logic)

• language of examples and hypotheses
– attribute-value (AV) = propositional representation
– relational (ILP) = first-order logic representation

• method of learning
– top-down
– bottom-up (covering)
– different search algorithms

Page 10

2. Decision Trees

A decision tree as a concept representation:

[Figure: a decision tree for the labour-negotiations data, testing wage increase in 1st year, working hours, statutory holidays and contribution to health plan, with leaves labelled good/bad]

Page 11

building a univariate (single attribute is tested) decision tree from a set T of training cases for a concept C with classes C1,…Ck

Consider three possibilities:

• T contains 1 or more cases all belonging to the same class Cj. The decision tree for T is a leaf identifying class Cj

• T contains no cases. The tree is a leaf, but the label is assigned heuristically, e.g. the majority class in the parent of this node

Page 12

• T contains cases from different classes. T is divided into subsets that seem to lead towards single-class collections of cases. A test t based on a single attribute is chosen, and it partitions T into subsets {T1, …, Tn}. The decision tree consists of a decision node identifying the tested attribute, and one branch for each outcome of the test. Then the same process is applied recursively to each Ti

Page 13

Choosing the test

• why not explore all possible trees and choose the simplest (Occam’s razor)? But this is an NP complete problem. E.g. in the ‘union’ example there are millions of trees consistent with the data

• notation: S: set of the training examples; freq(Ci, S) = number of examples in S that belong to Ci;

• information measure (in bits) of a message is - log2 of the probability of that message

• idea: to maximize the difference between the info needed to identify a class of an example in T, and the same info after T has been partitioned in accordance with a test X

Page 14

selecting 1 case and announcing its class has info meas. - log2(freq(Ci, S)/|S|) bits

to find information pertaining to class membership in all classes:

info(S) = - Σi (freq(Ci, S)/|S|) * log2(freq(Ci, S)/|S|)

after partitioning according to outcome of test X:

infoX(T) = Σi (|Ti|/|T|) * info(Ti)

gain(X) = info(T) - infoX(T) measures the gain from partitioning T according to X

We select X to maximize this gain

Page 15

Day  Outlook  Temp  Humidity  Wind    Play?
 1   Sunny    Hot   High      Weak    No
 2   Sunny    Hot   High      Strong  No
 3   Ovcst    Hot   High      Weak    Yes
 4   Rain     Mild  High      Weak    Yes
 5   Rain     Cool  Normal    Weak    Yes
 6   Rain     Cool  Normal    Strong  No
 7   Ovcst    Cool  Normal    Strong  Yes
 8   Sunny    Mild  High      Weak    No
 9   Sunny    Cool  Normal    Weak    Yes
10   Rain     Mild  Normal    Weak    Yes
11   Sunny    Mild  Normal    Strong  Yes
12   Ovcst    Mild  High      Strong  Yes
13   Ovcst    Hot   Normal    Weak    Yes
14   Rain     Mild  High      Strong  No

Data for learning the weather (play/don’t play) concept (Witten p. 10)

Info(S) = 0.940

Page 16

Selecting the attribute

• Gain(S, Outlook) = 0.246

• Gain(S, Humidity) = 0.151

• Gain(S, Wind) = 0.048

• Gain(S, Temp) = 0.029

• Choose Outlook as the top test
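The gains above can be recomputed directly from the weather table. Below is a minimal Python sketch (not course code) of the info/gain formulas from the previous slides applied to that data; the printed values match the slide up to rounding.

```python
from collections import Counter
from math import log2

# (Outlook, Temp, Humidity, Wind, Play?) rows of the weather table above
data = [
    ("Sunny","Hot","High","Weak","No"),    ("Sunny","Hot","High","Strong","No"),
    ("Ovcst","Hot","High","Weak","Yes"),   ("Rain","Mild","High","Weak","Yes"),
    ("Rain","Cool","Normal","Weak","Yes"), ("Rain","Cool","Normal","Strong","No"),
    ("Ovcst","Cool","Normal","Strong","Yes"), ("Sunny","Mild","High","Weak","No"),
    ("Sunny","Cool","Normal","Weak","Yes"),("Rain","Mild","Normal","Weak","Yes"),
    ("Sunny","Mild","Normal","Strong","Yes"), ("Ovcst","Mild","High","Strong","Yes"),
    ("Ovcst","Hot","Normal","Weak","Yes"), ("Rain","Mild","High","Strong","No"),
]

def info(examples):
    """info(S) = - sum_i (freq(Ci,S)/|S|) * log2(freq(Ci,S)/|S|)"""
    n = len(examples)
    return -sum((c / n) * log2(c / n) for c in Counter(e[-1] for e in examples).values())

def gain(examples, a):
    """gain(X) = info(T) - sum_i (|Ti|/|T|) * info(Ti), partitioning on attribute index a."""
    n = len(examples)
    info_x = sum((cnt / n) * info([e for e in examples if e[a] == v])
                 for v, cnt in Counter(e[a] for e in examples).items())
    return info(examples) - info_x

print(f"Info(S) = {info(data):.3f}")   # 0.940
for name, idx in [("Outlook", 0), ("Temp", 1), ("Humidity", 2), ("Wind", 3)]:
    print(f"Gain(S, {name}) = {gain(data, idx):.3f}")   # ~0.247, 0.029, 0.152, 0.048
```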

Page 17

How does info gain work?

Page 18

Gain ratio

• info gain favours tests with many outcomes (patient id example)

• consider split info(X) = - Σi (|Ti|/|T|) * log2(|Ti|/|T|)

measures potential info. generated by dividing T into n classes (without considering the class info)

gain ratio(X) = gain(X)/split info(X)

shows the proportion of info generated by the split that is useful for classification: in the example (Witten p. 96), log(k)/log(n)

maximize gain ratio
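As a quick illustration of the "patient id" effect, here is a minimal sketch (my own toy example, not from the slides): an attribute with a distinct value for every one of the 14 weather examples gets the maximal gain of 0.940, but split info penalises it heavily.

```python
from math import log2

def split_info(sizes):
    """split info(X) = - sum_i (|Ti|/|T|) * log2(|Ti|/|T|)"""
    n = sum(sizes)
    return -sum((s / n) * log2(s / n) for s in sizes)

gain_id = 0.940                 # every singleton subset is pure, so gain = info(S)
si_id = split_info([1] * 14)    # log2(14) ~ 3.81
print(gain_id / si_id)          # gain ratio ~ 0.247, far below the raw gain
```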

Page 19

Partition of cases and corresponding tree

Page 20

In fact, learning DTs with the gain ratio heuristic is a search:

Page 21

continuous attrs

• a simple trick: sort the examples on the values of the attribute considered; choose the midpoint between each two consecutive values. For m values, there are m-1 possible splits, but they can be examined linearly

• cost?
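A minimal sketch of the trick above (the attribute values are made up for illustration): sort the values, take midpoints between consecutive distinct values as candidate thresholds, and score each candidate with the same gain criterion as before.

```python
def candidate_thresholds(values):
    vs = sorted(set(values))
    # m distinct values -> at most m-1 candidate splits, produced in one linear pass
    return [(a + b) / 2 for a, b in zip(vs, vs[1:])]

print(candidate_thresholds([64, 65, 68, 69, 70, 71, 72, 75, 80, 81, 83, 85]))
```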

Page 22

From trees to rules:

traversing a decision tree from root to leaf gives a rule, with the path conditions as the antecedent and the leaf as the class

rules can then be simplified by removing conditions that do not contribute to discriminate the nominated class from other classes

rulesets for a whole class are simplified by removing rules that do not contribute to the accuracy of the whole set

Page 23

Geometric interpretation of decision trees: axis-parallel area

[Figure: the (a, b) plane split into axis-parallel regions by the tests b > b1, a > a1, a < a2, with thresholds a1, a2, b1 and y/n branch labels]

Page 24

[Figure: the same tree with its leaves numbered (1)-(4)]

Decision rules can be obtained from decision trees:

(1) if b > b1 then class is -
(2) if b <= b1 and a > a1 then class is +
(3) if b <= b1 and a < a2 then class is +
(4) if b <= b1 and a2 <= a <= a1 then class is -

notice the inference involved in rule (3)

Page 25

1R

For each attribute,
  For each value of that attribute, make a rule as follows:
    count how often each class appears
    find the most frequent class
    make the rule assign that class to this attribute-value
  Calculate the error rate of the rules
Choose the rules with the smallest error rate
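A minimal Python sketch of the 1R procedure above (not the course's own code); applied to the weather table from the decision-tree section, Outlook and Humidity tie with 4/14 errors.

```python
from collections import Counter, defaultdict

def one_r(examples, attribute):
    """One rule per value of `attribute` (predict its majority class);
    returns the rule table and the number of errors it makes on `examples`."""
    by_value = defaultdict(Counter)
    for e in examples:
        by_value[e[attribute]][e[-1]] += 1
    rules = {v: c.most_common(1)[0][0] for v, c in by_value.items()}
    errors = sum(sum(c.values()) - max(c.values()) for c in by_value.values())
    return rules, errors

# with `data` as in the earlier decision-tree sketch:
# best = min(range(4), key=lambda a: one_r(data, a)[1])
```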

Page 26

Page 27

Lots of datasets can be obtained from ftp ics.uci.edu, cd pub/machine-learning-databases; contents are described in the file README in the dir machine-learning-databases at Irvine

Page 28

Empirical evaluation of accuracy in classification tasks

• the usual approach: – partition the set E of all labeled examples (examples

with their classification labels) into a training set and a testing set

– use the training set for learning, obtain a hypothesis H, set acc := 0

– for each element t of the testing set,

apply H on t; if H(t) = label(t) then acc := acc+1

– acc := acc/|testing set|

Page 29

Testing - cont’d

• Given a dataset, how do we split it between the training set and the test set?

• cross-validation (n-fold):
– partition E into n groups

– choose n-1 groups from n, perform learning on their union

– repeat the choice n times

– average the n results

– usually, n = 3, 5, 10

• another approach - learn on all but one example, test that example.

“Leave One Out”
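A minimal sketch of n-fold cross-validation; `learn` and `predict` are placeholder callables (not a specific course API), and examples are assumed to carry their label in the last position, as in the earlier sketches.

```python
def cross_validation(examples, n, learn, predict):
    folds = [examples[i::n] for i in range(n)]                # partition E into n groups
    accs = []
    for i in range(n):
        test = folds[i]
        train = [e for j, f in enumerate(folds) if j != i for e in f]
        h = learn(train)                                      # learn on the union of n-1 groups
        accs.append(sum(predict(h, e) == e[-1] for e in test) / len(test))
    return sum(accs) / n                                      # average of the n results
```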

Page 30

Confusion matrix

                      classifier-determined    classifier-determined
                      positive label           negative label
true positive label          a                        b
true negative label          c                        d

Accuracy = (a+d)/(a+b+c+d)
a = true positives, b = false negatives, c = false positives, d = true negatives

Page 31

• Precision = a/(a+c)

• Recall = a/(a+b)

• F-measure combines Recall and Precision:

• Fβ = (β² + 1)*P*R / (β²*P + R)

• β reflects the relative importance of Recall versus Precision; e.g. F0 = P
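A minimal sketch of the measures above in terms of the confusion-matrix counts a, b, c, d; beta is the F-measure weight (beta = 1 gives the usual F1, beta = 0 gives precision).

```python
def metrics(a, b, c, d, beta=1.0):
    accuracy  = (a + d) / (a + b + c + d)
    precision = a / (a + c)
    recall    = a / (a + b)
    f_beta    = (beta**2 + 1) * precision * recall / (beta**2 * precision + recall)
    return accuracy, precision, recall, f_beta
```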

Page 32

Cost matrix

• Is like confusion matrix, except costs of errors are assigned to the elements outside the diagonal (mis-classifications)

• this may be important in applications, e.g. when the classifier is a diagnosis rule

• see http://ai.iit.nrc.ca/bibliographies/cost-sensitive.html

for a survey of learning with misclassification costs

Page 33

Bayesian learning

• incremental, noise-resistant method

• can combine prior Knowledge (the K is probabilistic)

• predictions are probabilistic

Page 34

Bayes’ law of conditional probability:

P(h|D) = P(D|h) P(h) / P(D)

results in a simple “learning rule”: choose the most likely (Maximum A Posteriori) hypothesis:

hMAP = argmax_{h ∈ H} P(D|h) P(h)

Example: two hypotheses: (1) the patient has cancer, (2) the patient is healthy

Page 35

P(cancer) = .008

P( + |cancer) = .98

P(+|not cancer) = .03

P(not cancer) = .992

P( - |cancer) = .02

P(-|not cancer) = .97

The lab test is 98% reliable: it returns positive in 98% of cases when the disease is present, and returns negative in 97% of cases when the disease is actually absent.

Priors: 0.8% of the population has cancer.

We observe a new patient with a positive test. How should they be diagnosed?

P(cancer|+) ∝ P(+|cancer) P(cancer) = .98 * .008 = .0078
P(not cancer|+) ∝ P(+|not cancer) P(not cancer) = .03 * .992 = .0298
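A minimal sketch of the computation above, including the normalised posterior that the slide leaves implicit.

```python
p_cancer, p_healthy = 0.008, 0.992
p_pos_given_cancer, p_pos_given_healthy = 0.98, 0.03

score_cancer  = p_pos_given_cancer  * p_cancer    # 0.0078
score_healthy = p_pos_given_healthy * p_healthy   # 0.0298
print(score_cancer, score_healthy)                # h_MAP = "not cancer"
print(score_cancer / (score_cancer + score_healthy))   # P(cancer | +) ~ 0.21
```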

Page 36

Minimum Description Length

revisiting the definition of hMAP:

hMAP = argmax_{h ∈ H} P(D|h) P(h)

we can rewrite it as:

hMAP = argmax_{h ∈ H} [ log2 P(D|h) + log2 P(h) ]

or:

hMAP = argmin_{h ∈ H} [ -log2 P(D|h) - log2 P(h) ]

But the first log is the cost of coding the data given the theory, and the second the cost of coding the theory

Page 37

Observe that: for data, we only need to code the

exceptions; the others are correctly predicted by the theory

The MAP principle tells us to choose the theory which encodes the data in the shortest manner

the MDL principle states the trade-off between the complexity of the hypothesis and the number of errors

Page 38

Bayes optimal classifier

• so far, we were looking at the “most probable hypothesis, given a priori probabilities”. But we really want the most probable classification

• this we can get by combining the predictions of all hypotheses, weighted by their posterior probabilities:

• this is the bayes optimal classifier BOC:

P(vj | D) = Σ_{hi ∈ H} P(vj | hi) P(hi | D)

BOC: argmax_{vj ∈ V} Σ_{hi ∈ H} P(vj | hi) P(hi | D)

Example: hypotheses h1, h2, h3 with posterior probabilities .4, .3, .3. A new instance is classified positive by h1 and negative by h2, h3.

Page 39

Bayes optimal classifier

V = {+, -}

P(h1|D) = .4, P(-|h1) = 0, P(+|h1) = 1

Classification is ” –” (show details!)
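A minimal sketch of the "show details" step, assuming (as stated on the previous slide) that h2 and h3 classify the instance as negative, so P(+|h2) = P(+|h3) = 0.

```python
posterior = {"h1": 0.4, "h2": 0.3, "h3": 0.3}     # P(hi | D)
p_pos     = {"h1": 1.0, "h2": 0.0, "h3": 0.0}     # P(+ | hi)

p_plus  = sum(posterior[h] * p_pos[h] for h in posterior)        # 0.4
p_minus = sum(posterior[h] * (1 - p_pos[h]) for h in posterior)  # 0.6
print("+" if p_plus > p_minus else "-")                          # "-"
```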

Page 40

• Captures probability dependencies
• each node has a probability distribution: the task is to determine the joint probability on the data
• In an application, a model is designed manually and the forms of the probability distributions are given
• The training set is used to fit the model to the data
• Then probabilistic inference can be carried out, e.g. for prediction

The first five variables are observed, and the model is used to predict diabetes

P(A, N, M, I, G, D) = P(A) * P(N) * P(M|A, N) * P(D|M, A, N) * P(I|D) * P(G|I, D)

Page 41

• how do we specify probability distributions?
• discretize variables and represent probability distributions as a table
• They can be approximated from frequencies; e.g. the table P(M|A, N) requires 24 parameters
• For prediction, we want P(D|A, N, M, I, G): we need a large table to do that

Page 42

Page 43

• no other classifier using the same hypothesis space and prior knowledge can outperform the BOC

• the BOC has mostly a theoretical interest; practically, we will not have the required probabilities

• another approach, Naive Bayes Classifier (NBC)

under a simplifying assumption of independence of the attribute values given the class value:

vMAP = argmax_{vj ∈ V} P(vj | a1, …, an)
     = argmax_{vj ∈ V} P(a1, …, an | vj) P(vj) / P(a1, …, an)
     = argmax_{vj ∈ V} P(a1, …, an | vj) P(vj)

To estimate this directly, we would need (# of possible values) * (# of possible instances) examples

vNB = argmax_{vj ∈ V} P(vj) Πi P(ai | vj)

Page 44

Page 45

• in NBC, the conditional probabilities are estimated from training data simply as normalized frequencies: how many times a given attribute value is associated with a given class

• no search!
• example
• m-estimate

Page 46

Example (see the Dec. Tree sec. in these notes):

we are trying to predict yes or no for Outlook=sunny, Temperature=cool, Humidity=high, Wind=strong

vNB = argmax_{vj ∈ [yes, no]} P(vj) Πi P(ai | vj)
    = argmax_{vj ∈ [yes, no]} P(vj) P(Outlook=sunny | vj) P(Temperature=cool | vj) P(Humidity=high | vj) P(Wind=strong | vj)

P(yes) = 9/14, P(no) = 5/14
P(Wind=strong | yes) = 3/9, P(Wind=strong | no) = 3/5, etc.

P(yes) P(sunny|yes) P(cool|yes) P(high|yes) P(strong|yes) = .0053
P(no) P(sunny|no) P(cool|no) P(high|no) P(strong|no) = .0206

so we will predict no; compare to 1R!
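A minimal sketch reproducing the numbers above; the conditional probabilities are the normalized frequencies read off the weather table.

```python
p = {
    ("yes",): 9/14, ("no",): 5/14,
    ("sunny","yes"): 2/9, ("cool","yes"): 3/9, ("high","yes"): 3/9, ("strong","yes"): 3/9,
    ("sunny","no"):  3/5, ("cool","no"):  1/5, ("high","no"):  4/5, ("strong","no"):  3/5,
}
score = {v: p[(v,)] * p[("sunny", v)] * p[("cool", v)] * p[("high", v)] * p[("strong", v)]
         for v in ("yes", "no")}
print(score)                                # yes ~ 0.0053, no ~ 0.0206 -> predict "no"
print(score["no"] / sum(score.values()))    # ~ 0.795
```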

Page 47

• Further, we can not only have a decision, but also the probability of that decision: .0206 / (.0206 + .0053) = 0.795
• we rely on the estimate nc / n for the conditional probability
• if the conditional probability is very small, and n is small too, then the observed nc may be 0; but an estimate of 0 biases the NBC too strongly
• So: smoothen; see textbook p. 85. Instead of nc / n, we will use the estimate

(nc + m*p) / (n + m)

where p is the prior estimate of the probability and m is the equivalent sample size. If we do not know otherwise, p = 1/k for k values of the attribute; m has the effect of augmenting the number of samples of the class; a large value of m means that the priors p are important with respect to the training data when probability estimates are computed, a small value means they are less important
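A minimal sketch of the m-estimate above.

```python
def m_estimate(n_c, n, p, m):
    """(n_c + m*p) / (n + m): n_c successes out of n, prior p, equivalent sample size m."""
    return (n_c + m * p) / (n + m)

print(m_estimate(0, 5, p=1/3, m=3))   # 0.125: a zero count no longer forces a zero estimate
```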

Page 48

Text Categorization

• Representations of text are very high dimensional (one feature for each word).

• High-bias algorithms that prevent overfitting in high-dimensional space are best.

• For most text categorization tasks, there are many irrelevant and many relevant features.

• Methods that sum evidence from many or all features (e.g. naïve Bayes, KNN, neural-net) tend to work better than ones that try to isolate just a few relevant features (decision-tree or rule induction).

Page 49

Naïve Bayes for Text

• Modeled as generating a bag of words for a

document in a given category by repeatedly sampling with replacement from a vocabulary V = {w1, w2,…wm} based on the probabilities P(wj | ci).

• Smooth probability estimates with Laplace m-estimates assuming a uniform distribution over all words (p = 1/|V|) and m = |V|

– Equivalent to a virtual sample of seeing each word in each category exactly once.

Page 50

Text Naïve Bayes Algorithm(Train)

Let V be the vocabulary of all words in the documents in D
For each category ci ∈ C
    Let Di be the subset of documents in D in category ci
    P(ci) = |Di| / |D|
    Let Ti be the concatenation of all the documents in Di
    Let ni be the total number of word occurrences in Ti
    For each word wj ∈ V
        Let nij be the number of occurrences of wj in Ti
        Let P(wj | ci) = (nij + 1) / (ni + |V|)

Page 51

Text Naïve Bayes Algorithm(Test)

Given a test document X
Let n be the number of word occurrences in X
Return the category:

    argmax_{ci ∈ C} P(ci) Π_{i=1}^{n} P(ai | ci)

where ai is the word occurring in the ith position in X
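A minimal Python sketch of the train/test pseudocode above, with the +1 / +|V| (Laplace) estimates and log probabilities to prevent underflow (see two slides below). The `docs` input format, a dict from category to tokenised documents, is an assumption made for the sketch.

```python
from collections import Counter
from math import log

def train(docs):
    """docs: {category: [list of token lists]}"""
    vocab = {w for ds in docs.values() for d in ds for w in d}
    n_docs = sum(len(ds) for ds in docs.values())
    prior, cond = {}, {}
    for c, ds in docs.items():
        prior[c] = len(ds) / n_docs                    # P(ci) = |Di| / |D|
        counts = Counter(w for d in ds for w in d)     # word occurrences in Ti
        n_i = sum(counts.values())
        cond[c] = {w: (counts[w] + 1) / (n_i + len(vocab)) for w in vocab}
    return prior, cond, vocab

def classify(x, prior, cond, vocab):
    # sum logs instead of multiplying probabilities (underflow prevention)
    scores = {c: log(prior[c]) + sum(log(cond[c][w]) for w in x if w in vocab)
              for c in prior}
    return max(scores, key=scores.get)

# prior, cond, vocab = train({"spam": [["cheap", "pills"]], "ham": [["meeting", "today"]]})
# classify(["cheap", "meeting"], prior, cond, vocab)
```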

Page 52

Naïve Bayes Time Complexity

• Training Time: O(|D| Ld + |C||V|) where Ld is the average length of a document in D.
– Assumes V and all Di, ni, and nij pre-computed in O(|D| Ld) time during

one pass through all of the data.

– Generally just O(|D|Ld) since usually |C||V| < |D|Ld

• Test Time: O(|C| Lt) where Lt is the average length of a test document.

• Very efficient overall, linearly proportional to the time needed to just read in all the data.

• Similar to Rocchio time complexity.

Page 53

Underflow Prevention

• Multiplying lots of probabilities, which are

between 0 and 1 by definition, can result in floating-point underflow.

• Since log(xy) = log(x) + log(y), it is better to perform all computations by summing logs of probabilities rather than multiplying probabilities.

• Class with highest final un-normalized log probability score is still the most probable.

Page 54

Naïve Bayes Posterior Probabilities

• Classification results of naïve Bayes (the class with maximum posterior probability) are usually fairly accurate.

• However, due to the inadequacy of the conditional independence assumption, the actual posterior-probability numerical estimates are not.– Output probabilities are generally very close to 0 or 1.

Page 55

Textual Similarity Metrics

• Measuring similarity of two texts is a well-studied problem.

• Standard metrics are based on a “bag of words” model of a document that ignores word order and syntactic structure.

• May involve removing common “stop words” and stemming to reduce words to their root form.

• Vector-space model from Information Retrieval (IR) is the standard approach.

• Other metrics (e.g. edit-distance) are also used.

Page 56

The Vector-Space Model

• Assume t distinct terms remain after preprocessing; call them index terms or the vocabulary.
• These “orthogonal” terms form a vector space.
Dimension = t = |vocabulary|

• Each term, i, in a document or query, j, is given a real-valued weight, wij.

• Both documents and queries are expressed as t-dimensional vectors:

dj = (w1j, w2j, …, wtj)

Page 57

Graphic Representation

Example:

D1 = 2T1 + 3T2 + 5T3

D2 = 3T1 + 7T2 + T3

Q = 0T1 + 0T2 + 2T3

[Figure: D1, D2 and Q drawn as vectors in the 3-dimensional term space spanned by T1, T2, T3]

• Is D1 or D2 more similar to Q?
• How to measure the degree of similarity? Distance? Angle? Projection?

Page 58

Document Collection

• A collection of n documents can be represented in the vector space model by a term-document matrix.

• An entry in the matrix corresponds to the “weight” of a term in the document; zero means the term has no significance in the document or it simply doesn’t exist in the document.

        T1    T2    ...   Tt
D1      w11   w21   ...   wt1
D2      w12   w22   ...   wt2
:        :     :           :
Dn      w1n   w2n   ...   wtn

Page 59

Term Weights: Term Frequency

• More frequent terms in a document are more important, i.e. more indicative of the topic.

fij = frequency of term i in document j

• May want to normalize term frequency (tf) by dividing by the frequency of the most common term in the document:

tfij = fij / maxi{fij}

Page 60

Term Weights: Inverse Document Frequency

• Terms that appear in many different documents are less indicative of overall topic.

df i = document frequency of term i

= number of documents containing term i

idfi = inverse document frequency of term i,

= log2 (N/ df i)

(N: total number of documents)• An indication of a term’s discrimination power.• Log used to dampen the effect relative to tf.

Page 61

TF-IDF Weighting

• A typical combined term importance indicator is tf-idf weighting:

wij = tfij idfi = tfij log2 (N/ dfi)

• A term occurring frequently in the document but rarely in the rest of the collection is given high weight.

• Many other ways of determining term weights have been proposed.

• Experimentally, tf-idf has been found to work well.

Page 62

Computing TF-IDF -- An Example

Given a document containing terms with given frequencies:

A(3), B(2), C(1)

Assume collection contains 10,000 documents and

document frequencies of these terms are:

A(50), B(1300), C(250)

Then:

A: tf = 3/3; idf = log(10000/50) = 5.3; tf-idf = 5.3

B: tf = 2/3; idf = log(10000/1300) = 2.0; tf-idf = 1.3

C: tf = 1/3; idf = log(10000/250) = 3.7; tf-idf = 1.2
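The numbers above come out when the logarithm is natural (with log2, as in the idf definition two slides earlier, the idfs would be 7.6, 2.9 and 5.3); a minimal sketch using ln:

```python
from math import log

N = 10_000
freqs = {"A": 3, "B": 2, "C": 1}
dfs   = {"A": 50, "B": 1300, "C": 250}
max_f = max(freqs.values())

for t in freqs:
    tf, idf = freqs[t] / max_f, log(N / dfs[t])
    print(t, round(tf, 2), round(idf, 1), round(tf * idf, 1))
# A: 5.3 and C: 1.2 as above; B: 1.4 here (the slide rounds idf to 2.0 before multiplying, giving 1.3)
```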

Page 63

Similarity Measure• A similarity measure is a function that computes the degree

of similarity between two vectors.

• Using a similarity measure between the query and each document:

– It is possible to rank the retrieved documents in the order of presumed relevance.

– It is possible to enforce a certain threshold so that the size of the retrieved set can be controlled.

Page 64

Similarity Measure - Inner Product

• Similarity between vectors for the document di and query q can be computed as the vector inner product:

sim(dj, q) = dj • q = Σ_{i=1}^{t} wij · wiq

where wij is the weight of term i in document j and wiq is the weight of term i in the query

• For binary vectors, the inner product is the number of matched query terms in the document (size of intersection).

• For weighted term vectors, it is the sum of the products of the weights of the matched terms.


Page 65

Properties of Inner Product

• The inner product is unbounded.

• Favors long documents with a large number of unique terms.

• Measures how many terms matched but not how many terms are not matched.

Page 66

Inner Product -- Examples

Binary:

– D = 1, 1, 1, 0, 1, 1, 0

– Q = 1, 0 , 1, 0, 0, 1, 1

sim(D, Q) = 3

(vector components correspond to the terms: retrieval, database, architecture, computer, text, management, information)

Size of vector = size of vocabulary = 7; 0 means the corresponding term is not found in the document or query

Weighted: D1 = 2T1 + 3T2 + 5T3 D2 = 3T1 + 7T2 + 1T3

Q = 0T1 + 0T2 + 2T3

sim(D1 , Q) = 2*0 + 3*0 + 5*2 = 10 sim(D2 , Q) = 3*0 + 7*0 + 1*2 = 2

Page 67

Cosine Similarity Measure

• Cosine similarity measures the cosine of the angle between two vectors.

• Inner product normalized by the vector lengths.

D1 = 2T1 + 3T2 + 5T3    CosSim(D1, Q) = 10 / sqrt((4+9+25)(0+0+4)) = 0.81
D2 = 3T1 + 7T2 + 1T3    CosSim(D2, Q) = 2 / sqrt((9+49+1)(0+0+4)) = 0.13
Q = 0T1 + 0T2 + 2T3

[Figure: D1, D2 and Q in term space; the cosine of the angle between each document vector and Q measures similarity]

D1 is 6 times better than D2 using cosine similarity but only 5 times better using

inner product.

CosSim(dj, q) = (dj • q) / (|dj| |q|) = Σ_{i=1}^{t} wij·wiq / ( sqrt(Σ_{i=1}^{t} wij²) · sqrt(Σ_{i=1}^{t} wiq²) )
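A minimal sketch reproducing both the inner-product and the cosine values for D1, D2 and Q.

```python
from math import sqrt

def dot(u, v):     return sum(a * b for a, b in zip(u, v))
def cos_sim(u, v): return dot(u, v) / (sqrt(dot(u, u)) * sqrt(dot(v, v)))

D1, D2, Q = (2, 3, 5), (3, 7, 1), (0, 0, 2)
print(dot(D1, Q), dot(D2, Q))                                # 10, 2
print(round(cos_sim(D1, Q), 2), round(cos_sim(D2, Q), 2))    # 0.81, 0.13
```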

Page 68

K Nearest Neighbor for Text

Training:
For each training example <x, c(x)> ∈ D
    Compute the corresponding TF-IDF vector, dx, for document x

Test instance y:
    Compute TF-IDF vector d for document y
    For each <x, c(x)> ∈ D
        Let sx = cosSim(d, dx)
    Sort examples, x, in D by decreasing value of sx
    Let N be the first k examples in D (get most similar neighbors)
    Return the majority class of examples in N
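A minimal sketch of the procedure above; documents are assumed to be already mapped to TF-IDF vectors represented as term-to-weight dicts.

```python
from collections import Counter
from math import sqrt

def cos_sim(u, v):
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    norm = lambda x: sqrt(sum(w * w for w in x.values()))
    return dot / (norm(u) * norm(v))

def knn_classify(d, training, k=3):
    """training: list of (tfidf_vector, class) pairs; d: TF-IDF vector of the test document."""
    neighbours = sorted(training, key=lambda xc: cos_sim(d, xc[0]), reverse=True)[:k]
    return Counter(c for _, c in neighbours).most_common(1)[0][0]   # majority class of N
```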

Page 69

Illustration of 3 Nearest Neighbor for Text

Page 70

3 Nearest Neighbor Comparison

• Nearest Neighbor tends to handle polymorphic categories better.

Page 71

Nearest Neighbor Time Complexity

• Training Time: O(|D| Ld) to compose TF-IDF vectors.

• Testing Time: O(Lt + |D||Vt|) to compare to all training vectors.

– Assumes lengths of dx vectors are computed and stored during training, allowing cosSim(d, dx) to be computed in time proportional to the number of non-zero entries in d (i.e. |Vt|)

• Testing time can be high for large training sets.

Page 72

Nearest Neighbor with Inverted Index

• Determining k nearest neighbors is the same as determining the k best retrievals using the test document as a query to a database of training documents.

• Use standard VSR inverted index methods to find the k nearest neighbors.

• Testing Time: O(B|Vt|) where B is the average number of training documents in which a test-document word appears.

• Therefore, overall classification is O(Lt + B|Vt|) – Typically B << |D|

Page 73

Support Vector Machines (SVM)

• a new classifier

• Attractive because– Has sound mathematical foundations

– Performs very well in diverse and difficult applications

– See textbook (ch. 6.3) and papers by Scholkopf placed on the class website

Page 74

Review of basic analytical geometry

• Dot product of vectors: defined by coordinates, or via the angle between the vectors
• If vectors a, b are perpendicular, then (a · b) = 0 (e.g. (0, c) · (d, 0) = 0)
• A hyperplane in an n-dimensional space is the set {x | (w · x) + b = 0}; w is the weight vector, b is the threshold; x = (x1, …, xn); w = (w1, …, wn)
• A hyperplane divides the n-dimensional space into two subspaces: one is {x | (w · x) + b > 0}, the other is the complementary {x | (w · x) + b < 0}

Page 75

• Lets revisit the general classification problem.

• We want to estimate an unknown function f, all we know about it is the training set (x1,y1),… (xn,yn)

• The objective is to minimize the expected error (risk)

R[f] = ∫ l(f(x), y) dP(x, y)

where l is a loss function, e.g. l(f(x), y) = θ(-y f(x)), with θ(z) = 0 for z < 0 and θ(z) = 1 otherwise

• Since we do not know P, we cannot measure the risk
• We want to approximate the true error (risk) by the empirical error (risk):

Remp[f] = (1/n) Σ_{i=1}^{n} l(f(xi), yi)

Page 76

• We know from the PAC theory that conditions can be given on the learning task so that the empirical risk converges towards the true risk

• We also know that the difficulty of the learning task depends on the complexity of f (VC dimension)

• It is known that the following relationship between the true risk and the empirical risk (h denotes the VC dimension of the class of f) holds with probability at least 1 - δ, for n > h:

R[f] ≤ Remp[f] + sqrt( ( h (ln(2n/h) + 1) - ln(δ/4) ) / n )

Page 77

SRM

Structural Risk Minimization (SRM) chooses the class F to find a balance between the simplicity of f (a very simple f may result in a large empirical risk) and the empirical risk (a small empirical risk may require a function class with a large h)

Page 78

Points lying on the margin are called support vectors;

w can be constructed efficiently – quadratic optimization problem.

Page 79

Basic idea of SVM

• Linearly separable problems are easy (quadratic), but of course most problems are not l. s.

• Take any problem and transform it into a high-dimensional space, so that it becomes linearly separable, but

• Calculations to obtain the separating hyperplane can be done in the original input space (kernel trick)

Page 80

Basic idea of SVM

Page 81

Original data is mapped into another dot product space called feature space F via a non-linear map Φ: R^N → F

Then a linear classifier is performed in F

Note that the only operations in F are dot products: k(x, y) = (Φ(x) · Φ(y))

Consider e.g. Φ: R² → R³, (x1, x2) ↦ (z1, z2, z3) = (x1², √2 x1x2, x2²)

Page 82

Lets see that geometrically, and that it does what we want it to do : transform a hard classification problem into an easy one, albeit in a higher dimension

Page 83

But in general quadratic optimization in the feature space could be very expensive

Consider classifying 16 x 16 pixel pictures, and 5th order monomials

Feature space dimension in this example is about (256 choose 5), i.e. O(10^10)

Page 84

Here we show that the transformation from an ellipsoidal decision surface to a linear one, requiring a dot product in the feature space, can be performed by a kernel function in the input space:

(Φ(x) · Φ(y)) = (x1², √2 x1x2, x2²) · (y1², √2 y1y2, y2²) = ((x1, x2) · (y1, y2))² = (x · y)² =: k(x, y)

in general, k(x, y) = (x · y)^d computes in the input space
kernels replace computation in the feature space by computation in the input space
in fact, the transformation Φ need not be applied when a kernel is used!

Page 85

Some common kernels used:

Using different kernels we in fact use different classifiers in the input space: gaussian, polynomial, 3-layer neural nets, …

Page 86

Simplest kernel

• is the linear kernel, (w · x) + b
• But this only works if the training set is linearly separable. This may not be the case
– for the linear kernel, or even
– in the feature space

Page 87

The solution for the non-separable case is to optimize not just the margin, but the margin plus the influence of training errors ξi:

min_{w, b, ξ}  ½ ||w||² + C Σ_{i=1}^{n} ξi
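For concreteness, a minimal soft-margin sketch using scikit-learn (an assumption; the library is not part of the course material): the parameter C is the constant in the objective above that trades the margin term against the sum of slacks.

```python
from sklearn.svm import SVC

X = [[0, 0], [1, 1], [1, 0], [0, 1], [0.9, 0.9], [0.1, 0.1]]
y = [0, 1, 0, 1, 0, 1]                         # deliberately not linearly separable
clf = SVC(kernel="linear", C=1.0).fit(X, y)    # smaller C tolerates more training errors
print(clf.predict([[0.2, 0.8]]))
```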

Page 88

Classification with SVMs:

• Convert each example x to Φ(x)

• Perform the optimal hyperplane algorithm in F; but since we use the kernel, all we need to do is to compute

f(x) = sgn( Σi αi yi k(x, xi) + b )

where xi, yi are training instances and the αi are computed as the solution to the quadratic programming problem

Page 89

Examples of classifiers in the input space

Page 90

Overall picture

Page 91

Geometric interpretation of SVM classifier

• Normalize the weight vector to 1 (||w|| = 1) and set the threshold b = 0

• The set of all w that separate the training set is

• But this is the Version Space

• VS has a geometric centre (Bayes Optimal Classifier) near the gravity point

• If VS has a shape in which SVM solution is far from the VS centre, SVM works poorly


Page 92

Page 93

Page 94

Applications

• Text classification

• Image analysis – face recognition

• Bioinformatics – gene expression

• Can the kernel reflect domain knowledge?

Page 95

SVM cont’d

• A method of choice when examples are represented by vectors or matrices

• Input space cannot be readily used as attribute-vector (e.g. too many attrs)

• Kernel methods: map data from input space to feature space (FS); perform learning in FS provided that examples are only used within dot products (the kernel trick: (Φ(x) · Φ(x')) = k(x, x'))

• SVM but also Perceptron, PCA, NN can be done on that basis

• Collectively – kernel-based methods

• The kernel defines the classifier

• The classifier is independent of the dimensionality of the FS – can even be infinite (gaussian kernel)

• LIMITATION of SVMs: they only work for two-class problems

• Remedy: use of ECOC


Page 96

Applications – face detection [IEEE INTELLIGENT SYSTEMS]

• The task: to find a rectangle containing a face in an image; applicable in face recognition, surveillance, HCI etc., and also in medical image processing and structural defect detection

• Difficult task – variations that are hard to represent explicitly (hair, moustache, glasses)

• Cast as a classification problem: image regions that are faces and non-faces

• Scanning the image in multiple scales, dividing it into (overlapping) frames and classifying the frames with an SVM:

Page 97

Face detection cont’d

SVM performing face detection –support vectors are faces and non-faces

Examples are 19x19 pixels, class +1 or -1

SVM: 2nd degree polynomial with slack variables

Representation tricks: masking out near-boundary area - 361->283, removes noise

illumination correction: reduction of light and shadow

Discretization of pixel brightness by histogram equalization

Page 98

Face detection – system architecture

Page 99

Bootstrapping: using the system on images with no faces and storing false positives to use as negative examples in later training

Page 100

Performance on 2 test sets: set A = 313 high-quality images with 313 faces, set B = 23 images with 155 faces. This results in >4M frames for A and >5M frames for B. The SVM achieved recall of 97% on A and 74% on B, with 4 and 20 false positives, respectively.

Page 101

SVM in text classification

• Example of classifiers (the Reuters corpus – 13K stories, 118 categories, time split)

• Essential in document organization (emails!), indexing etc.

• The first comes from a PET; the second from an SVM

• Text representation: BOW: mapping docs to large vectors indicating which word occurs in a doc; as many dimensions as words in the corpus (many more than in a given doc);

• often extended to frequencies (normalized) of stemmed words

Page 102

Text classification

• Still a large number of features, so a stop list is applied, and some form of feature selection (e.g. based on info gain, or tf/idf) is done, down to 300 features

• Then a simple, linear SVM is used (experiments with polynomial and RBF kernels indicated they are not much better than linear). A one-against-all scheme is used

• What is a poly (e.g. level 2) kernel representing in text classification?

• Performance measured with micro-averaged break even point (explain)

• SVM obtained the best results, with DT second (on 10 cat.) and Bayes third. Other authors report better IB performance (findSim) than here

Page 103

A ROC curve for the above experiments (class = “grain”). The ROC is obtained by varying the threshold; the threshold is learned from the values of w·x and discriminates between the classes.

Page 104

How about another representation?

• N-grams = sequences of N consecutive characters; e.g. the 3-grams of ‘support vector’ are sup, upp, ppo, por, …, tor

• Language-independent, but a large number of features (>>|words|)

• The more substrings in common between 2 docs, the more similar the 2 docs are

• What if we make these substrings non-contiguous? With a weight measuring non-contiguity? (car – custard)

• We will make each substring a feature, with a value depending on how frequently and how compactly the substring occurs in the text

Page 105

• The latter is represented by a decay factor λ
• Example: cat, car, bat, bar
• Unnormalized K(car, cat) = λ⁴, K(car, car) = K(cat, cat) = 2λ⁴ + λ⁶;
  normalized K(car, cat) = λ⁴ / (2λ⁴ + λ⁶) = 1 / (2 + λ²)
• Impractical (too many features) for larger substrings and docs, but the kernel using such features can be calculated efficiently (the ‘substring kernel’ SSK): it maps strings (a whole doc) to a feature vector indexed by all k-tuples
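A brute-force check of these values (only feasible for very short strings; the whole point of the SSK is that the recursive kernel computation avoids enumerating the features explicitly).

```python
from itertools import combinations
from collections import defaultdict

def features(s, k=2, lam=0.5):
    """phi_u(s) = sum over index tuples i with s[i] = u of lam ** (span of i)."""
    phi = defaultdict(float)
    for idx in combinations(range(len(s)), k):
        phi["".join(s[i] for i in idx)] += lam ** (idx[-1] - idx[0] + 1)
    return phi

def ssk(s, t, k=2, lam=0.5):
    fs, ft = features(s, k, lam), features(t, k, lam)
    return sum(fs[u] * ft[u] for u in fs if u in ft)

lam = 0.5
print(ssk("car", "cat", lam=lam), lam**4)                    # lam^4
print(ssk("car", "car", lam=lam), 2 * lam**4 + lam**6)       # 2*lam^4 + lam^6
print(ssk("car", "cat", lam=lam) / ssk("car", "car", lam=lam), 1 / (2 + lam**2))
```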

Page 106

• Value of the feature = sum over the occurrences of the k-tuple of a decay factor λ raised to the length of the occurrence
• Def. of SSK: Σ is an alphabet; a string is a finite sequence of elements of Σ. |s| = length of s; s[i:j] = substring of s. u is a subsequence of s if there exist indices i = (i1, …, i|u|) with 1 ≤ i1 < … < i|u| ≤ |s| such that uj = s_ij for j = 1, …, |u| (u = s[i] for short).
• The length l(i) of the subsequence in s is i|u| - i1 + 1 (its span in s)
• The feature space mapping for s is defined by

φu(s) = Σ_{i: u = s[i]} λ^{l(i)}

for each u ∈ Σⁿ (the set of all finite strings of length n): features measure the number of occurrences of subsequences in s, weighed by their lengths (λ ≤ 1)
• The kernel can be evaluated in O(n |s| |t|) time (see the Lodhi paper)

Page 107

Experimental results with SSK

• The method is NOT fast, so a subset of Reuters (n=470/380) was used, and only 4 classes: corn, crude, earn, acquisition

• Compared to the BOW representation (see earlier in these notes) with stop words removed, features weighed by tf/idf=log(1+tf)*log(n/df)

• F1 was used for the evaluation, C set experimentally
• Best k is between 4 and 7
• Performance comparable to a classifier based on k-grams (contiguous), and also to BOW
• λ controls the penalty for gaps in substrings: best precision for high λ = 0.7. This seems to result in high similarity scores for docs that share the same but semantically different words - WHY?
• Results on the full Reuters corpus are not as good as with BOW or k-grams; the conjecture is that the kernel performs something similar to stemming, which is less important on large datasets where there is enough data to learn the ‘sameness’ of different inflections

Page 108

Bioinformatics application

• Coding sequences in DNA encode proteins.
• DNA alphabet: A, C, G, T. A codon, a triplet of adjacent nucleotides, codes for one amino acid.
• Task: identify where in the genome the coding starts (Translation Initiation Sites). The potential start codon is ATG.
• Classification task: does a sequence window around the ATG indicate a TIS?
• Each nucleotide is encoded by 5 bits, exactly one of which is set to 1, indicating whether the nucleotide is A, C, G, T, or unknown. So the dimension n of the input space = 1000 for a window size of 100 to the left and right of the ATG sequence.
• Positive and negative windows are provided as the training set
• This representation is typical for the kind of problem where SVMs do well

Page 109

• What is a good feature space for this problem? how about including in the kernel some prior – domain – knowledge? Eg:

• Dependencies between distant positions are not important or are known not to exist

• Compare, at each sequence position, two sequences locally in a window of size 2l+1 around that position, with decreasing weight away from the centre of the window:

win_p(x, y) = ( Σ_{j=-l}^{+l} match_{p+j}(x, y) )^{d1}

• where d1 is the order of importance of local (within the window) correlations, and match_{p+j}(x, y) is 1 for matching nucleotides at position p+j, 0 otherwise

Page 110

• Window scores are summed over the length of the sequence, and correlations between up to d2 windows are taken into account:

k(x, y) = ( Σ_{p=1}^{l} win_p(x, y) )^{d2}

• Also it is known that the region downstream of the TIS is a CDS (coding sequence): a CDS shifted by 3 nucleotides is still a CDS
• Trained with 8000 patterns and tested with 3000

Page 111

Results

Page 112

Further results on UCI benchmarks

Page 113

Ensembles of learners

• not a learning technique on its own, but a method in which a family of “weakly” learning agents (simple learners) is used for learning

• based on the fact that multiple classifiers that disagree with one another can together be more accurate than their component classifiers

• if there are L classifiers, each with an error rate < 1/2, and the errors are independent, then the probability that the majority vote is wrong is the area under the binomial distribution for more than L/2 hypotheses
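A minimal sketch of that claim: the majority of L independent classifiers, each with error rate eps < 1/2, errs only when more than L/2 of them err, i.e. with the binomial tail probability.

```python
from math import comb

def majority_error(L, eps):
    return sum(comb(L, k) * eps**k * (1 - eps)**(L - k) for k in range(L // 2 + 1, L + 1))

print(round(majority_error(21, 0.3), 3))   # ~0.026, far below the individual error rate of 0.3
```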

Page 114

Boosting as ensemble of learners

• The key idea: focus on ‘difficult’ parts of the example space

• Train a number of classifiers

• Combine their decisions in a weighted manner


115


116

• Bagging [Breiman] is to learn multiple hypotheses from different subsets of the training set, and then take a majority vote. Each sample is drawn randomly with replacement (a bootstrap). Each bootstrap contains, on average, 63.2% of the training set

• boosting is a refinement of bagging, where the sample is drawn according to a distribution, and that distribution emphasizes the misclassified examples. Then a vote is taken.
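A minimal bagging sketch in Python; the choice of decision trees (via scikit-learn) as the base learner and the assumption of non-negative integer class labels are illustrative, not part of the slide:

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def bagging_fit(X, y, n_estimators=25, seed=0):
    """Train each tree on a bootstrap sample (drawn with replacement) of the training set."""
    rng = np.random.default_rng(seed)
    n = len(y)
    models = []
    for _ in range(n_estimators):
        idx = rng.integers(0, n, size=n)                 # a bootstrap sample
        models.append(DecisionTreeClassifier().fit(X[idx], y[idx]))
    return models

def bagging_predict(models, X):
    """Majority vote over the ensemble (assumes class labels are non-negative integers)."""
    votes = np.stack([m.predict(X) for m in models])
    return np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, votes)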


117


118

• Let’s make sure we understand the makeup of the final classifier


119

• AdaBoost (Adaptive Boosting) maintains a probability distribution over the training examples. Either the learning algorithm uses the distribution directly, or the distribution is used to produce the training sample.

• See http://www.research.att.com/~yoav/adaboost/index.html

for a Web-based demo.
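A minimal AdaBoost-style sketch; binary labels in {−1, +1} and decision stumps (via scikit-learn) as the weak learner are illustrative assumptions, not the setup of the linked demo:

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost(X, y, rounds=20):
    """y must be in {-1, +1}. Returns the weak classifiers and their weights."""
    n = len(y)
    w = np.full(n, 1.0 / n)                        # the distribution over training examples
    stumps, alphas = [], []
    for _ in range(rounds):
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=w)           # here the learner uses the distribution directly
        pred = stump.predict(X)
        err = np.sum(w[pred != y])                 # weighted error
        if err == 0 or err >= 0.5:                 # stop if the weak-learning assumption fails
            break
        alpha = 0.5 * np.log((1 - err) / err)      # weight of this classifier in the final vote
        w *= np.exp(-alpha * y * pred)             # emphasize the misclassified examples
        w /= w.sum()
        stumps.append(stump); alphas.append(alpha)
    return stumps, alphas

def adaboost_predict(stumps, alphas, X):
    votes = sum(a * s.predict(X) for s, a in zip(stumps, alphas))
    return np.sign(votes)                          # weighted majority vote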


120


121

Intro. to bioinformatics

• Bioinformatics = collection, archiving, organization and interpretation of biological data

• Integrates in vitro, in vivo and in silico work

• Requires understanding of basic genetics

• Based on:

– genomics,

– proteomics,

– transcriptomics


122

What is Bioinformatics?

• Bioinformatics is about integrating biological themes together with the help of computer tools and biological databases.

• It is a “new” field of science where mathematics, computer science and biology combine to study and interpret genomic and proteomic information


124

Basic biology

• Information in biology: DNA

• Genotype (hereditary make-up of an organism) and phenotype (physical/behavioural characteristics) (late 19th century)

• Biochemical structure of DNA – double helix – 1953; nucleotides A, C, G, T

• Progress in biology and IT made it possible to map entire genomes: the total genetic material of a species written in the DNA code

• For a human, 3×10^9 nucleotides long

• Same in all the cells of a person


125

What is a gene?


126

• Interesting to see if there are genes (functional elements of the genome) responsible for some aspects of the phenotype (e.g. an illness)

– Testing

– Cure

• Genes result in proteins:

• Gene → RNA (transcription) → protein (translation)


127

What is gene expression?


128

• We say that genes code for proteins

• In simple organisms (prokaryotes), a high percentage of the genome consists of genes (85%)

• In eukaryotes this drops: yeast 70%, fruit fly 25%, flowers 3%

• Databases with gene information: GenBank/DDBJ, EMBL

• Databases with protein information: SwissProt, GenPept, TrEMBL, PIR…


129

• Natural interest in finding repetitive and/or common subsequences in the genome: BLAST

• For this, it is interesting to study gene expression (clustering):

• Activation (+) and Inhibition (–)

[Figure: expression levels of Gene X and Gene Y plotted over time (deltaX); Y is activated by X]


130

Microarrays

• Microarrays give us information about the rate of protein production of genes during an experiment. These technologies give us a lot of information.

• Analyzing microarray data tells us how the gene/protein production evolves.

• Each data point represents the log expression ratio of a particular gene under two different experimental conditions. The numerator of each ratio is the expression level of the gene in the varying condition, whereas the denominator is the expression level of the gene in some reference condition. The expression measurement is positive if the gene expression is induced with respect to the reference state and negative if it is repressed. We use those values as derivatives.
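For concreteness, a tiny sketch of the log expression ratio computation on made-up numbers:

import numpy as np

varying   = np.array([520.0, 310.0, 80.0, 150.0])   # hypothetical expression levels under the varying condition
reference = np.array([260.0, 300.0, 160.0, 150.0])  # hypothetical expression levels under the reference condition

log_ratio = np.log2(varying / reference)   # positive: induced w.r.t. the reference; negative: repressed
print(log_ratio)                           # approximately [ 1.    0.047 -1.    0.  ]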


131

Microarrays

[Figure: microarray schematic with labels: DNA, cell, kernel, microarray, test-tube with a solution, synthesized DNA strand, stranded DNA to analyze]


132

Microarray technology


133

Scanning


134

Scanning (cont’d)


135

Scanning (cont’d)


136

Hybridization simulation


137

9. Data mining

• definition

• basic concepts

• applications

• challenges


138

Definition - Data Mining

• extraction of [unknown] patterns from data

• combines methods from:

– databases

– machine learning

– visualization

• involves large datasets


139

Definition - KDD

• Knowledge Discovery in Databases

• consists of:

– selection

– preprocessing

– transformation

– Data Mining

– Interpretation/Evaluation

• no explicit requirement of large datasets


140

Model building

• Need to normalize data

• data labelling - replacement of the starter

• use of other data sources to label

• linear regression models on STA

• contextual setting of the time interval


141

Associations

Given:

I = {i1,…, im} — the set of items

D — a set of transactions (a database); each transaction T is a set of items, T ⊆ I

Association rule: X ⇒ Y, where X ⊂ I, Y ⊂ I, X ∩ Y = ∅

confidence c: the ratio of the # of transactions that contain both X and Y to the # of all transactions that contain X

support s: the ratio of the # of transactions that contain both X and Y to the # of transactions in D

An itemset is frequent if its support is greater than a given threshold (mins)


142

An association rule A ⇒ B is a conditional implication among itemsets A and B, where A ⊂ I, B ⊂ I and A ∩ B = ∅.

The confidence of an association rule r: A ⇒ B is the conditional probability that a transaction contains B, given that it contains A.

The support of rule r is defined as sup(r) = sup(A ∪ B). The confidence of rule r can be expressed as conf(r) = sup(A ∪ B)/sup(A).

Formally, let A ∈ 2^I; sup(A) = |{t : t ∈ D, A ⊆ t}| / |D|; if R = A ⇒ B then sup(R) = sup(A ∪ B) and conf(R) = sup(A ∪ B)/sup(A)
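These definitions translate directly into code; a small Python sketch over a hypothetical transaction database:

# Hypothetical transaction database D (each transaction is a set of items).
D = [
    {"bread", "milk"},
    {"bread", "diapers", "beer", "eggs"},
    {"milk", "diapers", "beer", "cola"},
    {"bread", "milk", "diapers", "beer"},
    {"bread", "milk", "diapers", "cola"},
]

def sup(itemset):
    """sup(A) = |{t in D : A is a subset of t}| / |D|"""
    itemset = set(itemset)
    return sum(itemset <= t for t in D) / len(D)

def conf(A, B):
    """conf(A => B) = sup(A union B) / sup(A)"""
    return sup(set(A) | set(B)) / sup(A)

print(sup({"diapers", "beer"}))      # 0.6
print(conf({"diapers"}, {"beer"}))   # 0.75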


143

Associations - mining

Given D, generate all association rules with confidence c and support s above the thresholds minc and mins

(items are ordered, e.g. by barcode)

Idea:

find all itemsets that have transaction support > mins : large itemsets


144

Associations - mining

To do that: start with individual items with large support.

In each subsequent step k:

• use the large itemsets from step k−1 to generate the new candidate itemsets Ck,

• count the support of Ck (by counting the candidates which are contained in any transaction t),

• prune the ones that are not large


145

Associations - mining

Only keep those that are contained in some transaction


146

Candidate generation

Ck = apriori-gen(Lk-1)


147

• From large itemsets to association rules


148

Subset function

Subset(Ck, t) checks which candidate itemsets in Ck are contained in a transaction t. It is done via a tree structure through a series of hashing steps:

• Hash the candidates on every item in t: itemsets not containing anything from t are ignored

• If you got to a node by hashing item i of t, hash on all the items of t that follow i

• At a leaf (which stores a set of itemsets), check whether each itemset is contained in t


149

Example

L3 = { {1 2 3}, {1 2 4}, {1 3 4}, {1 3 5}, {2 3 4} }

C4 = { {1 2 3 4}, {1 3 4 5} }

Pruning deletes {1 3 4 5} because {1 4 5} is not in L3.

See http://www.almaden.ibm.com/u/ragrawal/pubs.html#associations for details
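A minimal Python sketch of apriori-gen (the join step followed by the prune step), reproducing the example above; the data-structure choices are illustrative:

from itertools import combinations

def apriori_gen(L_prev):
    """Candidate generation C_k = apriori-gen(L_{k-1}); L_prev is a set of frozensets of size k-1."""
    k = len(next(iter(L_prev))) + 1
    candidates = set()
    # Join: merge two (k-1)-itemsets that agree on their first k-2 items (in sorted order).
    for a, b in combinations(sorted(tuple(sorted(s)) for s in L_prev), 2):
        if a[:-1] == b[:-1]:
            candidates.add(frozenset(a) | frozenset(b))
    # Prune: drop any candidate that has a (k-1)-subset which is not large.
    return {c for c in candidates
            if all(frozenset(s) in L_prev for s in combinations(c, k - 1))}

L3 = {frozenset(s) for s in [(1, 2, 3), (1, 2, 4), (1, 3, 4), (1, 3, 5), (2, 3, 4)]}
print(apriori_gen(L3))   # {frozenset({1, 2, 3, 4})}: {1,3,4,5} is pruned because {1,4,5} is not in L3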


150

DM: result evaluation

• Accuracy

• ROC

• lift curves

• cost

• but also INTERPRETABILITY


151

Feature Selection [sec. 7.1 in Witten, Frank]

• Attribute-vector representation: coordinates of the vector are referred to as attributes or features

• ‘curse of dimensionality’ – learning is search, and the search space increases drastically with the number of attributes

• Theoretical justification: We know from PAC theorems that this increase is exponential – [discuss; e.g. slide 70]

• Practical justification: with divide-and-conquer algorithms the partition sizes decrease and at some point irrelevant attributes may be selected

• The task: find a subset of the original attribute set such that the classifier will perform at least as well on this subset as on the original set of attributes


152

Some foundations

• We are in the classification setting: the Xi are the attributes and Y is the class. We can define the relevance of features with respect to the Optimal Bayes Classifier (OBC)

• Let S be a subset of features, and X a feature not in S

• X is strongly relevant if removal of X alone deteriorates the performance of the OBC

• X is weakly relevant if it is not strongly relevant AND the performance of the OBC on S ∪ {X} is better than on S


153

three main approaches

• Manually: often unfeasible

• Filters: use the data alone, independent of the classifier that will be used on this data (aka scheme-independent selection)

• Wrappers: the FS process is wrapped in the classifier that will be used on the data


154

Filters - discussion

• Find the smallest attribute set in which all the instances are distinct. Problem: cost if exhaustive search used

• But learning and FS are related: in a way, the classifier already includes the ‘good’ (separating) attributes. Hence the idea:

• Use one classifier for FS, then another on the result. E.g. use a DT and pass the data on to NB. Or use 1R to select attributes for a DT.


155

Filters cont’d: RELIEF [Kira, Rendell]

1. Initialize the weight of all attributes to 0
2. Sample instances and examine similar ones
3. Determine pairs which are in the same class (near hits) and in different classes (near misses)
4. For each hit, identify the attributes with different values and decrease their weight
5. For each miss, attributes with different values have their weight increased
6. Repeat the sample selection and weighting (steps 2–5) many times
7. Keep only the attributes with positive weight

Discussion: high variance unless the # of samples is very high

Deterministic RELIEF: use all instances and all hits and misses
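A minimal Python sketch of the sampled RELIEF loop; the assumption of numeric attributes scaled to [0, 1] and the Manhattan distance are illustrative choices, not prescribed by the algorithm description above:

import numpy as np

def relief(X, y, n_samples=100, seed=0):
    """Returns one weight per attribute; keep the attributes with positive weight."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_samples):
        i = rng.integers(n)
        dist = np.abs(X - X[i]).sum(axis=1)               # distance to the sampled instance
        dist[i] = np.inf                                   # exclude the instance itself
        same, diff = (y == y[i]), (y != y[i])
        hit = np.argmin(np.where(same, dist, np.inf))      # nearest instance of the same class
        miss = np.argmin(np.where(diff, dist, np.inf))     # nearest instance of a different class
        w -= np.abs(X[i] - X[hit]) / n_samples             # differing values on a hit lower the weight
        w += np.abs(X[i] - X[miss]) / n_samples            # differing values on a miss raise the weight
    return w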


156

A different approach

• View attribute selection as a search in the space of all attributes

• Search needs to be driven by some heuristic (evaluation criterion)

• This could be some measure of the discrimination ability of the result of search, or

• Cross-validation, on the part of the training set put aside for that purpose. This means that the classifier is wrapped in the FS process, hence the name wrapper (scheme-specific selection)


157

Greedy search example

• A single attribute is added (forward) or deleted (backward)

• Could also be done as best-first search or beam search, or some randomized (e.g. genetic) search


158

Wrappers

• Computationally expensive (k-fold xval at each search step)

• backward selection often yields better accuracy

– x-val is just an optimistic estimate that may stop the search prematurely

– in backward mode, attribute sets will be larger than optimal

– forward mode may result in better comprehensibility

• Experimentally FS does particularly well with NB on data on which NB does not do well

– NB is sensitive to redundant and dependent (!) attributes

– Forward selection with training set performance does well [Langley and Sage 94]
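A minimal sketch of a forward-selection wrapper driven by cross-validation, with Naive Bayes (scikit-learn's GaussianNB) as the wrapped classifier; the stopping rule and the choice of estimator are illustrative assumptions:

from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

def forward_selection(X, y, estimator=None, cv=5):
    """Greedily add the attribute that most improves cross-validated accuracy."""
    estimator = estimator or GaussianNB()
    remaining = list(range(X.shape[1]))
    selected, best_score = [], 0.0
    while remaining:
        scores = [(cross_val_score(estimator, X[:, selected + [f]], y, cv=cv).mean(), f)
                  for f in remaining]
        score, f = max(scores)
        if score <= best_score:          # stop when no attribute improves the estimate
            break
        selected.append(f)
        remaining.remove(f)
        best_score = score
    return selected, best_score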


159

Discretization

• Getting away from numerical attributes

• We know it from DTs, where numerical attributes were sorted and splitting points between each two consecutive values were considered

• Global (independent of the classifier) and local (different results in each tree node) schemes exist

• The result of discretization is a value of a nominal attribute

• Ordering information can be preserved if a discretized attribute with k values is converted into k−1 binary attributes: the (i−1)-th binary attribute being true represents the fact that the value is ≤ the i-th value

• Supervised and unsupervised discretization


160

Unsupervised discretization

• Fixed-length intervals (equal interval binning), e.g. of width (max − min)/k

– How do we know k?

– May distribute instances unevenly

• Variable-length intervals, each containing the same number of instances (equal frequency binning, or histogram equalization)
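A small numpy sketch of both unsupervised schemes on illustrative values:

import numpy as np

values = np.array([64, 65, 68, 69, 70, 71, 72, 72, 75, 75, 80, 81, 83, 85])  # hypothetical numeric attribute
k = 4

# Equal interval binning: k fixed-width bins over [min, max].
edges_width = np.linspace(values.min(), values.max(), k + 1)
bins_width = np.digitize(values, edges_width[1:-1])      # bin index (0..k-1) for each value

# Equal frequency binning (histogram equalization): each bin gets roughly the same number of instances.
edges_freq = np.quantile(values, np.linspace(0, 1, k + 1))
bins_freq = np.digitize(values, edges_freq[1:-1])

print(bins_width)   # can be very uneven if the values are skewed
print(bins_freq)    # roughly len(values)/k instances per bin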


161

Supervised discretization

Example of Temperature attribute in the play/don’t play data

• A recursive algorithm using an information measure. We go for the cut point with the lowest information value (the cleanest subsets)


162

Supervised discretization cont’d

• What’s the right stopping criterion?

• How about MDL? Compare the information needed to transmit the label of each instance before the split with the cost of transmitting the split point (log2(N−1) bits) plus the information for the points below and the points above the split.

• Each instance costs 1 bit before the split, and slightly more than 0 bits after the split

• This is the Fayyad & Irani (1993) method
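A sketch of the core step only: choosing the cut point that minimizes the expected information (entropy) of the induced subsets. The recursion and the MDL stopping test described above are omitted here:

import numpy as np
from collections import Counter

def entropy(labels):
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def best_cut_point(values, labels):
    """Return (cut point, expected information after the split) with the lowest information."""
    order = np.argsort(values)
    values, labels = np.asarray(values)[order], np.asarray(labels)[order]
    best = (None, float("inf"))
    for i in range(1, len(values)):
        if values[i] == values[i - 1]:
            continue
        cut = (values[i] + values[i - 1]) / 2
        left, right = labels[:i], labels[i:]
        info = (len(left) * entropy(left) + len(right) * entropy(right)) / len(labels)
        if info < best[1]:
            best = (cut, info)
    return best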


163

Error-correcting Output Codes (ECOC)

• Method of combining two-class classifiers to solve a k-class problem

• Often, when working with a k-class problem, k one-against-all classifiers are learned and then combined using ECOC

• Consider a 4-class problem, and suppose that there are 7 classifiers, and classes are coded as follows:

• Suppose an instance of class a is classified as 1011111 (a mistake by the 2nd classifier).

• But this classification is still the closest to the encoding of class a in terms of edit (Hamming) distance. Also note that the class encodings in col. 1 are not error-correcting

class | encoding (col. 1) | encoding (col. 2)
a     | 1000              | 1111111
b     | 0100              | 0000111
c     | 0010              | 0011001
d     | 0001              | 0101010


165

ECOC cont’d

• What makes an encoding error-correcting?

• Depends on the distance between encodings: an encoding with distance d between encodings may correct up to (d−1)/2 errors (why?)

• In col. 1, d = 2, so this encoding may correct 0 errors

• In col. 2, d = 4, so single-bit errors will be corrected

• This example describes row separation; there must also be column separation (= 1 in col. 2); otherwise two classifiers would make identical errors; this weakens error correction

• For a small number of classes, exhaustive codes as in col. 2 are used

• See the Dietterich paper on how to design good error-correcting codes

• Gives good results in practice, eg with SVM, decision trees, backprop NNs
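A tiny Python sketch of ECOC decoding by Hamming distance, using the codewords from column 2 of the table above:

# Codewords from column 2 (outputs of the 7 two-class classifiers).
codes = {"a": "1111111", "b": "0000111", "c": "0011001", "d": "0101010"}

def hamming(u, v):
    return sum(x != y for x, y in zip(u, v))

def decode(output):
    """Assign the class whose codeword is closest in Hamming distance to the classifier outputs."""
    return min(codes, key=lambda c: hamming(codes[c], output))

print(decode("1011111"))   # 'a': the single flipped bit (the 2nd classifier's mistake) is corrected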


166

What is ILP?

• Machine learning when instances, results and background knowledge are represented in First Order Logic, rather than attribute-value representation:

• Given E+, E-, BK

• Find h such that h ∧ BK ⊨ E+ and h ∧ BK ⊭ E−


167

E+: boss(mary,john). boss(phil,mary). boss(phil,john).

E-: boss(john,mary). boss(mary,phil). boss(john,phil).

BK: employee(john,ibm). employee(mary,ibm). employee(phil,ibm).

reports_to(john,mary). reports_to(mary,phil). reports_to(john,phil).

h: boss(X,Y) :- employee(X,O), employee(Y,O), reports_to(Y,X).


168

Historical justification of the name:

• From facts and BK, induce a FOL hypothesis (hypothesis in Logic Programming)

[Diagram: Examples and Background knowledge (in PROLOG) go into the learner, which produces Hypotheses (rules) in PROLOG]


169

Why ILP? - practically

• Constraints of “classical” machine learning: attribute-value (AV) representation

• instances are represented as rows in a single table, or must be combined into such a table

• This is not the way data comes from databases


170

COMPANY Table

COMPANY | TYPE
ucro    | university
scuf    | university
jvt     | commercial

COURSE Table

COURSE | LENGTH | TYPE
srw    | 3      | advanced
so2    | 4      | introductory
erm    | 3      | introductory
cso    | 2      | introductory

SUBSCRIPTION Table

NAME   | COURSE
turner | srw
turner | so2
scott  | srw
scott  | erm
miller | so2
king   | srw
king   | so2
king   | erm
king   | cso
blake  | erm
blake  | cso
adams  | srw
adams  | so2
adams  | erm

PARTICIPANT Table

NAME   | JOB        | COMPANY | PARTY | R_NUMBER
turner | researcher | ucro    | no    | 81
scott  | researcher | scuf    | yes   | 94
miller | manager    | jvt     | yes   | 14
king   | manager    | ucro    | no    | 78
blake  | president  | jvt     | yes   | 5
adams  | researcher | scuf    | no    | 23


171

From tables to models to examples and background knowledge

Results of learning (in Prolog)

party(yes) :- participant(_J, senior, _C).

party(yes) :- participant(president, _S, _C).

begin(background).
company(jvt,commercial). company(scuf,university). company(ucro,university).
course(cso,2,introductory). course(erm,3,introductory). course(so2,4,introductory). course(srw,3,advanced).
job(_J) :- participant(_J,_,_,_).
party(_P) :- participant(_,_,_P,_).
company_type(_T) :- participant(_,_C,_,_), company(_C,_T).
course_len(_C,_L) :- course(_C,_L,_).
course_type(_C,_T) :- course(_C,_,_T).
…
end(background).

begin(model(adams)). participant(researcher,scuf,no,23). subscription(erm). subscription(so2). subscription(srw). end(model(adams)).

begin(model(miller)). participant(manager,jvt,yes,14). subscription(so2). end(model(miller)).

begin(model(scott)). participant(researcher,scuf,yes,94). subscription(erm). subscription(srw). end(model(scott)).

begin(model(blake)). participant(president,jvt,yes,5). subscription(cso). subscription(erm). end(model(blake)).

begin(model(king)). participant(manager,ucro,no,78). subscription(cso). subscription(erm). subscription(so2). subscription(srw). end(model(king)).

begin(model(turner)). participant(researcher,ucro,no,81). subscription(so2). subscription(srw). end(model(turner)).


172

Why ILP - theoretically

• AV – all examples are the same length

• no recursion…

• How could we learn the concept of reachability in a graph:


173

Expressive power of relations, impossible in AV

[Figure: a directed graph over nodes 0–8, e.g. with edges 0→1, 0→3, 1→2, …, 7→8]

cannot really be expressed in AV representation, but is very easy in relational representation:

linked-to: {<0,1>, <0,3>, <1,2>,…,<7,8>}

can-reach(X,Y) :- linked-to(X,Y).
can-reach(X,Y) :- linked-to(X,Z), can-reach(Z,Y).


174

E+: boss(mary,john). boss(phil,mary). boss(phil,john).

E-: boss(john,mary). boss(mary,phil). boss(john,phil).

BK: employee(john,ibm). employee(mary,ibm). employee(phil,ibm).

reports_to_imm(john,mary). reports_to_imm(mary,phil).

h: boss(X,Y) :- employee(X,O), employee(Y,O), reports_to(Y,X).

reports_to(X,Y) :- reports_to_imm(X,Z), reports_to(Z,Y).

reports_to(X,X).

Another example of recursive learning:


175

How is learning done: covering algorithm

Initialize the training set T

while T contains positive examples:
    find a clause that describes part of the relationship Q
    remove the positive examples covered by this clause from T

Finding a clause:
    initialize the clause to Q(V1, …, Vk) :-
    while T contains negative examples covered by the clause:
        find a literal L to add to the right-hand side of the clause

Finding a literal: greedy search


176

• ‘Find a clause’ loop describes search –

• Need to structure the search space – generality – semantic and syntactic

• since logical generality (entailment) is not decidable, a stronger property, θ-subsumption, is used

• then search from general to specific (refinement)


177

Refinement

boss(X,Y):-

boss(X,Y):-X=Y

… boss(X,Y):-reports_to(X,Y).

boss(X,Y):-empl(X,O).

boss(X,Y):-empl(X,O),empl(Y,O1).

boss(X,Y):-empl(X,O),empl(Y,O).

boss(X,Y):-empl(X,O),empl(Y,O),rep_to(Y,X).

boss(X,Y):-empl(X,O),empl(Y,O),rep_to(X,Y).

Heuristics: link to head

new variables


178

Constructive learning

• Do we really learn something new?

• Hypotheses are in the same language as examples

• constructive induction

• How do we learn multiplication from examples? We need to invent plus – we have shown [IJCAI93] that true constructivism requires recursion, i.e. in

mult(X,s(Y),Z) :- mult(X,Y,T), newp(T,X,Z).
mult(X,0,0).

• newp – plus – must itself be recursive.


179

Philosophical motivation

• Constructive induction is analogical to “revolution” in the methodology of science

• Kuhn’s Structure of Scientific Revolution:

normal science -> crisis -> revolution -> normal science

• Normal science = learning a “theory” in a fixed language

• Crisis = failure to cope with anomalies observed, due to inadequate language

• Revolution = introduction of new terms into the language (cannot be done in AV)


180

Example: predicting colour in flowers

• Language: r, y; a is any red flower, b is any yellow flower; col(X,Y) X is of colour Y; ch(X,Y) = result of breeding of X and Y

• Observations (that Czech monk and his peas…)

1. col(a,r). % Adam and Eve

2. col(b,y).

3. col(ch(a,a),r). % first generation

4. col(ch(a,b),r).

5. col(ch(b,b),y).

6. col(ch(a,ch(b,b)),r). % original and 1st generation

7. …

8. col(ch(ch(a,b),ch(a,b)),y). % 1st and 1st generation

9. …

10. :- col(ch(a,a),y).


181

col(ch(a,X),r).

col(ch(X,Y),r) :- col(X,r), col(Y,r).

col(ch(b,b),y).

col(ch(X,Y), y) :- col(X,y),col(Y,y).

• But in some generations y and r produce r, and in some – y

• We need either infinitely many clauses, or infinitely long clauses

• A revolution is necessary


182

new necessary predicates are invented

col(X,y) :- n00(X).
col(X,r) :- n11(X).
col(X,r) :- n10(X).
n00(ch(X,Y)) :- n00(ch(X,X)), n00(ch(Y,Y)).
n10(ch(X,Y)) :- n11(ch(X,X)), n00(ch(Y,Y)).
n11(ch(X,Y)) :- n11(ch(X,X)), n11(ch(Y,Y)).

type n11 and type n11 produce type n11 only

type n11 and type n00 produce type n10


183

• n00 represents purebred flowers with recessive character, n11 – with dominant, and n10 – hybrid with dominant

• n00 is recessive, n11 is dominant purebred

• n10 dominant hybrid

• In fact, the invented predicates represent the concept of a gene! (dominant and recessive)


184

Success story: mutagenicity

• heterogeneous chemical compounds – their structure requires relational representation

• BK: properties of specific atoms and bonds between them (relation!) and generic organic chemistry info (e.g. structure of benzene rings, etc.)

• Regression-unfriendly

A learned rule has been published in Science

conjugated double bond in a five-membered ring


185

problems

• Expressivity – efficiency

• Dimensionality reduction

• Therefore, interest in feature selection (relational selection)


186

Dimensionality Reduction (Relational Feature Selection)

• Relational feature selection

• Difficulties:

– clause lengths vary

– different literals appear in different clauses

• Our approach:

» propositionalization

» a bounded MIP

» attribute selection in such a MIP, in the presence of the noise introduced by the CDR

» back to the relational representation

» empirical results


187

Conclusion ILP

• ILP “contains” normal ML and goes beyond some of its limitations

• Is necessary if recursive definitions or ‘term invention’ are to be used

• Has both solid theoretical foundations and impressive practical achievements

• Lots to be done – e.g. in relational selection


188

Course conclusion

• What we have not talked about:

– clustering

– reinforcement learning

– Inductive Logic Programming

– Explanation-based learning/speedup


189

Course conclusion

• We have covered the basic inductive learning paradigms:

– decision trees

– neural networks

– Bayesian learning

– instance-based learning

• we have discussed different ways of evaluation of the results of learning

• we have connected the learning material to the data mining material

• we have presented a number of interesting applications