
CSE 5331/7331 F'2011 1

CSE 5331/7331 Fall 2011

DATA MINING: Introductory and Related Topics

Margaret H. Dunham
Department of Computer Science and Engineering

Southern Methodist University

Slides extracted from Data Mining, Introductory and Advanced Topics, Prentice Hall, 2002.

2CSE 5331/7331 F'2011

Data Mining Outline

PART I
– Introduction
– Techniques

PART II – Core Topics

PART III – Related Topics

3CSE 5331/7331 F'2011

Introduction Outline

Define data mining
Data mining vs. databases
Basic data mining tasks
Data mining development
Data mining issues

Goal: Provide an overview of data mining.

4CSE 5331/7331 F'2011

Introduction

Data is growing at a phenomenal rate. Users expect more sophisticated information. How?

UNCOVER HIDDEN INFORMATION

DATA MINING

5CSE 5331/7331 F'2011

Data Mining Definition

Finding hidden information in a database

Fit data to a model. Similar terms:
– Exploratory data analysis
– Data driven discovery
– Deductive learning

6CSE 5331/7331 F'2011

Data Mining Algorithm

Objective: Fit Data to a Model
– Descriptive
– Predictive

Preference – technique to choose the best model

Search – technique to search the data (“query”)

7CSE 5331/7331 F'2011

Database Processing vs. Data Mining Processing

Database processing:
– Query: well defined; SQL
– Data: operational data
– Output: precise; a subset of the database

Data mining processing:
– Query: poorly defined; no precise query language
– Data: not operational data
– Output: fuzzy; not a subset of the database

8CSE 5331/7331 F'2011

Query Examples

Database:
– Find all customers who have purchased milk.
– Find all credit applicants with last name of Smith.
– Identify customers who have purchased more than $10,000 in the last month.

Data Mining:
– Find all items which are frequently purchased with milk. (association rules)
– Find all credit applicants who are poor credit risks. (classification)
– Identify customers with similar buying habits. (clustering)

9CSE 5331/7331 F'2011

Data Mining Models and Tasks

10CSE 5331/7331 F'2011

Basic Data Mining Tasks

Classification maps data into predefined groups or classes.
– Supervised learning
– Pattern recognition
– Prediction

Regression is used to map a data item to a real-valued prediction variable.

Clustering groups similar data together into clusters.
– Unsupervised learning
– Segmentation
– Partitioning

11CSE 5331/7331 F'2011

Basic Data Mining Tasks (cont’d)

Summarization maps data into subsets with associated simple descriptions.
– Characterization
– Generalization

Link Analysis uncovers relationships among data.
– Affinity Analysis
– Association Rules
– Sequential Analysis determines sequential patterns.

12CSE 5331/7331 F'2011

Ex: Time Series Analysis
Example: Stock Market
– Predict future values
– Determine similar patterns over time
– Classify behavior

13CSE 5331/7331 F'2011

Data Mining vs. KDD

Knowledge Discovery in Databases (KDD): process of finding useful information and patterns in data.

Data Mining: Use of algorithms to extract the information and patterns derived by the KDD process.

14CSE 5331/7331 F'2011

KDD Process

Selection: Obtain data from various sources.
Preprocessing: Cleanse data.
Transformation: Convert to a common format. Transform to a new format.
Data Mining: Obtain desired results.
Interpretation/Evaluation: Present results to the user in a meaningful manner.

Modified from [FPSS96C]

15CSE 5331/7331 F'2011

KDD Process Ex: Web Log

Selection:
– Select log data (dates and locations) to use

Preprocessing:
– Remove identifying URLs
– Remove error logs

Transformation:
– Sessionize (sort and group)

Data Mining:
– Identify and count patterns
– Construct data structure

Interpretation/Evaluation:
– Identify and display frequently accessed sequences

Potential User Applications:
– Cache prediction
– Personalization

16CSE 5331/7331 F'2011

Data Mining Development• Similarity Measures• Hierarchical Clustering• IR Systems• Imprecise Queries• Textual Data• Web Search Engines

• Bayes Theorem• Regression Analysis• EM Algorithm• K-Means Clustering• Time Series Analysis

• Neural Networks• Decision Tree Algorithms

• Algorithm Design Techniques• Algorithm Analysis• Data Structures

• Relational Data Model
• SQL
• Association Rule Algorithms
• Data Warehousing
• Scalability Techniques

17CSE 5331/7331 F'2011

KDD Issues

Human Interaction Overfitting Outliers Interpretation Visualization Large Datasets High Dimensionality

18CSE 5331/7331 F'2011

KDD Issues (cont’d)

Multimedia Data Missing Data Irrelevant Data Noisy Data Changing Data Integration Application

19CSE 5331/7331 F'2011

Social Implications of DM

Privacy Profiling Unauthorized use

20CSE 5331/7331 F'2011

Data Mining Metrics

Usefulness Return on Investment (ROI) Accuracy Space/Time

21CSE 5331/7331 F'2011

Visualization Techniques

Graphical Geometric Icon-based Pixel-based Hierarchical Hybrid

22CSE 5331/7331 F'2011

Models Based on Summarization

Visualization: Frequency distribution, mean, variance, median, mode, etc.

Box Plot:

23CSE 5331/7331 F'2011

Scatter Diagram

24CSE 5331/7331 F'2011

Data Mining Techniques Outline

Statistical– Point Estimation– Models Based on Summarization– Bayes Theorem– Hypothesis Testing– Regression and Correlation

Similarity Measures
Decision Trees
Neural Networks
– Activation Functions
Genetic Algorithms

Goal: Provide an overview of basic data mining techniques

25CSE 5331/7331 F'2011

Point Estimation

Point Estimate: estimate a population parameter. May be made by calculating the parameter for a sample. May be used to predict the value for missing data.
Ex:
– R contains 100 employees
– 99 have salary information
– Mean salary of these is $50,000
– Use $50,000 as the value of the remaining employee’s salary. Is this a good idea?

26CSE 5331/7331 F'2011

Estimation Error

Bias: difference between the expected value and the actual value.

Mean Squared Error (MSE): expected value of the squared difference between the estimate θ̂ and the actual value θ:

MSE(θ̂) = E[(θ̂ − θ)²]

Why square?
Root Mean Square Error (RMSE): the square root of the MSE.

27CSE 5331/7331 F'2011

Jackknife Estimate

Jackknife Estimate: an estimate of a parameter obtained by omitting one value from the set of observed values.

Ex: estimate of the mean for X={x1, …, xn}, omitting xi: μ(i) = (Σj≠i xj) / (n − 1)

28CSE 5331/7331 F'2011

Maximum Likelihood Estimate (MLE)

Obtain parameter estimates that maximize the probability that the sample data occurs for the specific model.

Joint probability for observing the sample data is obtained by multiplying the individual probabilities. Likelihood function:

L(Θ | x1, …, xn) = ∏i f(xi | Θ)

Maximize L.

29CSE 5331/7331 F'2011

MLE Example

Coin toss five times: {H,H,H,H,T}

Assuming a perfect coin with H and T equally likely, the likelihood of this sequence is:

L = (1/2)⁵ = 0.03125

However, if the probability of an H is 0.8 then:

L = (0.8)⁴(0.2) = 0.08192

30CSE 5331/7331 F'2011

MLE Example (cont’d)

General likelihood formula (xi = 1 for H, 0 for T):

L(p | x1, …, xn) = p^(Σ xi) (1 − p)^(n − Σ xi)

The estimate for p is then 4/5 = 0.8.
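To see why 4/5 maximizes this likelihood, set the derivative of the log-likelihood to zero; this calculus step is standard but not shown on the slide:

```latex
\log L(p) = 4\log p + \log(1-p),\qquad
\frac{d}{dp}\log L(p) = \frac{4}{p} - \frac{1}{1-p} = 0
\;\Rightarrow\; p = \frac{4}{5}
```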

31CSE 5331/7331 F'2011

Expectation-Maximization (EM)

Solves estimation with incomplete data.
Obtain initial estimates for parameters.
Iteratively use estimates for missing data and continue until convergence.

32CSE 5331/7331 F'2011

EM Example

33CSE 5331/7331 F'2011

EM Algorithm

34CSE 5331/7331 F'2011

Bayes Theorem

Posterior Probability: P(h1|xi)
Prior Probability: P(h1)
Bayes Theorem:

P(hj | xi) = P(xi | hj) P(hj) / P(xi)

Assign probabilities of hypotheses given a data value.

35CSE 5331/7331 F'2011

Bayes Theorem Example

Credit authorizations (hypotheses): h1 = authorize purchase, h2 = authorize after further identification, h3 = do not authorize, h4 = do not authorize but contact police

Assign twelve data values for all combinations of credit and income:

Income:      1    2    3    4
Excellent:  x1   x2   x3   x4
Good:       x5   x6   x7   x8
Bad:        x9  x10  x11  x12

From training data: P(h1) = 60%; P(h2) = 20%; P(h3) = 10%; P(h4) = 10%.

36CSE 5331/7331 F'2011

Bayes Example (cont’d)

Training Data:

ID  Income  Credit     Class  xi
1   4       Excellent  h1     x4
2   3       Good       h1     x7
3   2       Excellent  h1     x2
4   3       Good       h1     x7
5   4       Good       h1     x8
6   2       Excellent  h1     x2
7   3       Bad        h2     x11
8   2       Bad        h2     x10
9   3       Bad        h3     x11
10  1       Bad        h4     x9

37CSE 5331/7331 F'2011

Bayes Example (cont’d)

Calculate P(xi|hj) and P(xi).
Ex: P(x7|h1) = 2/6; P(x4|h1) = 1/6; P(x2|h1) = 2/6; P(x8|h1) = 1/6; P(xi|h1) = 0 for all other xi.

Predict the class for x4:
– Calculate P(hj|x4) for all hj.
– Place x4 in the class with the largest value.
– Ex: P(h1|x4) = (P(x4|h1) P(h1)) / P(x4) = (1/6)(0.6)/0.1 = 1, so x4 is in class h1.
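This posterior computation is easy to check mechanically. Below is a minimal Python sketch that rebuilds the counts from the training table above and picks the hypothesis with the largest posterior (the variable names are mine, not from the slides):

```python
from collections import Counter

# Training data from the slide: (data value, hypothesis/class) pairs.
training = [("x4", "h1"), ("x7", "h1"), ("x2", "h1"), ("x7", "h1"),
            ("x8", "h1"), ("x2", "h1"), ("x11", "h2"), ("x10", "h2"),
            ("x11", "h3"), ("x9", "h4")]

n = len(training)
class_counts = Counter(h for _, h in training)   # for P(h)
value_counts = Counter(x for x, _ in training)   # for P(x)
joint_counts = Counter(training)                 # for P(x, h)

def posterior(h, x):
    # Bayes theorem: P(h|x) = P(x|h) P(h) / P(x)
    p_x_given_h = joint_counts[(x, h)] / class_counts[h]
    p_h = class_counts[h] / n
    p_x = value_counts[x] / n
    return p_x_given_h * p_h / p_x

print(max(class_counts, key=lambda h: posterior(h, "x4")))   # -> h1
```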

38CSE 5331/7331 F'2011

Regression

Predict future values based on past values

Linear Regression assumes linear relationship exists.

y = c0 + c1 x1 + … + cn xn

Find values to best fit the data

39CSE 5331/7331 F'2011

Linear Regression

40CSE 5331/7331 F'2011

Correlation

Examine the degree to which the values for two variables behave similarly.

Correlation coefficient r:
• 1 = perfect correlation
• −1 = perfect but opposite correlation
• 0 = no correlation
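The slide leaves the formula to an image; the coefficient it refers to is the standard sample (Pearson) correlation:

```latex
r = \frac{\sum_i (x_i-\bar{x})(y_i-\bar{y})}
        {\sqrt{\sum_i (x_i-\bar{x})^2}\,\sqrt{\sum_i (y_i-\bar{y})^2}}
```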

41CSE 5331/7331 F'2011

Similarity Measures

Determine similarity between two objects. Similarity characteristics:

Alternatively, distance measures measure how unlike or dissimilar objects are.

42CSE 5331/7331 F'2011

Similarity Measures

43CSE 5331/7331 F'2011

Distance Measures

Measure dissimilarity between objects

44CSE 5331/7331 F'2011

Twenty Questions Game

45CSE 5331/7331 F'2011

Decision Trees

Decision Tree (DT):
– Tree where the root and each internal node is labeled with a question.
– The arcs represent each possible answer to the associated question.
– Each leaf node represents a prediction of a solution to the problem.

Popular technique for classification; the leaf node indicates the class to which the corresponding tuple belongs.

46CSE 5331/7331 F'2011

Decision Tree Example

47CSE 5331/7331 F'2011

Decision Trees

A Decision Tree Model is a computational model consisting of three parts:
– Decision Tree
– Algorithm to create the tree
– Algorithm that applies the tree to data

Creation of the tree is the most difficult part.
Processing is basically a search similar to that in a binary search tree (although a DT may not be binary).

48CSE 5331/7331 F'2011

Decision Tree Algorithm

49CSE 5331/7331 F'2011

DT Advantages/Disadvantages

Advantages:– Easy to understand. – Easy to generate rules

Disadvantages:– May suffer from overfitting.– Classifies by rectangular partitioning.– Does not easily handle nonnumeric data.– Can be quite large – pruning is necessary.

50CSE 5331/7331 F'2011

Neural Networks

Based on the observed functioning of the human brain (Artificial Neural Networks, ANN).
Our view of neural networks is very simplistic.
We view a neural network (NN) from a graphical viewpoint.
Alternatively, a NN may be viewed from the perspective of matrices.
Used in pattern recognition, speech recognition, computer vision, and classification.

51CSE 5331/7331 F'2011

Neural Networks

A Neural Network (NN) is a directed graph F=<V,A> with vertices V={1,2,…,n} and arcs A={<i,j> | 1<=i,j<=n}, with the following restrictions:
– V is partitioned into a set of input nodes, VI, hidden nodes, VH, and output nodes, VO.
– The vertices are also partitioned into layers.
– Any arc <i,j> must have node i in layer h−1 and node j in layer h.
– Arc <i,j> is labeled with a numeric value wij.
– Node i is labeled with a function fi.

52CSE 5331/7331 F'2011

Neural Network Example

53CSE 5331/7331 F'2011

NN Node

54CSE 5331/7331 F'2011

NN Activation Functions

Functions associated with nodes in graph.

Output may be in range [-1,1] or [0,1]

55CSE 5331/7331 F'2011

NN Activation Functions

56CSE 5331/7331 F'2011

NN Learning

Propagate input values through graph. Compare output to desired output. Adjust weights in graph accordingly.

57CSE 5331/7331 F'2011

Neural Networks

A Neural Network Model is a computational model consisting of three parts:
– Neural Network graph
– Learning algorithm that indicates how learning takes place.
– Recall techniques that determine how information is obtained from the network.

We will look at propagation as the recall technique.

58CSE 5331/7331 F'2011

NN Advantages

Learning: can continue learning even after the training set has been applied.
Easy parallelization.
Solves many problems.

59CSE 5331/7331 F'2011

NN Disadvantages

Difficult to understand.
May suffer from overfitting.
Structure of graph must be determined a priori.
Input values must be numeric.
Verification difficult.

60CSE 5331/7331 F'2011

Genetic Algorithms

Optimization search type algorithms. Create an initial feasible solution and iteratively create new “better” solutions.
Based on human evolution and survival of the fittest.
Must represent a solution as an individual.
Individual: string I=I1,I2,…,In where Ij is in a given alphabet A.
Each character Ij is called a gene.
Population: set of individuals.

61CSE 5331/7331 F'2011

Genetic Algorithms

A Genetic Algorithm (GA) is a computational model consisting of five parts:
– A starting set of individuals, P.
– Crossover: technique to combine two parents to create offspring.
– Mutation: randomly change an individual.
– Fitness: determine the best individuals.
– An algorithm which applies the crossover and mutation techniques to P iteratively, using the fitness function to determine the best individuals in P to keep.

62CSE 5331/7331 F'2011

Crossover Examples

a) Single Crossover: parents 111 111 and 000 000 swap the substrings after a single crossover point, producing children 111 000 and 000 111.

b) Multiple Crossover: the same parents swap the segment between two crossover points, producing children such as 11 00 11 and 00 11 00.

63CSE 5331/7331 F'2011

Genetic Algorithm

64CSE 5331/7331 F'2011

GA Advantages/Disadvantages Advantages

– Easily parallelized Disadvantages

– Difficult to understand and explain to end users.

– Abstraction of the problem and method to represent individuals is quite difficult.

– Determining fitness function is difficult.– Determining how to perform crossover and

mutation is difficult.

65CSE 5331/7331 F'2011

Data Mining Outline

PART I - Introduction PART II – Core Topics

– Classification– Clustering– Association Rules

PART III – Related Topics

66CSE 5331/7331 F'2011

Classification Outline

Classification Problem Overview Classification Techniques

– Regression– Distance– Decision Trees– Rules– Neural Networks

Goal: Provide an overview of the classification problem and introduce some of the basic algorithms

67CSE 5331/7331 F'2011

Classification Problem

Given a database D={t1,t2,…,tn} and a set of classes C={C1,…,Cm}, the Classification Problem is to define a mapping f: D → C where each ti is assigned to one class.

Actually divides D into equivalence classes.

Prediction is similar, but may be viewed as having an infinite number of classes.

68CSE 5331/7331 F'2011

Classification Examples

Teachers classify students’ grades as A, B, C, D, or F.

Identify mushrooms as poisonous or edible.

Predict when a river will flood. Identify individuals with credit risks. Speech recognition Pattern recognition

69CSE 5331/7331 F'2011

Classification Ex: Grading

If x >= 90 then grade = A.
If 80 <= x < 90 then grade = B.
If 70 <= x < 80 then grade = C.
If 60 <= x < 70 then grade = D.
If x < 60 then grade = F.

(Figure: the corresponding decision tree tests x against 90, 80, 70, and 60 in turn; each internal node branches to a leaf grade or to the next threshold test, with leaves A, B, C, D, F.)

70CSE 5331/7331 F'2011

Classification Ex: Letter Recognition

View letters as constructed from 5 components:

Letter C

Letter E

Letter A

Letter D

Letter F

Letter B

71CSE 5331/7331 F'2011

Classification Techniques

Approach:
1. Create a specific model by evaluating training data (or using domain experts’ knowledge).
2. Apply the model developed to new data.

Classes must be predefined.

Most common techniques use DTs, NNs, or are based on distances or statistical methods.

72CSE 5331/7331 F'2011

Defining Classes

Partitioning Based

Distance Based

73CSE 5331/7331 F'2011

Issues in Classification

Missing Data– Ignore– Replace with assumed value

Measuring Performance– Classification accuracy on test data– Confusion matrix– OC Curve

74CSE 5331/7331 F'2011

Height Example Data

Name       Gender  Height  Output1  Output2
Kristina   F       1.6m    Short    Medium
Jim        M       2m      Tall     Medium
Maggie     F       1.9m    Medium   Tall
Martha     F       1.88m   Medium   Tall
Stephanie  F       1.7m    Short    Medium
Bob        M       1.85m   Medium   Medium
Kathy      F       1.6m    Short    Medium
Dave       M       1.7m    Short    Medium
Worth      M       2.2m    Tall     Tall
Steven     M       2.1m    Tall     Tall
Debbie     F       1.8m    Medium   Medium
Todd       M       1.95m   Medium   Medium
Kim        F       1.9m    Medium   Tall
Amy        F       1.8m    Medium   Medium
Wynette    F       1.75m   Medium   Medium

75CSE 5331/7331 F'2011

Classification Performance

True Positive
True Negative
False Positive
False Negative

76CSE 5331/7331 F'2011

Confusion Matrix Example

Using the height data example with Output1 as the correct and Output2 as the actual assignment (rows: correct membership; columns: actual assignment):

Membership  |  Short  Medium  Tall
Short       |    0      4      0
Medium      |    0      5      3
Tall        |    0      1      2

77CSE 5331/7331 F'2011

Operating Characteristic Curve

78CSE 5331/7331 F'2011

Regression Topics

Linear Regression
Nonlinear Regression
Logistic Regression
Metrics

79CSE 5331/7331 F'2011

Remember High School?

y = mx + b
You need two points to determine a straight line; you need two points to find values for m and b.

THIS IS REGRESSION

80CSE 5331/7331 F'2011

Regression

Assume data fits a predefined function. Determine the best values for the regression coefficients c0, c1, …, cn.
Assume an error: y = c0 + c1x1 + … + cnxn + e
Estimate the error using the mean squared error for the training set:

MSE = (1/n) Σi (yi − ŷi)²

81CSE 5331/7331 F'2011

Linear Regression

Assume data fits a predefined function. Determine the best values for the regression coefficients c0, c1, …, cn.
Assume an error: y = c0 + c1x1 + … + cnxn + e
Estimate the error using the mean squared error for the training set:

MSE = (1/n) Σi (yi − ŷi)²
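A minimal sketch of fitting such a line by least squares, on made-up (x, y) pairs (the data and names are mine, for illustration only):

```python
import numpy as np

# Fit y = c0 + c1*x by minimizing the mean squared error above.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

A = np.column_stack([np.ones_like(x), x])         # design matrix [1, x]
(c0, c1), *_ = np.linalg.lstsq(A, y, rcond=None)  # least-squares coefficients
mse = np.mean((y - (c0 + c1 * x)) ** 2)
print(c0, c1, mse)   # roughly c0 = 0.14, c1 = 1.96, with a small MSE
```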

82CSE 5331/7331 F'2011

Classification Using Linear Regression

Division: Use regression function to divide area into regions.

Prediction: Use regression function to predict a class membership function. Input includes desired class.

83CSE 5331/7331 F'2011

Division

84CSE 5331/7331 F'2011

Prediction

85CSE 5331/7331 F'2011

Linear Regression Poor Fit

Why use the sum of squares? http://curvefit.com/sum_of_squares.htm
Linear doesn’t always work well.

86CSE 5331/7331 F'2011

Nonlinear Regression

Data does not nicely fit a straight line. Fit data to a curve. Many possible functions. Not as easy and straightforward as linear regression.
How nonlinear regression works: http://curvefit.com/how_nonlin_works.htm

87CSE 5331/7331 F'2011

P-value

The probability, assuming the null hypothesis holds, of observing a value at least as extreme as the one actually observed.

http://en.wikipedia.org/wiki/P-value http://sportsci.org/resource/stats/pvalues.html

88CSE 5331/7331 F'2011

Covariance

Degree to which two variables vary in the same manner

Correlation is normalized and covariance is not

http://www.ds.unifi.it/VL/VL_EN/expect/expect3.html

89CSE 5331/7331 F'2011

Residual

Error: the difference between the desired output and the predicted output.
May actually use the sum of squares.

90CSE 5331/7331 F'2011

Classification Using Distance

Place items in the class to which they are “closest”.
Must determine the distance between an item and a class. Classes represented by:
– Centroid: central value
– Medoid: representative point
– Individual points

Algorithm: KNN

91CSE 5331/7331 F'2011

K Nearest Neighbor (KNN):

Training set includes classes.
Examine the K items nearest to the item to be classified.
The new item is placed in the class with the largest number of these close items.
O(q) for each tuple to be classified. (Here q is the size of the training set.)
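A minimal KNN sketch over 1-D data in the spirit of the height example (the training pairs and k are mine, for illustration):

```python
from collections import Counter

# Hypothetical training set: (height in meters, class label).
train = [(1.6, "Short"), (1.7, "Short"), (1.8, "Medium"),
         (1.9, "Medium"), (2.0, "Tall"), (2.2, "Tall")]

def knn_classify(x, train, k=3):
    # Keep the k training items nearest to x ...
    nearest = sorted(train, key=lambda t: abs(t[0] - x))[:k]
    # ... and take a majority vote among their labels.
    return Counter(label for _, label in nearest).most_common(1)[0][0]

print(knn_classify(1.85, train))   # -> Medium
```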

92CSE 5331/7331 F'2011

KNN

93CSE 5331/7331 F'2011

KNN Algorithm

94CSE 5331/7331 F'2011

Classification Using Decision Trees

Partitioning based: Divide search space into rectangular regions.

Tuple placed into class based on the region within which it falls.

DT approaches differ in how the tree is built: DT Induction

Internal nodes associated with attribute and arcs with values for that attribute.

Algorithms: ID3, C4.5, CART

95CSE 5331/7331 F'2011

Decision Tree

Given:
– D = {t1, …, tn} where ti = <ti1, …, tih>
– Database schema contains {A1, A2, …, Ah}
– Classes C = {C1, …, Cm}

A Decision or Classification Tree is a tree associated with D such that:
– Each internal node is labeled with an attribute, Ai
– Each arc is labeled with a predicate which can be applied to the attribute at its parent
– Each leaf node is labeled with a class, Cj

96CSE 5331/7331 F'2011

DT Induction

97CSE 5331/7331 F'2011

DT Splits Area

(Figure: rectangular regions of the search space produced by splits on Gender (M/F) and Height.)

98CSE 5331/7331 F'2011

Comparing DTs

Balanced vs. Deep

99CSE 5331/7331 F'2011

DT Issues

Choosing Splitting Attributes Ordering of Splitting Attributes Splits Tree Structure Stopping Criteria Training Data Pruning

100CSE 5331/7331 F'2011

Decision Tree Induction is often based on Information Theory

So

101CSE 5331/7331 F'2011

Information

102CSE 5331/7331 F'2011

DT Induction

When all the marbles in the bowl are mixed up, little information is given.

When the marbles in the bowl are all from one class and those in the other two classes are on either side, more information is given.

Use this approach with DT Induction !

103CSE 5331/7331 F'2011

Information/Entropy

Given probabilities p1, p2, …, ps whose sum is 1, Entropy is defined as:

H(p1, p2, …, ps) = Σi pi log(1/pi)

Entropy measures the amount of randomness or surprise or uncertainty.
Goal in classification: no surprise; entropy = 0.

104CSE 5331/7331 F'2011

Entropy

(Figure: plots of log(1/p) and of the binary entropy H(p, 1−p).)

105CSE 5331/7331 F'2011

ID3

Creates a tree using information theory concepts and tries to reduce the expected number of comparisons.
ID3 chooses the split attribute with the highest information gain:

Gain(D, S) = H(D) − Σi P(Di) H(Di)

106CSE 5331/7331 F'2011

ID3 Example (Output1)

Starting state entropy: 4/15 log(15/4) + 8/15 log(15/8) + 3/15 log(15/3) = 0.4384

Gain using gender:
– Female: 3/9 log(9/3) + 6/9 log(9/6) = 0.2764
– Male: 1/6 log(6/1) + 2/6 log(6/2) + 3/6 log(6/3) = 0.4392
– Weighted sum: (9/15)(0.2764) + (6/15)(0.4392) = 0.34152
– Gain: 0.4384 − 0.34152 = 0.09688

Gain using height: 0.4384 − (2/15)(0.301) = 0.3983

Choose height as the first splitting attribute.
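These numbers are easy to reproduce; the slides use base-10 logarithms. A small sketch (the function names are mine):

```python
import math

def entropy(counts):
    # H = sum p * log10(1/p), computed over nonzero class counts.
    total = sum(counts)
    return sum((c / total) * math.log10(total / c) for c in counts if c > 0)

start = entropy([4, 8, 3])          # Short/Medium/Tall counts for Output1
print(start)                        # ~0.4384

# Gain for a split = starting entropy minus the weighted entropy
# of the subsets the split produces (here: split on gender).
female, male = [3, 6], [1, 2, 3]    # per-gender class counts
weighted = (9/15) * entropy(female) + (6/15) * entropy(male)
print(start - weighted)             # ~0.0969
```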

107CSE 5331/7331 F'2011

C4.5

ID3 favors attributes with a large number of divisions.
Improved version of ID3:
– Missing Data
– Continuous Data
– Pruning
– Rules
– GainRatio: Gain(D,S) divided by the entropy of the split proportions

108CSE 5331/7331 F'2011

CART

Creates a binary tree. Uses entropy.
Formula to choose the split point, s, for node t:

Φ(s|t) = 2 PL PR Σj |P(Cj|tL) − P(Cj|tR)|

PL, PR: probability that a tuple in the training set will be on the left or right side of the tree.

109CSE 5331/7331 F'2011

CART Example

At the start, there are six choices for the split point (right branch on equality):
– P(Gender) = 2(6/15)(9/15)(2/15 + 4/15 + 3/15) = 0.224
– P(1.6) = 0
– P(1.7) = 2(2/15)(13/15)(0 + 8/15 + 3/15) = 0.169
– P(1.8) = 2(5/15)(10/15)(4/15 + 6/15 + 3/15) = 0.385
– P(1.9) = 2(9/15)(6/15)(4/15 + 2/15 + 3/15) = 0.256
– P(2.0) = 2(12/15)(3/15)(4/15 + 8/15 + 3/15) = 0.32

Split at 1.8.

110CSE 5331/7331 F'2011

Classification Using Neural Networks

Typical NN structure for classification:
– One output node per class
– Output value is the class membership function value

Supervised learning: for each tuple in the training set, propagate it through the NN and adjust weights on edges to improve future classification.
Algorithms: Propagation, Backpropagation, Gradient Descent

111CSE 5331/7331 F'2011

NN Issues

Number of source nodes Number of hidden layers Training data Number of sinks Interconnections Weights Activation Functions Learning Technique When to stop learning

112CSE 5331/7331 F'2011

Decision Tree vs. Neural Network

113CSE 5331/7331 F'2011

Propagation

Tuple Input

Output

114CSE 5331/7331 F'2011

NN Propagation Algorithm

115CSE 5331/7331 F'2011

Example Propagation

© Prentice Hall

116CSE 5331/7331 F'2011

NN Learning

Adjust weights to perform better with the associated test data.

Supervised: Use feedback from knowledge of correct classification.

Unsupervised: No knowledge of correct classification needed.

117CSE 5331/7331 F'2011

NN Supervised Learning

118CSE 5331/7331 F'2011

Supervised Learning

Possible error values assuming output from node i is yi but should be di:

Change weights on arcs based on estimated error

119CSE 5331/7331 F'2011

NN Backpropagation

Propagate changes to weights backward from the output layer to the input layer.
Delta Rule: Δwij = c xij (dj − yj)
Gradient Descent: technique to modify the weights in the graph.
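A minimal sketch of the delta rule on a single linear output node (the setup, learning rate, and data are mine, for illustration):

```python
# One delta-rule update: dw_ij = c * x_ij * (d_j - y_j), with y = w . x.
def delta_rule_step(weights, x, d, c=0.1):
    y = sum(w * xi for w, xi in zip(weights, x))          # propagate input
    return [w + c * xi * (d - y) for w, xi in zip(weights, x)]

w = [0.0, 0.0]
for _ in range(50):                 # repeated updates shrink the error
    w = delta_rule_step(w, x=[1.0, 2.0], d=1.0)
print(w)                            # weights now map [1.0, 2.0] close to 1.0
```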

120CSE 5331/7331 F'2011

Backpropagation

Error

121CSE 5331/7331 F'2011

Backpropagation Algorithm

122CSE 5331/7331 F'2011

Gradient Descent

123CSE 5331/7331 F'2011

Gradient Descent Algorithm

124CSE 5331/7331 F'2011

Output Layer Learning

125CSE 5331/7331 F'2011

Hidden Layer Learning

126CSE 5331/7331 F'2011

Types of NNs

Different NN structures used for different problems.

Perceptron Self Organizing Feature Map Radial Basis Function Network

127CSE 5331/7331 F'2011

Perceptron

Perceptron is one of the simplest NNs. No hidden layers.

128CSE 5331/7331 F'2011

Perceptron Example

Suppose:
– Summation: S = 3x1 + 2x2 − 6
– Activation: if S > 0 then 1, else 0
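The slide's perceptron is small enough to run directly; a sketch:

```python
# Perceptron from the slide: S = 3*x1 + 2*x2 - 6, threshold activation.
def perceptron(x1, x2):
    s = 3 * x1 + 2 * x2 - 6        # weighted summation with bias -6
    return 1 if s > 0 else 0       # activation: 1 if S > 0 else 0

print(perceptron(1, 1))   # S = -1 -> 0
print(perceptron(2, 1))   # S =  2 -> 1
```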

129CSE 5331/7331 F'2011

Self Organizing Feature Map (SOFM)

Competitive Unsupervised Learning
Observe how neurons work in the brain:
– Firing impacts the firing of those nearby
– Neurons far apart inhibit each other
– Neurons have specific nonoverlapping tasks
Ex: Kohonen Network

130CSE 5331/7331 F'2011

Kohonen Network

131CSE 5331/7331 F'2011

Kohonen Network

Competitive Layer – viewed as a 2D grid.
Similarity between competitive nodes and input nodes:
– Input: X = <x1, …, xh>
– Weights: <w1i, …, whi>
– Similarity defined based on dot product
The competitive node most similar to the input “wins”; the winning node’s weights (as well as surrounding node weights) are increased.

132CSE 5331/7331 F'2011

Radial Basis Function Network

RBF function has Gaussian shape.
RBF Networks:
– Three layers
– Hidden layer: Gaussian activation function
– Output layer: linear activation function

133CSE 5331/7331 F'2011

Radial Basis Function Network

134CSE 5331/7331 F'2011

Classification Using Rules

Perform classification using If-Then rules.
Classification Rule: r = <a, c>, with antecedent a and consequent c.
May generate rules from other techniques (DT, NN) or generate them directly.
Algorithms: Gen, RX, 1R, PRISM

135CSE 5331/7331 F'2011

Generating Rules from DTs

136CSE 5331/7331 F'2011

Generating Rules Example

137CSE 5331/7331 F'2011

Generating Rules from NNs

138CSE 5331/7331 F'2011

1R Algorithm

139CSE 5331/7331 F'2011

1R Example

140CSE 5331/7331 F'2011

PRISM Algorithm

141CSE 5331/7331 F'2011

PRISM Example

142CSE 5331/7331 F'2011

Decision Tree vs. Rules

Tree has implied order in which splitting is performed.

Tree created based on looking at all classes.

Rules have no ordering of predicates.

Only need to look at one class to generate its rules.

143CSE 5331/7331 F'2011

Clustering Outline

Clustering Problem Overview Clustering Techniques

– Hierarchical Algorithms– Partitional Algorithms– Genetic Algorithm– Clustering Large Databases

Goal: Provide an overview of the clustering problem and introduce some of the basic algorithms

144CSE 5331/7331 F'2011

Clustering Examples

Segment customer database based on similar buying patterns.

Group houses in a town into neighborhoods based on similar features.

Identify new plant species Identify similar Web usage patterns

145CSE 5331/7331 F'2011

Clustering Example

146CSE 5331/7331 F'2011

Clustering Houses

Size Based vs. Geographic Distance Based

147CSE 5331/7331 F'2011

Clustering vs. Classification

No prior knowledge– Number of clusters– Meaning of clusters

Unsupervised learning

148CSE 5331/7331 F'2011

Clustering Issues

Outlier handling Dynamic data Interpreting results Evaluating results Number of clusters Data to be used Scalability

149CSE 5331/7331 F'2011

Impact of Outliers on Clustering

150CSE 5331/7331 F'2011

Clustering Problem

Given a database D={t1,t2,…,tn} of tuples and an integer value k, the Clustering Problem is to define a mapping f: D → {1,…,k} where each ti is assigned to one cluster Kj, 1<=j<=k.

A Cluster, Kj, contains precisely those tuples mapped to it.

Unlike the classification problem, clusters are not known a priori.

151CSE 5331/7331 F'2011

Types of Clustering

Hierarchical – Nested set of clusters created.

Partitional – One set of clusters created.

Incremental – Each element handled one at a time.

Simultaneous – All elements handled together.

Overlapping/Non-overlapping

152CSE 5331/7331 F'2011

Cluster Parameters

153CSE 5331/7331 F'2011

Distance Between Clusters

Single Link: smallest distance between points
Complete Link: largest distance between points
Average Link: average distance between points
Centroid: distance between centroids

154CSE 5331/7331 F'2011

Hierarchical Clustering

Clusters are created in levels, actually creating sets of clusters at each level.
Agglomerative:
– Initially each item is in its own cluster
– Iteratively clusters are merged together
– Bottom up
Divisive:
– Initially all items are in one cluster
– Large clusters are successively divided
– Top down

155CSE 5331/7331 F'2011

Hierarchical Algorithms

Single Link
MST Single Link
Complete Link
Average Link

156CSE 5331/7331 F'2011

Dendrogram

Dendrogram: a tree data structure which illustrates hierarchical clustering techniques.

Each level shows clusters for that level.– Leaf – individual clusters– Root – one cluster

A cluster at level i is the union of its children clusters at level i+1.

157CSE 5331/7331 F'2011

Levels of Clustering

158CSE 5331/7331 F'2011

Agglomerative Example

Distance matrix:

   A  B  C  D  E
A  0  1  2  2  3
B  1  0  2  4  3
C  2  2  0  1  5
D  2  4  1  0  3
E  3  3  5  3  0

(Figure: graph on items A, B, C, D, E and the dendrogram produced by merging at thresholds 1, 2, 3, 4, 5.)

159CSE 5331/7331 F'2011

MST Example

   A  B  C  D  E
A  0  1  2  2  3
B  1  0  2  4  3
C  2  2  0  1  5
D  2  4  1  0  3
E  3  3  5  3  0

(Figure: minimum spanning tree over the items A, B, C, D, E.)

160CSE 5331/7331 F'2011

Agglomerative Algorithm

161CSE 5331/7331 F'2011

Single Link

View all items with links (distances) between them.
Finds maximal connected components in this graph.
Two clusters are merged if there is at least one edge which connects them.
Uses threshold distances at each level.
Could be agglomerative or divisive.

162CSE 5331/7331 F'2011

MST Single Link Algorithm

163CSE 5331/7331 F'2011

Single Link Clustering

164CSE 5331/7331 F'2011

Partitional Clustering

Nonhierarchical. Creates clusters in one step as opposed to several steps.
Since only one set of clusters is output, the user normally has to input the desired number of clusters, k.
Usually deals with static sets.

165CSE 5331/7331 F'2011

Partitional Algorithms

MST Squared Error K-Means Nearest Neighbor PAM BEA GA

166CSE 5331/7331 F'2011

MST Algorithm

167CSE 5331/7331 F'2011

Squared Error

Minimized squared error

168CSE 5331/7331 F'2011

Squared Error Algorithm

169CSE 5331/7331 F'2011

K-Means

Initial set of clusters randomly chosen.
Iteratively, items are moved among sets of clusters until the desired set is reached.
A high degree of similarity among elements in a cluster is obtained.
Given a cluster Ki={ti1,ti2,…,tim}, the cluster mean is mi = (1/m)(ti1 + … + tim).

170CSE 5331/7331 F'2011

K-Means Example

Given: {2,4,10,12,3,20,30,11,25}, k=2
Randomly assign means: m1=3, m2=4
K1={2,3}, K2={4,10,12,20,30,11,25}, m1=2.5, m2=16
K1={2,3,4}, K2={10,12,20,30,11,25}, m1=3, m2=18
K1={2,3,4,10}, K2={12,20,30,11,25}, m1=4.75, m2=19.6
K1={2,3,4,10,11,12}, K2={20,30,25}, m1=7, m2=25
Stop, as the clusters with these means are the same.
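A minimal 1-D k-means sketch that reproduces the example above (the fixed iteration count and names are my simplifications):

```python
# Alternate assignment and mean-update steps, as in the example above.
def kmeans(points, means, iters=10):
    for _ in range(iters):
        clusters = [[] for _ in means]
        for p in points:                      # assign to nearest mean
            i = min(range(len(means)), key=lambda j: abs(p - means[j]))
            clusters[i].append(p)
        means = [sum(c) / len(c) for c in clusters]   # recompute means
    return clusters, means

clusters, means = kmeans([2, 4, 10, 12, 3, 20, 30, 11, 25], means=[3, 4])
print(clusters, means)   # -> [[2, 4, 10, 12, 3, 11], [20, 30, 25]] [7.0, 25.0]
```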

171CSE 5331/7331 F'2011

K-Means Algorithm

172CSE 5331/7331 F'2011

Nearest Neighbor

Items are iteratively merged into the existing clusters that are closest.
Incremental.
A threshold, t, is used to determine if items are added to existing clusters or a new cluster is created.

173CSE 5331/7331 F'2011

Nearest Neighbor Algorithm

174CSE 5331/7331 F'2011

PAM

Partitioning Around Medoids (PAM) (K-Medoids)
Handles outliers well.
Ordering of input does not impact results.
Does not scale well.
Each cluster is represented by one item, called the medoid.
Initial set of k medoids randomly chosen.

175CSE 5331/7331 F'2011

PAM

176CSE 5331/7331 F'2011

PAM Cost Calculation

At each step in the algorithm, medoids are changed if the overall cost is improved.
Cjih – cost change for an item tj associated with swapping medoid ti with non-medoid th.

177CSE 5331/7331 F'2011

PAM Algorithm

178CSE 5331/7331 F'2011

BEA

Bond Energy Algorithm
Database design (physical and logical); vertical fragmentation.
Determine affinity (bond) between attributes based on common usage.
Algorithm outline:
1. Create affinity matrix
2. Convert to BOND matrix
3. Create regions of close bonding

179CSE 5331/7331 F'2011

BEA

Modified from [OV99]

180CSE 5331/7331 F'2011

Genetic Algorithm Example

{A,B,C,D,E,F,G,H}
Randomly choose initial solution: {A,C,E} {B,F} {D,G,H}, or 10101000, 01000100, 00010011
Suppose crossover at point four, choosing the 1st and 3rd individuals: 10100011, 01000100, 00011000
What should the termination criteria be?

181CSE 5331/7331 F'2011

GA Algorithm

182CSE 5331/7331 F'2011

Clustering Large Databases

Most clustering algorithms assume a large data structure which is memory resident.

Clustering may be performed first on a sample of the database then applied to the entire database.

Algorithms– BIRCH– DBSCAN– CURE

183CSE 5331/7331 F'2011

Desired Features for Large Databases

One scan (or less) of DB
Online
Suspendable, stoppable, resumable
Incremental
Work with limited main memory
Different techniques to scan (e.g. sampling)
Process each tuple once

184CSE 5331/7331 F'2011

BIRCH

Balanced Iterative Reducing and Clustering using Hierarchies
Incremental, hierarchical, one scan.
Save clustering information in a tree.
Each entry in the tree contains information about one cluster.
New nodes are inserted in the closest entry in the tree.

185CSE 5331/7331 F'2011

Clustering Feature

CF Triple: (N, LS, SS)
– N: number of points in the cluster
– LS: sum of the points in the cluster
– SS: sum of squares of the points in the cluster

CF Tree:
– Balanced search tree
– Node has a CF triple for each child
– Leaf node represents a cluster and has a CF value for each subcluster in it
– Subcluster has maximum diameter

186CSE 5331/7331 F'2011

BIRCH Algorithm

187CSE 5331/7331 F'2011

Improve Clusters

188CSE 5331/7331 F'2011

DBSCAN

Density Based Spatial Clustering of Applications with Noise
Outliers will not affect the creation of clusters.
Input:
– MinPts – minimum number of points in a cluster
– Eps – for each point in a cluster there must be another point in it less than this distance away

189CSE 5331/7331 F'2011

DBSCAN Density Concepts

Eps-neighborhood: points within Eps distance of a point.
Core point: Eps-neighborhood dense enough (MinPts).
Directly density-reachable: a point p is directly density-reachable from a point q if the distance is small (Eps) and q is a core point.
Density-reachable: a point is density-reachable from another point if there is a path from one to the other consisting of only core points.

190CSE 5331/7331 F'2011

Density Concepts

191CSE 5331/7331 F'2011

DBSCAN Algorithm

192CSE 5331/7331 F'2011

CURE

Clustering Using Representatives
Uses many points to represent a cluster instead of only one.
Points will be well scattered.

193CSE 5331/7331 F'2011

CURE Approach

194CSE 5331/7331 F'2011

CURE Algorithm

195CSE 5331/7331 F'2011

CURE for Large Databases

196CSE 5331/7331 F'2011

Comparison of Clustering Techniques

197CSE 5331/7331 F'2011

Association Rules Outline

Goal: Provide an overview of basic Association Rule mining techniques

Association Rules Problem Overview
– Large itemsets
Association Rules Algorithms
– Apriori
– Sampling
– Partitioning
– Parallel Algorithms
Comparing Techniques
Incremental Algorithms
Advanced AR Techniques

198CSE 5331/7331 F'2011

Example: Market Basket Data

Items frequently purchased together: Bread ⇒ PeanutButter
Uses:
– Placement
– Advertising
– Sales
– Coupons
Objective: increase sales and reduce costs

199CSE 5331/7331 F'2011

Association Rule Definitions

Set of items: I = {I1, I2, …, Im}
Transactions: D = {t1, t2, …, tn}, where tj ⊆ I
Itemset: {Ii1, Ii2, …, Iik} ⊆ I
Support of an itemset: percentage of transactions which contain that itemset.
Large (Frequent) itemset: itemset whose number of occurrences is above a threshold.

200CSE 5331/7331 F'2011

Association Rules Example

I = { Beer, Bread, Jelly, Milk, PeanutButter}

Support of {Bread,PeanutButter} is 60%
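The transaction table behind that 60% figure is an image on the slide; below is a plausible reconstruction of such a five-transaction database (treat the data as illustrative) together with the support computation:

```python
# Five market-basket transactions; 3 of 5 contain {Bread, PeanutButter}.
D = [{"Bread", "Jelly", "PeanutButter"},
     {"Bread", "PeanutButter"},
     {"Bread", "Milk", "PeanutButter"},
     {"Beer", "Bread"},
     {"Beer", "Milk"}]

def support(itemset, D):
    # Fraction of transactions containing every item in the itemset.
    return sum(itemset <= t for t in D) / len(D)

print(support({"Bread", "PeanutButter"}, D))   # -> 0.6
```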

201CSE 5331/7331 F'2011

Association Rule Definitions

Association Rule (AR): implication X ⇒ Y where X, Y ⊆ I and X ∩ Y = ∅
Support of AR (s) X ⇒ Y: percentage of transactions that contain X ∪ Y
Confidence of AR (α) X ⇒ Y: ratio of the number of transactions that contain X ∪ Y to the number that contain X

202CSE 5331/7331 F'2011

Association Rules Ex (cont’d)

203CSE 5331/7331 F'2011

Association Rule Problem

Given a set of items I={I1,I2,…,Im} and a database of transactions D={t1,t2,…,tn} where ti={Ii1,Ii2,…,Iik} and Iij ∈ I, the Association Rule Problem is to identify all association rules X ⇒ Y with a minimum support and confidence.

Link Analysis

NOTE: The support of X ⇒ Y is the same as the support of X ∪ Y.

204CSE 5331/7331 F'2011

Association Rule Techniques

1. Find Large Itemsets.

2. Generate rules from frequent itemsets.

205CSE 5331/7331 F'2011

Algorithm to Generate ARs

206CSE 5331/7331 F'2011

Apriori

Large Itemset Property:

Any subset of a large itemset is large. Contrapositive:

If an itemset is not large,

none of its supersets are large.

207CSE 5331/7331 F'2011

Large Itemset Property

208CSE 5331/7331 F'2011

Apriori Ex (cont’d)

s=30% a = 50%

209CSE 5331/7331 F'2011

Apriori Algorithm

1. C1 = Itemsets of size one in I;

2. Determine all large itemsets of size 1, L1;

3. i = 1;

4. Repeat

5. i = i + 1;

6. Ci = Apriori-Gen(Li-1);

7. Count Ci to determine Li;

8. until no more large itemsets found;
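A compact Python sketch of this loop, with the join step of Apriori-Gen folded in (the set representation and names are mine; the sample database matches the market-basket example above):

```python
# Minimal Apriori: grow candidate sizes until no more large itemsets.
def apriori(D, min_support):
    n = len(D)
    sup = lambda iset: sum(iset <= t for t in D) / n
    items = {frozenset([i]) for t in D for i in t}
    L = {i for i in items if sup(i) >= min_support}      # large 1-itemsets
    all_large = set(L)
    while L:
        k = len(next(iter(L))) + 1
        # Join: union pairs of large (k-1)-itemsets that yield size k.
        C = {a | b for a in L for b in L if len(a | b) == k}
        L = {c for c in C if sup(c) >= min_support}      # count and keep
        all_large |= L
    return all_large

D = [frozenset(t) for t in [{"Bread", "Jelly", "PeanutButter"},
                            {"Bread", "PeanutButter"},
                            {"Bread", "Milk", "PeanutButter"},
                            {"Beer", "Bread"}, {"Beer", "Milk"}]]
print(sorted(tuple(sorted(s)) for s in apriori(D, 0.3)))
```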

210CSE 5331/7331 F'2011

Apriori-Gen

Generates candidates of size i+1 from large itemsets of size i.
Approach used: join large itemsets of size i if they agree on the first i−1 items.
May also prune candidates that have subsets that are not large.

211CSE 5331/7331 F'2011

Apriori-Gen Example

212CSE 5331/7331 F'2011

Apriori-Gen Example (cont’d)

213CSE 5331/7331 F'2011

Apriori Adv/Disadv

Advantages:
– Uses large itemset property.
– Easily parallelized.
– Easy to implement.

Disadvantages:
– Assumes transaction database is memory resident.
– Requires up to m database scans.

214CSE 5331/7331 F'2011

Sampling

Large databases: sample the database and apply Apriori to the sample.
Potentially Large Itemsets (PL): large itemsets from the sample.
Negative Border (BD⁻):
– Generalization of Apriori-Gen applied to itemsets of varying sizes.
– Minimal set of itemsets which are not in PL, but whose subsets are all in PL.

215CSE 5331/7331 F'2011

Negative Border Example

(Figure: PL, and PL together with its negative border BD⁻(PL).)

216CSE 5331/7331 F'2011

Sampling Algorithm

1. Ds = sample of Database D;
2. PL = large itemsets in Ds using smalls;
3. C = PL ∪ BD⁻(PL);
4. Count C in Database using s;
5. ML = large itemsets in BD⁻(PL);
6. If ML = ∅ then done;
7. else C = repeated application of BD⁻;
8. Count C in Database;

217CSE 5331/7331 F'2011

Sampling Example

Find AR assuming s = 20%
Ds = {t1, t2}
smalls = 10%
PL = {{Bread}, {Jelly}, {PeanutButter}, {Bread,Jelly}, {Bread,PeanutButter}, {Jelly,PeanutButter}, {Bread,Jelly,PeanutButter}}
BD⁻(PL) = {{Beer}, {Milk}}
ML = {{Beer}, {Milk}}
Repeated application of BD⁻ generates all remaining itemsets.

218CSE 5331/7331 F'2011

Sampling Adv/Disadv

Advantages:
– Reduces the number of database scans to one in the best case and two in the worst.
– Scales better.

Disadvantages:
– Potentially large number of candidates in the second pass.

219CSE 5331/7331 F'2011

Partitioning

Divide the database into partitions D1, D2, …, Dp.
Apply Apriori to each partition.
Any large itemset must be large in at least one partition.

220CSE 5331/7331 F'2011

Partitioning Algorithm

1. Divide D into partitions D1,D2,…,Dp;

2. For I = 1 to p do

3. Li = Apriori(Di);

4. C = L1 ∪ … ∪ Lp;

5. Count C on D to generate L;

221CSE 5331/7331 F'2011

Partitioning Example

D1

D2

S=10%

L1 ={{Bread}, {Jelly}, {PeanutButter}, {Bread,Jelly}, {Bread,PeanutButter}, {Jelly, PeanutButter}, {Bread,Jelly,PeanutButter}}

L2 ={{Bread}, {Milk}, {PeanutButter}, {Bread,Milk}, {Bread,PeanutButter}, {Milk, PeanutButter}, {Bread,Milk,PeanutButter}, {Beer}, {Beer,Bread}, {Beer,Milk}}

222CSE 5331/7331 F'2011

Partitioning Adv/Disadv

Advantages:
– Adapts to available main memory.
– Easily parallelized.
– Maximum number of database scans is two.

Disadvantages:
– May have many candidates during the second scan.

223CSE 5331/7331 F'2011

Parallelizing AR Algorithms

Based on Apriori Techniques differ:

– What is counted at each site– How data (transactions) are distributed

Data Parallelism– Data partitioned– Count Distribution Algorithm

Task Parallelism– Data and candidates partitioned– Data Distribution Algorithm

224CSE 5331/7331 F'2011

Count Distribution Algorithm (CDA)

1. Place data partition at each site.
2. In parallel at each site do
3. C1 = itemsets of size one in I;
4. Count C1;
5. Broadcast counts to all sites;
6. Determine global large itemsets of size 1, L1;
7. i = 1;
8. Repeat
9. i = i + 1;
10. Ci = Apriori-Gen(Li-1);
11. Count Ci;
12. Broadcast counts to all sites;
13. Determine global large itemsets of size i, Li;
14. until no more large itemsets found;

225CSE 5331/7331 F'2011

CDA Example

226CSE 5331/7331 F'2011

Data Distribution Algorithm (DDA)

1. Place data partition at each site.
2. In parallel at each site do
3. Determine local candidates of size 1 to count;
4. Broadcast local transactions to other sites;
5. Count local candidates of size 1 on all data;
6. Determine large itemsets of size 1 for local candidates;
7. Broadcast large itemsets to all sites;
8. Determine L1;
9. i = 1;
10. Repeat
11. i = i + 1;
12. Ci = Apriori-Gen(Li-1);
13. Determine local candidates of size i to count;
14. Count, broadcast, and find Li;
15. until no more large itemsets found;

227CSE 5331/7331 F'2011

DDA Example

228CSE 5331/7331 F'2011

Comparing AR Techniques

Target
Type
Data Type
Data Source
Technique
Itemset Strategy and Data Structure
Transaction Strategy and Data Structure
Optimization
Architecture
Parallelism Strategy

229CSE 5331/7331 F'2011

Comparison of AR Techniques

230CSE 5331/7331 F'2011

Hash Tree

231CSE 5331/7331 F'2011

Incremental Association Rules

Generate ARs in a dynamic database.
Problem: algorithms assume a static database.
Objective:
– Know large itemsets for D
– Find large itemsets for D ∪ ΔD
An itemset must be large in either D or ΔD.
Save the Li and their counts.

232CSE 5331/7331 F'2011

Note on ARs

Many applications outside market basket data analysis:
– Prediction (telecom switch failure)
– Web usage mining
Many different types of association rules:
– Temporal
– Spatial
– Causal

233CSE 5331/7331 F'2011

Advanced AR Techniques

Generalized Association Rules Multiple-Level Association Rules Quantitative Association Rules Using multiple minimum supports Correlation Rules

234CSE 5331/7331 F'2011

Measuring Quality of Rules

Support Confidence Interest Conviction Chi Squared Test

235CSE 5331/7331 F'2011

Data Mining Outline

PART I - Introduction PART II – Core Topics

– Classification– Clustering– Association Rules

PART III – Related Topics

236CSE 5331/7331 F'2011

Related Topics Outline

Database/OLTP Systems
Fuzzy Sets and Logic
Information Retrieval (Web Search Engines)
Dimensional Modeling
Data Warehousing
OLAP/DSS
Statistics
Machine Learning
Pattern Matching

Goal: Examine some areas which are related to data mining.

237CSE 5331/7331 F'2011

DB & OLTP Systems

Schema:
– (ID, Name, Address, Salary, JobNo)
Data Model:
– ER
– Relational
Transaction
Query:

SELECT Name
FROM T
WHERE Salary > 100000

DM: Only imprecise queries

CSE 5331/7331 F'2011 238

Fuzzy Sets Outline

Introduction/Overview

Material for these slides obtained from:
Data Mining Introductory and Advanced Topics by Margaret H. Dunham, http://www.engr.smu.edu/~mhd/book
Introduction to “Type-2 Fuzzy Logic” by Jenny Carter, http://www.cse.dmu.ac.uk/~jennyc/

239CSE 5331/7331 F'2011

Fuzzy Sets and Logic

Fuzzy Set: set membership function is a real-valued function with output in the range [0,1].
f(x): probability x is in F; 1 − f(x): probability x is not in F.
Ex:
– T = {x | x is a person and x is tall}
– Let f(x) be the probability that x is tall
– Here f is the membership function

DM: Prediction and classification are fuzzy.

240CSE 5331/7331 F'2011

Fuzzy Sets and Logic

Fuzzy Set: set membership function is a real-valued function with output in the range [0,1].
f(x): probability x is in F; 1 − f(x): probability x is not in F.
Ex:
– T = {x | x is a person and x is tall}
– Let f(x) be the probability that x is tall
– Here f is the membership function

241CSE 5331/7331 F'2011

Fuzzy Sets

242CSE 5331/7331 F'2011

IR is Fuzzy

Simple Fuzzy

Accept Accept

RejectReject

243CSE 5331/7331 F'2011

Fuzzy Set Theory

A fuzzy subset A of U is characterized by a membership function
μ(A,u): U → [0,1]
which associates with each element u of U a number μ(u) in the interval [0,1].

Definition:
– Let A and B be two fuzzy subsets of U. Also, let ¬A be the complement of A. Then:
» μ(¬A,u) = 1 − μ(A,u)
» μ(A∪B,u) = max(μ(A,u), μ(B,u))
» μ(A∩B,u) = min(μ(A,u), μ(B,u))

244CSE 5331/7331 F'2011

The world is imprecise. Mathematical and statistical techniques are often unsatisfactory.
– Experts make decisions with imprecise data in an uncertain world.
– They work with knowledge that is rarely defined mathematically or algorithmically, but uses vague terminology with words.
Fuzzy logic is able to use vagueness to achieve a precise answer. By considering shades of grey and all factors simultaneously, you get a better answer, one that is more suited to the situation.

© Jenny Carter

245CSE 5331/7331 F'2011

Fuzzy Logic then . . . is particularly good at handling uncertainty, vagueness and imprecision. It is especially useful where a problem can be described linguistically (using words). Applications include:
– robotics
– washing machine control
– nuclear reactors
– focusing a camcorder
– information retrieval
– train scheduling

© Jenny Carter

246CSE 5331/7331 F'2011

Crisp Sets

Different heights have same ‘tallness’

© Jenny Carter

247CSE 5331/7331 F'2011

Fuzzy Sets

The shape you see is known as the membership function

© Jenny Carter

248CSE 5331/7331 F'2011

Fuzzy Sets

Shows two membership functions: ‘tall’ and ‘short’

© Jenny Carter

249CSE 5331/7331 F'2011

Notation

For a member, x, of a discrete set with membership µ we use the notation µ/x. In other words, x is a member of the set to degree µ. Discrete sets are written as:

A = µ1/x1 + µ2/x2 + … + µn/xn

or, equivalently, as a sum over the members, where x1, x2, …, xn are members of the set A and µ1, µ2, …, µn are their degrees of membership. A continuous fuzzy set A is written as an integral over the universe: A = ∫ µA(x)/x.

250CSE 5331/7331 F'2011

Fuzzy Sets

The members of a fuzzy set are members to some degree, known as a membership grade or degree of membership.
The membership grade is the degree of belonging to the fuzzy set. The larger the number (in [0,1]) the greater the degree of belonging. (N.B. This is not a probability.)
The translation from x to µA(x) is known as fuzzification.
A fuzzy set is either continuous or discrete.
Graphical representation of membership functions is very useful.

© Jenny Carter

251CSE 5331/7331 F'2011

Fuzzy Sets - Example

Again, notice the overlapping of the sets reflecting the real worldmore accurately than if we were using a traditional approach.

© Jenny Carter

252CSE 5331/7331 F'2011

Rules

Rules are often of the form:

IF x is A THEN y is B

where A and B are fuzzy sets defined on the universes of discourse X and Y respectively.
– if pressure is high then volume is small
– if a tomato is red then a tomato is ripe
where high, small, red and ripe are fuzzy sets.

© Jenny Carter

CSE 5331/7331 F'2011 253

Information Retrieval Outline

Introduction/Overview

Material for these slides obtained from:
Modern Information Retrieval by Ricardo Baeza-Yates and Berthier Ribeiro-Neto, http://www.sims.berkeley.edu/~hearst/irbook/
Data Mining Introductory and Advanced Topics by Margaret H. Dunham, http://www.engr.smu.edu/~mhd/book

254CSE 5331/7331 F'2011

Information Retrieval

Information Retrieval (IR): retrieving desired information from textual data.

Library Science Digital Libraries Web Search Engines Traditionally keyword based Sample query:

Find all documents about “data mining”.

DM: Similarity measures; Mine text/Web data.

255CSE 5331/7331 F'2011

Information Retrieval

Information Retrieval (IR): retrieving desired information from textual data.

Library Science Digital Libraries Web Search Engines Traditionally keyword based Sample query:

Find all documents about “data mining”.

256CSE 5331/7331 F'2011

DB vs IR

Records (tuples) vs. documents
Well-defined results vs. fuzzy results
DB grew out of files and traditional business systems.
IR grew out of library science and the need to categorize/group/access books/articles.

257CSE 5331/7331 F'2011

DB vs IR (cont’d)

Data retrieval: which docs contain a set of keywords? Well-defined semantics; a single erroneous object implies failure!
Information retrieval: information about a subject or topic; semantics is frequently loose; small errors are tolerated.
IR system: interprets the contents of information items and generates a ranking which reflects relevance; the notion of relevance is most important.

258CSE 5331/7331 F'2011

Motivation

IR in the last 20 years: classification and categorization; systems and languages; user interfaces and visualization.
Still, the area was seen as of narrow interest. The advent of the Web changed this perception once and for all:
– universal repository of knowledge
– free (low cost) universal access
– no central editorial board
– many problems though: IR seen as key to finding the solutions!

259CSE 5331/7331 F'2011

Basic Concepts

Logical view of the documents.

Document representation is viewed as a continuum: the logical view of docs might shift from the full text toward a set of index terms.

(Figure: pipeline from Docs through structure recognition, accents/spacing/stopword removal, noun groups, stemming, and manual indexing, moving from full text to index terms.)

© Baeza-Yates and Ribeiro-Neto

260CSE 5331/7331 F'2011

The Retrieval Process

(Figure: the user interface takes the user need; text operations produce a logical view; query operations build the query, which is searched against the inverted-file index; retrieved docs are ranked, with user feedback refining the query. Indexing builds the index from the text database, managed by the DB Manager Module.)

© Baeza-Yates and Ribeiro-Neto

261CSE 5331/7331 F'2011

Information Retrieval

Similarity: measure of how close a query is to a document.
Documents which are “close enough” are retrieved.
Metrics:
– Precision = |Relevant and Retrieved| / |Retrieved|
– Recall = |Relevant and Retrieved| / |Relevant|
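Both metrics are straightforward set computations; a minimal sketch with hypothetical document IDs:

```python
# Precision and recall for one query, as defined above.
def precision_recall(relevant, retrieved):
    hit = len(relevant & retrieved)        # relevant and retrieved
    return hit / len(retrieved), hit / len(relevant)

relevant = {"d1", "d2", "d3", "d4"}        # hypothetical judgments
retrieved = {"d1", "d2", "d5"}
print(precision_recall(relevant, retrieved))   # -> roughly (0.667, 0.5)
```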

262CSE 5331/7331 F'2011

Indexing

IR systems usually adopt index terms to process queries.
Index term:
– a keyword or group of selected words
– any word (more general)
Stemming might be used:
– connect: connecting, connection, connections
An inverted file is built for the chosen index terms.

© Baeza-Yates and Ribeiro-Neto

263CSE 5331/7331 F'2011

Indexing

(Figure: docs are indexed into index terms; an information need is expressed as a query; ranking matches the query against the doc index terms.)

© Baeza-Yates and Ribeiro-Neto

264CSE 5331/7331 F'2011

Inverted Files

There are two main elements:
– vocabulary – set of unique terms
– occurrences – where those terms appear

The occurrences can be recorded as term or byte offsets.
Using term offsets is good to retrieve concepts such as proximity, whereas byte offsets allow direct access.

(Table: Vocabulary | Occurrences (byte offset).)

© Baeza-Yates and Ribeiro-Neto

265CSE 5331/7331 F'2011

Inverted Files

The number of indexed terms is often several orders of magnitude smaller when compared to the document size (MBs vs GBs).
The space consumed by the occurrence list is not trivial: each time a term appears it must be added to a list in the inverted file.
That may lead to quite a considerable index overhead.

© Baeza-Yates and Ribeiro-Neto

266CSE 5331/7331 F'2011

Example Text (word offsets 1, 6, 12, 16, 18, 25, 29, 36, 40, 45, 54, 58, 66, 70):

That house has a garden. The garden has many flowers. The flowers are beautiful

Inverted file:

Vocabulary   Occurrences
beautiful    70
flowers      45, 58
garden       18, 29
house        6

© Baeza-Yates and Ribeiro-Neto
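Building such an inverted file is a one-pass scan over the text; a minimal sketch (offsets are computed mechanically, so they may differ by a character or two from the slide's hand count):

```python
import re

text = ("That house has a garden. The garden has many flowers. "
        "The flowers are beautiful")

# Map each term to the 1-based offsets at which it occurs.
inverted = {}
for m in re.finditer(r"\w+", text):
    inverted.setdefault(m.group().lower(), []).append(m.start() + 1)

for term in ("beautiful", "flowers", "garden", "house"):
    print(term, inverted[term])
```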

267CSE 5331/7331 F'2011

Ranking

A ranking is an ordering of the documents retrieved that (hopefully) reflects the relevance of the documents to the query.
A ranking is based on fundamental premises regarding the notion of relevance, such as:
– common sets of index terms
– sharing of weighted terms
– likelihood of relevance
Each set of premises leads to a distinct IR model.
© Baeza-Yates and Ribeiro-Neto

268CSE 5331/7331 F'2011

Classic IR Models - Basic Concepts

Each document represented by a set of representative keywords or index terms

An index term is a document word useful for remembering the document main themes

Usually, index terms are nouns because nouns have meaning by themselves

However, search engines assume that all words are index terms (full text representation)

© Baeza-Yates and Ribeiro-Neto

269CSE 5331/7331 F'2011

Classic IR Models - Basic Concepts

The importance of the index terms is represented by weights associated with them:
– ki: an index term
– dj: a document
– wij: a weight associated with (ki, dj)
The weight wij quantifies the importance of the index term for describing the document contents.

© Baeza-Yates and Ribeiro-Neto

270CSE 5331/7331 F'2011

Classic IR Models - Basic Concepts

– t is the total number of index terms
– K = {k1, k2, …, kt} is the set of all index terms
– wij >= 0 is a weight associated with (ki, dj)
– wij = 0 indicates that the term does not belong to the doc
– dj = (w1j, w2j, …, wtj) is a weighted vector associated with the document dj
– gi(dj) = wij is a function which returns the weight associated with the pair (ki, dj)

© Baeza-Yates and Ribeiro-Neto

271CSE 5331/7331 F'2011

The Boolean Model

Simple model based on set theory.
Queries specified as Boolean expressions:
– precise semantics and neat formalism
Terms are either present or absent; thus wij ∈ {0,1}.
Consider:
– q = ka ∧ (kb ∨ ¬kc)
– qdnf = (1,1,1) ∨ (1,1,0) ∨ (1,0,0)
– qcc = (1,1,0) is a conjunctive component

© Baeza-Yates and Ribeiro-Neto

272CSE 5331/7331 F'2011

The Vector Model

Use of binary weights is too limiting.
Non-binary weights provide consideration for partial matches.
These term weights are used to compute a degree of similarity between a query and each document.
A ranked set of documents provides for better matching.

© Baeza-Yates and Ribeiro-Neto

273CSE 5331/7331 F'2011

The Vector Model

wij > 0 whenever ki appears in dj
wiq >= 0 is associated with the pair (ki, q)
dj = (w1j, w2j, …, wtj)
q = (w1q, w2q, …, wtq)
To each term ki is associated a unitary vector i. The unitary vectors i and j are assumed to be orthonormal (i.e., index terms are assumed to occur independently within the documents).
The t unitary vectors form an orthonormal basis for a t-dimensional space where queries and documents are represented as weighted vectors.

© Baeza-Yates and Ribeiro-Neto

274CSE 5331/7331 F'2011

Query Languages

Keyword Based Boolean Weighted Boolean Context Based (Phrasal & Proximity) Pattern Matching Structural Queries

© Baeza-Yates and Ribeiro-Neto

275CSE 5331/7331 F'2011

Keyword Based Queries

Basic Queries– Single word– Multiple words

Context Queries– Phrase– Proximity

© Baeza-Yates and Ribeiro-Neto

276CSE 5331/7331 F'2011

Boolean Queries

Keywords combined with Boolean operators:
– OR: (e1 OR e2)
– AND: (e1 AND e2)
– BUT: (e1 BUT e2), satisfying e1 but not e2
Negation is only allowed using BUT, to allow efficient use of the inverted index by filtering another efficiently retrievable set.
Naïve users have trouble with Boolean logic.

© Baeza-Yates and Ribeiro-Neto

277CSE 5331/7331 F'2011

Boolean Retrieval with Inverted Indices

Primitive keyword: Retrieve containing documents using the inverted index.

OR: Recursively retrieve e1 and e2 and take union of results.

AND: Recursively retrieve e1 and e2 and take intersection of results.

BUT: Recursively retrieve e1 and e2 and take set difference of results.

© Baeza-Yates and Ribeiro-Neto

278CSE 5331/7331 F'2011

Phrasal Queries

Retrieve documents with a specific phrase (ordered list of contiguous words)– “information theory”

May allow intervening stop words and/or stemming.– “buy camera” matches:

“buy a camera” “buying the cameras” etc.

© Baeza-Yates and Ribeiro-Neto

279CSE 5331/7331 F'2011

Phrasal Retrieval with Inverted Indices

Must have an inverted index that also stores positions of each keyword in a document.

Retrieve documents and positions for each individual word, intersect documents, and then finally check for ordered contiguity of keyword positions.

Best to start contiguity check with the least common word in the phrase.

© Baeza-Yates and Ribeiro-Neto

280CSE 5331/7331 F'2011

Proximity Queries

List of words with specific maximal distance constraints between terms.

Example: “dogs” and “race” within 4 words match “…dogs will begin the race…”

May also perform stemming and/or not count stop words.

© Baeza-Yates and Ribeiro-Neto
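
A minimal sketch of the proximity test itself, given the word offsets of each term in one document (hypothetical positions):

def within(pos1, pos2, k):
    # true if some occurrence of each term lies within k words of the other
    return any(abs(p1 - p2) <= k for p1 in pos1 for p2 in pos2)

# "dogs will begin the race": "dogs" at offset 0, "race" at offset 4
print(within([0], [4], 4))  # True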

281CSE 5331/7331 F'2011

Pattern Matching

Allow queries that match strings rather than word tokens.

Requires more sophisticated data structures and algorithms than inverted indices to retrieve efficiently.

© Baeza-Yates and Ribeiro-Neto

282CSE 5331/7331 F'2011

Simple Patterns

Prefixes: Pattern that matches the start of a word.
– “anti” matches “antiquity”, “antibody”, etc.

Suffixes: Pattern that matches the end of a word.
– “ix” matches “fix”, “matrix”, etc.

Substrings: Pattern that matches a contiguous sequence of characters within a word.
– “rapt” matches “enrapture”, “velociraptor”, etc.

Ranges: Pair of strings that matches any word lexicographically (alphabetically) between them.
– “tin” to “tix” matches “tip”, “tire”, “title”, etc.

© Baeza-Yates and Ribeiro-Neto
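
A minimal sketch of the four pattern types over a small hypothetical vocabulary:

words = ["antibody", "antiquity", "fix", "matrix", "enrapture", "tip", "tire", "title"]

prefixes   = [w for w in words if w.startswith("anti")]
suffixes   = [w for w in words if w.endswith("ix")]
substrings = [w for w in words if "rapt" in w]
ranges     = [w for w in words if "tin" <= w <= "tix"]  # lexicographic comparison

print(prefixes, suffixes, substrings, ranges)
# ['antibody', 'antiquity'] ['fix', 'matrix'] ['enrapture'] ['tip', 'tire', 'title']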

283CSE 5331/7331 F'2011

IR Query Result Measures and Classification

IR Classification
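
The two standard measures of an IR query result are precision and recall. A minimal sketch, assuming the retrieved and relevant documents are given as sets of document IDs:

def precision_recall(retrieved, relevant):
    # precision = |retrieved AND relevant| / |retrieved|
    # recall    = |retrieved AND relevant| / |relevant|
    hits = len(retrieved & relevant)
    return hits / len(retrieved), hits / len(relevant)

print(precision_recall({1, 2, 3, 4}, {2, 4, 5}))  # (0.5, 0.666...)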

284CSE 5331/7331 F'2011

Dimensional Modeling

View data in a hierarchical manner, more as business executives might
Useful in decision support systems and mining
Dimension: collection of logically related attributes; an axis for modeling data
Facts: the data stored
Ex: Dimensions – products, locations, date; Facts – quantity, unit price

DM: May view data as dimensional.


286CSE 5331/7331 F'2011

Aggregation Hierarchies

287CSE 5331/7331 F'2011

Multidimensional Schemas

Star Schema shows facts and dimensions
– Center of the star has the facts, shown in fact tables
– Outside of the facts, each dimension is shown separately in dimension tables
– Access to the fact table from a dimension table is via join:

SELECT Quantity, Price
FROM Facts, Location
WHERE (Facts.LocationID = Location.LocationID)
  AND (Location.City = 'Dallas')

– Viewed as relations; the problems are the volume of data and indexing

288CSE 5331/7331 F'2011

Star Schema

289CSE 5331/7331 F'2011

Flattened Star

290CSE 5331/7331 F'2011

Normalized Star

291CSE 5331/7331 F'2011

Snowflake Schema

292CSE 5331/7331 F'2011

OLAP

Online Analytic Processing (OLAP): provides more complex queries than OLTP.
OnLine Transaction Processing (OLTP): traditional database/transaction processing.
Dimensional data; cube view
Visualization of operations:
– Slice: examine a sub-cube
– Dice: rotate the cube to look at another dimension
– Roll Up/Drill Down

DM: May use OLAP queries.

293CSE 5331/7331 F'2011

OLAP Introduction

OLAP by Example: http://perso.orange.fr/bernard.lupin/english/index.htm
What is OLAP? http://www.olapreport.com/fasmi.htm

294CSE 5331/7331 F'2011

OLAP

Online Analytic Processing (OLAP): provides more complex queries than OLTP.
OnLine Transaction Processing (OLTP): traditional database/transaction processing.
Dimensional data; cube view
Support ad hoc querying
Require analysis of data
Can be thought of as an extension of some of the basic aggregation functions available in SQL
OLAP tools may be used in DSS systems
The multidimensional view is fundamental

295CSE 5331/7331 F'2011

OLAP Implementations

MOLAP (Multidimensional OLAP)
– Multidimensional Database (MDD)
– Specialized DBMS and software system capable of supporting the multidimensional data directly
– Data stored as an n-dimensional array (cube)
– Indexes used to speed up processing

ROLAP (Relational OLAP)
– Data stored in a relational database
– ROLAP server (middleware) creates the multidimensional view for the user
– Less complex; less efficient

HOLAP (Hybrid OLAP)
– Data not updated frequently – MDD
– Data updated frequently – RDB

296CSE 5331/7331 F'2011

OLAP Operations

Single Cell
Multiple Cells
Slice
Dice

Roll Up

Drill Down

297CSE 5331/7331 F'2011

OLAP Operations

Simple query – single cell in the cube
Slice – look at a subcube to get more specific information
Dice – rotate the cube to look at another dimension
Roll Up – dimension reduction; aggregation
Drill Down – move to more detailed data
Visualization: these operations allow OLAP users to actually “see” the results of an operation.
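
A minimal sketch of these operations on a 3-d cube with NumPy; the axes (product x location x date) and the quantities are hypothetical:

import numpy as np

cube = np.arange(24).reshape(2, 3, 4)  # 2 products x 3 locations x 4 dates

single_cell = cube[1, 2, 3]     # simple query: one cell of the cube
subcube     = cube[:, 1, :]     # slice: fix the location dimension
rolled_up   = cube.sum(axis=2)  # roll up: aggregate away the date dimension
print(single_cell, subcube.shape, rolled_up.shape)  # 23 (2, 4) (2, 3)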

298CSE 5331/7331 F'2011

Relationship Between Topics

299CSE 5331/7331 F'2011

Decision Support Systems

Tools and computer systems that assist management in decision making
“What if” types of questions
High-level decisions
Data warehouse – data which supports DSS

300CSE 5331/7331 F'2011

Unified Dimensional Model

Microsoft Cube View – SQL Server 2005:
http://msdn2.microsoft.com/en-us/library/ms345143.aspx
http://cwebbbi.spaces.live.com/Blog/cns!1pi7ETChsJ1un_2s41jm9Iyg!325.entry
MDX – AS2005:
http://msdn2.microsoft.com/en-us/library/aa216767(SQL.80).aspx

301CSE 5331/7331 F'2011

Data Warehousing

“Subject-oriented, integrated, time-variant, nonvolatile” – William Inmon

Operational Data: Data used in day to day needs of company.

Informational Data: Supports other functions such as planning and forecasting.

Data mining tools often access data warehouses rather than operational data.

DM: May access data in warehouse.

302CSE 5331/7331 F'2011

Operational vs. Informational

               Operational Data     Data Warehouse
Application    OLTP                 OLAP
Use            Precise queries      Ad hoc
Temporal       Snapshot             Historical
Modification   Dynamic              Static
Orientation    Application          Business
Data           Operational values   Integrated
Size           Gigabits             Terabits
Level          Detailed             Summarized
Access         Often                Less often
Response       Few seconds          Minutes
Data Schema    Relational           Star/Snowflake

303CSE 5331/7331 F'2011

Statistics

Simple descriptive models
Statistical inference: generalizing a model created from a sample of the data to the entire dataset
Exploratory Data Analysis:
– Data can actually drive the creation of the model
– Opposite of the traditional statistical view
Data mining is targeted to the business user

DM: Many data mining methods come from statistical techniques.

304CSE 5331/7331 F'2011

Pattern Matching (Recognition)

Pattern Matching: finds occurrences of a predefined pattern in the data.

Applications include speech recognition, information retrieval, time series analysis.

DM: Type of classification.

305CSE 5331/7331 F'2011

Image Mining Outline

Image Mining – What is it?
Feature Extraction
Shape Detection
Color Techniques
Video Mining
Facial Recognition
Bioinformatics

306CSE 5331/7331 F'2011

The 2000 ozone hole over the Antarctic as seen by EPTOMS: http://jwocky.gsfc.nasa.gov/multi/multi.html#hole

307CSE 5331/7331 F'2011

Image Mining – What is it?

Image Retrieval
Image Classification
Image Clustering
Video Mining
Applications
– Bioinformatics
– Geology/Earth Science
– Security
– …

308CSE 5331/7331 F'2011

Feature Extraction

Identify the major components of an image:
Color
Texture
Shape
Spatial relationships

Feature Extraction & Image Processing: http://users.ecs.soton.ac.uk/msn/book/
Feature Extraction Tutorial: http://facweb.cs.depaul.edu/research/vc/VC_Workshop/presentations/pdf/daniela_tutorial2.pdf

309CSE 5331/7331 F'2011

Shape Detection

Boundary/Edge Detection
Time Series – Eamonn Keogh: http://www.engr.smu.edu/~mhd/8337sp07/shapes.ppt

310CSE 5331/7331 F'2011

Color Techniques

Color Representations
– RGB: http://en.wikipedia.org/wiki/Rgb
– HSV: http://en.wikipedia.org/wiki/HSV_color_space
Color Histogram
Color Anglogram: http://www.cs.sunysb.edu/~rzhao/publications/VideoDB.pdf
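
A minimal sketch of a color histogram, which summarizes an image by counting pixels per quantized RGB bin; the 2x2 “image” is hypothetical data:

import numpy as np

img = np.array([[[255, 0, 0], [0, 255, 0]],
                [[0, 0, 255], [250, 5, 5]]], dtype=np.uint8)

levels = img // 128                                                # 2 levels per channel
codes = levels[..., 0] * 4 + levels[..., 1] * 2 + levels[..., 2]   # 8 bins total
hist = np.bincount(codes.ravel(), minlength=8)
print(hist)  # the two reddish pixels share a bin; images are compared by histogram distance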

311CSE 5331/7331 F'2011

What is Similarity?

(c) Eamonn Keogh, [email protected]

312CSE 5331/7331 F'2011

Video Mining

Boundaries between shots
Movement between frames
ANSES: http://mmir.doc.ic.ac.uk/demos/anses.html

313CSE 5331/7331 F'2011

Facial Recognition

Based upon features in the face
Convert the face to a feature vector
Less invasive than other biometric techniques
http://www.face-rec.org
http://computer.howstuffworks.com/facial-recognition.htm
SIMS: http://www.casinoincidentreporting.com/Products.aspx

314CSE 5331/7331 F'2011

Microarray Data Analysis

Each probe location is associated with a gene
Measure the amount of mRNA
Color indicates the degree of gene expression
Compare different samples (normal/disease)
Track the same sample over time
Questions:
– Which genes are related to this disease?
– Which genes behave in a similar manner?
– What is the function of a gene?
Clustering (see the sketch below)
– Hierarchical
– K-means
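
A minimal sketch of k-means clustering of gene-expression profiles with scikit-learn; the expression matrix is random stand-in data, not a real microarray:

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
expression = rng.normal(size=(100, 6))   # 100 genes x 6 samples

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(expression)
print(km.labels_[:10])   # cluster label per gene; co-clustered genes behave similarly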

315CSE 5331/7331 F'2011

Affymetrix GeneChip® Array

http://www.affymetrix.com/corporate/outreach/lesson_plan/educator_resources.affx

316CSE 5331/7331 F'2011

Microarray Data - Clustering

"Gene expression profiling identifies clinically relevant subtypes of prostate cancer"

Proc. Natl. Acad. Sci. USA, Vol. 101, Issue 3, 811-816, January 20, 2004

317CSE 5331/7331 F'2011