CSE 5331/7331 Fall 2009
DATA MINING: Introductory and Related Topics

Margaret H. Dunham
Department of Computer Science and Engineering
Southern Methodist University

Slides extracted from Data Mining, Introductory and Advanced Topics, Prentice Hall, 2002.


Data Mining Outline

PART I
– Introduction
– Techniques
PART II – Core Topics
PART III – Related Topics

Introduction Outline

– Define data mining
– Data mining vs. databases
– Basic data mining tasks
– Data mining development
– Data mining issues

Goal: Provide an overview of data mining.

Introduction

– Data is growing at a phenomenal rate
– Users expect more sophisticated information
– How?

UNCOVER HIDDEN INFORMATION

DATA MINING

Data Mining Definition

– Finding hidden information in a database
– Fit data to a model
– Similar terms:
  – Exploratory data analysis
  – Data driven discovery
  – Deductive learning

Data Mining Algorithm

– Objective: fit data to a model
  – Descriptive
  – Predictive
– Preference – technique to choose the best model
– Search – technique to search the data
  – "Query"

Database Processing vs. Data Mining Processing

          Database processing           Data mining processing
  Query   Well defined; SQL             Poorly defined; no precise query language
  Data    Operational data              Not operational data
  Output  Precise; subset of database   Fuzzy; not a subset of database

Query Examples

Database:
– Find all credit applicants with last name of Smith.
– Identify customers who have purchased more than $10,000 in the last month.
– Find all customers who have purchased milk.

Data Mining:
– Find all credit applicants who are poor credit risks. (classification)
– Identify customers with similar buying habits. (clustering)
– Find all items which are frequently purchased with milk. (association rules)

Data Mining Models and Tasks

Basic Data Mining Tasks

– Classification maps data into predefined groups or classes.
  – Supervised learning
  – Pattern recognition
  – Prediction
– Regression is used to map a data item to a real valued prediction variable.
– Clustering groups similar data together into clusters.
  – Unsupervised learning
  – Segmentation
  – Partitioning

Basic Data Mining Tasks (cont'd)

– Summarization maps data into subsets with associated simple descriptions.
  – Characterization
  – Generalization
– Link Analysis uncovers relationships among data.
  – Affinity analysis
  – Association rules
  – Sequential analysis determines sequential patterns.

Ex: Time Series Analysis

– Example: stock market
– Predict future values
– Determine similar patterns over time
– Classify behavior

Data Mining vs. KDD

– Knowledge Discovery in Databases (KDD): the process of finding useful information and patterns in data.
– Data Mining: the use of algorithms to extract the information and patterns derived by the KDD process.

KDD Process

– Selection: obtain data from various sources.
– Preprocessing: cleanse data.
– Transformation: convert to a common format; transform to a new format.
– Data Mining: obtain desired results.
– Interpretation/Evaluation: present results to the user in a meaningful manner.

Modified from [FPSS96C]

KDD Process Ex: Web Log

– Selection:
  – Select log data (dates and locations) to use
– Preprocessing:
  – Remove identifying URLs
  – Remove error logs
– Transformation:
  – Sessionize (sort and group)
– Data Mining:
  – Identify and count patterns
  – Construct data structure
– Interpretation/Evaluation:
  – Identify and display frequently accessed sequences.
– Potential user applications:
  – Cache prediction
  – Personalization
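To make these steps concrete, here is a minimal Python sketch of the web-log pipeline. The log format, the 30-minute session gap, and all data values are illustrative assumptions, not details from the slides.

```python
from collections import Counter
from itertools import groupby

# Toy log: (user, timestamp in seconds, URL). The format is an assumption.
raw = [("u1", 100, "/home"), ("u1", 110, "/cse"), ("u2", 120, "/home"),
       ("u1", 4000, "/home"), ("u1", 4010, "/cse")]

# Selection/preprocessing: keep only the fields needed, drop error entries.
events = [e for e in raw if not e[2].startswith("/error")]

# Transformation: "sessionize" -- sort per user, split on 30-minute gaps.
sessions = []
for user, group in groupby(sorted(events), key=lambda e: e[0]):
    session, last_t = [], None
    for _, t, url in group:
        if last_t is not None and t - last_t > 1800:
            sessions.append(tuple(session))
            session = []
        session.append(url)
        last_t = t
    sessions.append(tuple(session))

# Data mining: count patterns (here, whole session sequences).
counts = Counter(sessions)

# Interpretation/evaluation: display the most frequent sequences.
print(counts.most_common(2))   # [(('/home', '/cse'), 2), (('/home',), 1)]
```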

Data Mining Development

Contributions from several fields:
– Information retrieval: similarity measures, hierarchical clustering, IR systems, imprecise queries, textual data, web search engines
– Statistics: Bayes theorem, regression analysis, EM algorithm, K-means clustering, time series analysis
– Machine learning: neural networks, decision tree algorithms
– Algorithms: algorithm design techniques, algorithm analysis, data structures
– Databases: relational data model, SQL, association rule algorithms, data warehousing, scalability techniques

KDD Issues

– Human interaction
– Overfitting
– Outliers
– Interpretation
– Visualization
– Large datasets
– High dimensionality

KDD Issues (cont'd)

– Multimedia data
– Missing data
– Irrelevant data
– Noisy data
– Changing data
– Integration
– Application

Social Implications of DM

– Privacy
– Profiling
– Unauthorized use

Data Mining Metrics

– Usefulness
– Return on Investment (ROI)
– Accuracy
– Space/Time

Visualization Techniques

– Graphical
– Geometric
– Icon-based
– Pixel-based
– Hierarchical
– Hybrid

Models Based on Summarization

– Visualization: frequency distribution, mean, variance, median, mode, etc.
– Box plot

Scatter Diagram

Data Mining Techniques Outline

– Statistical
  – Point estimation
  – Models based on summarization
  – Bayes theorem
  – Hypothesis testing
  – Regression and correlation
– Similarity measures
– Decision trees
– Neural networks
  – Activation functions
– Genetic algorithms

Goal: Provide an overview of basic data mining techniques.

Point Estimation

– Point estimate: estimate a population parameter.
– May be made by calculating the parameter for a sample.
– May be used to predict values for missing data.
– Ex:
  – R contains 100 employees
  – 99 have salary information
  – Mean salary of these is $50,000
  – Use $50,000 as the value of the remaining employee's salary. Is this a good idea?
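A minimal sketch of this mean imputation; the individual salary values below are made up, only the idea matches the slide.

```python
# Impute a missing salary with the sample mean of the known salaries.
salaries = [30_000, 40_000, 50_000, 60_000, 70_000]   # stand-in for the 99 known salaries
point_estimate = sum(salaries) / len(salaries)        # sample mean as the point estimate
salaries.append(point_estimate)                       # fill in the missing employee's salary
print(point_estimate)   # 50000.0 -- risky if the distribution is skewed (e.g., one CEO)
```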

Estimation Error

– Bias: difference between the expected value and the actual value.
– Mean Squared Error (MSE): expected value of the squared difference between the estimate and the actual value:

  MSE(Θ̂) = E[(Θ̂ – Θ)²]

– Why square?
– Root Mean Square Error (RMSE): the square root of the MSE.
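A small simulation can make bias and (R)MSE tangible. Everything here is an illustrative assumption: a true mean of 10, normal noise, samples of size 30.

```python
import math
import random

random.seed(0)
true_mean = 10.0
# Repeatedly estimate the mean from samples of 30 noisy observations.
estimates = [sum(random.gauss(true_mean, 2) for _ in range(30)) / 30
             for _ in range(1000)]

bias = sum(estimates) / len(estimates) - true_mean   # E[estimate] - actual value
mse = sum((e - true_mean) ** 2 for e in estimates) / len(estimates)
rmse = math.sqrt(mse)                                # root mean squared error
print(f"bias={bias:.4f}  MSE={mse:.4f}  RMSE={rmse:.4f}")
```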

Jackknife Estimate

– Jackknife estimate: an estimate of a parameter obtained by omitting one value from the set of observed values.
– Ex: estimate of the mean for X = {x1, …, xn}: the i-th jackknife estimate is the mean of the n–1 values with xi omitted.
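A sketch of the leave-one-out estimates for the mean; the data values are illustrative.

```python
# Jackknife estimates of the mean: leave one observation out at a time.
X = [4.0, 7.0, 9.0, 10.0, 15.0]
n = len(X)
theta_i = [(sum(X) - x) / (n - 1) for x in X]   # mean with x_i omitted
theta_jack = sum(theta_i) / n                   # average of the leave-one-out means
print(theta_i)       # [10.25, 9.5, 9.0, 8.75, 7.5]
print(theta_jack)    # 9.0 -- for the mean, this equals the full-sample mean
```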

Maximum Likelihood Estimate (MLE)

– Obtain parameter estimates that maximize the probability that the sample data occurs for the specific model.
– The joint probability of observing the sample data is found by multiplying the individual probabilities; this is the likelihood function:

  L(Θ | x1, …, xn) = ∏i f(xi | Θ)

– Maximize L.

MLE Example

– Coin tossed five times: {H,H,H,H,T}
– Assuming a perfect coin with H and T equally likely, the likelihood of this sequence is (1/2)⁵ = 0.03125.
– However, if the probability of an H is 0.8, then the likelihood is (0.8)⁴(0.2) = 0.08192.

MLE Example (cont'd)

– General likelihood formula for n tosses with h heads:

  L(p) = pʰ (1 – p)ⁿ⁻ʰ

– The estimate for p is then 4/5 = 0.8.
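A sketch that reproduces the slide's numbers and confirms the estimate by a simple grid search:

```python
# Likelihood of {H,H,H,H,T}: L(p) = p^4 (1-p).
def likelihood(p, heads=4, tails=1):
    return p ** heads * (1 - p) ** tails

print(likelihood(0.5))   # 0.03125  (fair coin)
print(likelihood(0.8))   # 0.08192  (p = 0.8 is far more likely)

# Grid search over p confirms the analytic MLE p = 4/5.
best_p = max(range(101), key=lambda i: likelihood(i / 100)) / 100
print(best_p)            # 0.8
```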

Expectation-Maximization (EM)

– Solves estimation with incomplete data.
– Obtain initial estimates for the parameters.
– Iteratively use the estimates for the missing data and continue until convergence.
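A minimal sketch of the idea for estimating a mean with missing values: the E-step fills in the missing values with the current estimate, the M-step re-estimates the mean. The data values and initial guess are illustrative assumptions.

```python
# EM-style estimation of a mean with two missing observations.
observed = [1.0, 5.0, 10.0, 4.0]
n_missing = 2
mu = 3.0                                   # initial estimate
for _ in range(20):
    filled = observed + [mu] * n_missing   # E-step: expected values for missing data
    mu = sum(filled) / len(filled)         # M-step: re-estimate the parameter
print(round(mu, 4))                        # converges to the mean of the observed data, 5.0
```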

EM Example

EM Algorithm

Bayes Theorem

– Posterior probability: P(h1 | xi)
– Prior probability: P(h1)
– Bayes theorem:

  P(h1 | xi) = P(xi | h1) P(h1) / P(xi)

– Assigns probabilities of hypotheses given a data value.

Bayes Theorem Example

– Credit authorizations (hypotheses): h1 = authorize purchase, h2 = authorize after further identification, h3 = do not authorize, h4 = do not authorize but contact police
– Assign twelve data values for all combinations of credit and income:

  Income:     1    2    3    4
  Excellent   x1   x2   x3   x4
  Good        x5   x6   x7   x8
  Bad         x9   x10  x11  x12

– From training data: P(h1) = 60%; P(h2) = 20%; P(h3) = 10%; P(h4) = 10%.

Bayes Example (cont'd)

Training data:

  ID  Income  Credit     Class  xi
  1   4       Excellent  h1     x4
  2   3       Good       h1     x7
  3   2       Excellent  h1     x2
  4   3       Good       h1     x7
  5   4       Good       h1     x8
  6   2       Excellent  h1     x2
  7   3       Bad        h2     x11
  8   2       Bad        h2     x10
  9   3       Bad        h3     x11
  10  1       Bad        h4     x9

Bayes Example (cont'd)

– Calculate P(xi | hj) and P(xi).
  – Ex: P(x7|h1) = 2/6; P(x4|h1) = 1/6; P(x2|h1) = 2/6; P(x8|h1) = 1/6; P(xi|h1) = 0 for all other xi.
– Predict the class for x4:
  – Calculate P(hj | x4) for all hj.
  – Place x4 in the class with the largest value.
  – Ex: P(h1|x4) = (P(x4|h1) P(h1)) / P(x4) = (1/6)(0.6)/0.1 = 1, so x4 is in class h1.
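The calculation can be reproduced directly from the counts above; a short sketch:

```python
# Posterior P(h|x4) from the slide's priors and conditional probabilities.
P_h = {"h1": 0.6, "h2": 0.2, "h3": 0.1, "h4": 0.1}       # priors from training data
P_x4_given = {"h1": 1 / 6, "h2": 0.0, "h3": 0.0, "h4": 0.0}

# Total probability: P(x4) = sum_j P(x4|hj) P(hj) = 0.1, as on the slide.
P_x4 = sum(P_x4_given[h] * P_h[h] for h in P_h)

posterior = {h: P_x4_given[h] * P_h[h] / P_x4 for h in P_h}
print(P_x4)               # 0.1
print(posterior["h1"])    # 1.0 -> x4 is placed in class h1
```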

Hypothesis Testing

– Find a model to explain behavior by creating and then testing a hypothesis about the data.
– Exact opposite of the usual DM approach.
– H0 – null hypothesis; the hypothesis to be tested.
– H1 – alternative hypothesis.

Chi Squared Statistic

  χ² = Σ (O – E)² / E

– O – observed value
– E – expected value based on the hypothesis.
– Ex:
  – O = {50, 93, 67, 78, 87}
  – E = 75
  – χ² = 15.55 and therefore significant
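A one-liner reproduces the slide's value:

```python
# Chi-squared statistic for the slide's observed counts and expected value.
O = [50, 93, 67, 78, 87]
E = 75
chi2 = sum((o - E) ** 2 / E for o in O)
print(round(chi2, 2))   # 15.55
```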

Regression

– Predict future values based on past values.
– Linear regression assumes a linear relationship exists:

  y = c0 + c1 x1 + … + cn xn

– Find coefficient values that best fit the data.
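For the one-variable case, the least-squares coefficients have a closed form; a sketch with illustrative data points:

```python
# Ordinary least squares for y = c0 + c1*x.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.0, 9.9]

n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
# Slope: covariance of x and y over variance of x; intercept from the means.
c1 = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
c0 = my - c1 * mx
print(c0, c1)   # roughly y = 0.11 + 1.97x for these points
```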

Linear Regression

Correlation

– Examine the degree to which the values for two variables behave similarly.
– Correlation coefficient r:
  – 1 = perfect correlation
  – -1 = perfect but opposite correlation
  – 0 = no correlation
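A sketch of the Pearson correlation coefficient, checked on the two extreme cases:

```python
import math

def pearson_r(xs, ys):
    # r = covariance(x, y) / (std(x) * std(y))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(pearson_r([1, 2, 3], [2, 4, 6]))   # 1.0  (perfect correlation)
print(pearson_r([1, 2, 3], [6, 4, 2]))   # -1.0 (perfect but opposite)
```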

Similarity Measures

– Determine the similarity between two objects.
– Similarity characteristics: typically sim(ti, tj) ∈ [0, 1], with sim(ti, ti) = 1 and sim(ti, tj) = 0 when the objects are completely dissimilar.
– Alternatively, a distance measure measures how unlike or dissimilar objects are.

Similarity Measures

Distance Measures

– Measure the dissimilarity between objects.
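The slide's formulas did not survive extraction; Euclidean and Manhattan distance are the standard examples in this chapter, sketched here:

```python
import math

def euclidean(a, b):
    # Square root of the sum of squared coordinate differences.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def manhattan(a, b):
    # Sum of absolute coordinate differences.
    return sum(abs(x - y) for x, y in zip(a, b))

print(euclidean((0, 0), (3, 4)))   # 5.0
print(manhattan((0, 0), (3, 4)))   # 7
```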

Twenty Questions Game

Decision Trees

– Decision Tree (DT):
  – A tree where the root and each internal node is labeled with a question.
  – The arcs represent each possible answer to the associated question.
  – Each leaf node represents a prediction of a solution to the problem.
– Popular technique for classification; the leaf node indicates the class to which the corresponding tuple belongs.

Decision Tree Example

Decision Trees

– A Decision Tree Model is a computational model consisting of three parts:
  – Decision tree
  – Algorithm to create the tree
  – Algorithm that applies the tree to data
– Creation of the tree is the most difficult part.
– Processing is basically a search similar to that in a binary search tree (although a DT may not be binary).

Decision Tree Algorithm

DT Advantages/Disadvantages

– Advantages:
  – Easy to understand.
  – Easy to generate rules.
– Disadvantages:
  – May suffer from overfitting.
  – Classifies by rectangular partitioning.
  – Does not easily handle nonnumeric data.
  – Can be quite large – pruning is necessary.

Neural Networks

– Based on the observed functioning of the human brain (Artificial Neural Networks, ANN).
– Our view of neural networks is very simplistic: we view a neural network (NN) from a graphical viewpoint.
– Alternatively, a NN may be viewed from the perspective of matrices.
– Used in pattern recognition, speech recognition, computer vision, and classification.

Neural Networks

– A Neural Network (NN) is a directed graph F = <V, A> with vertices V = {1, 2, …, n} and arcs A = {<i,j> | 1 <= i,j <= n}, with the following restrictions:
  – V is partitioned into a set of input nodes, VI, hidden nodes, VH, and output nodes, VO.
  – The vertices are also partitioned into layers.
  – Any arc <i,j> must have node i in layer h-1 and node j in layer h.
  – Arc <i,j> is labeled with a numeric value wij.
  – Node i is labeled with a function fi.

Neural Network Example

NN Node

NN Activation Functions

– Functions associated with nodes in the graph.
– Output may be in the range [-1,1] or [0,1].
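The plotted functions are not in the scrape; sigmoid, tanh, and threshold are the usual activation functions with exactly these output ranges, sketched here:

```python
import math

def sigmoid(s):
    return 1 / (1 + math.exp(-s))     # output in (0, 1)

def tanh(s):
    return math.tanh(s)               # output in (-1, 1)

def threshold(s):
    return 1.0 if s > 0 else 0.0      # step function, output in {0, 1}

print(sigmoid(0.0), tanh(0.0), threshold(0.5))   # 0.5 0.0 1.0
```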

NN Learning

– Propagate input values through the graph.
– Compare the output to the desired output.
– Adjust the weights in the graph accordingly.

Neural Networks

– A Neural Network Model is a computational model consisting of three parts:
  – Neural network graph
  – Learning algorithm that indicates how learning takes place.
  – Recall techniques that determine how information is obtained from the network.
– We will look at propagation as the recall technique.

NN Advantages

– Learning
– Can continue learning even after the training set has been applied.
– Easy parallelization
– Solves many problems

NN Disadvantages

– Difficult to understand
– May suffer from overfitting
– Structure of the graph must be determined a priori.
– Input values must be numeric.
– Verification is difficult.

Genetic Algorithms

– Optimization search type algorithms.
– Create an initial feasible solution and iteratively create new "better" solutions.
– Based on human evolution and survival of the fittest.
– Must represent a solution as an individual.
– Individual: string I = I1, I2, …, In where Ij is in a given alphabet A.
– Each character Ij is called a gene.
– Population: set of individuals.

Genetic Algorithms

– A Genetic Algorithm (GA) is a computational model consisting of five parts (see the sketch below):
  – A starting set of individuals, P.
  – Crossover: technique to combine two parents to create offspring.
  – Mutation: randomly change an individual.
  – Fitness: determine the best individuals.
  – An algorithm which applies the crossover and mutation techniques to P iteratively, using the fitness function to determine the best individuals in P to keep.
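A minimal sketch putting the five parts together over bit-string individuals. The fitness function (count of 1s), string length, mutation rate, and replacement policy are all illustrative assumptions.

```python
import random

random.seed(1)
ALPHABET = "01"

def crossover(p1, p2):
    # Single crossover point: swap the parents' suffixes.
    k = random.randrange(1, len(p1))
    return p1[:k] + p2[k:], p2[:k] + p1[k:]

def mutate(ind, rate=0.1):
    # Randomly change individual genes.
    return "".join(random.choice(ALPHABET) if random.random() < rate else g
                   for g in ind)

def fitness(ind):
    return ind.count("1")              # toy objective: all 1s is best

# Starting population P of random individuals.
P = ["".join(random.choice(ALPHABET) for _ in range(8)) for _ in range(6)]
for _ in range(30):
    P.sort(key=fitness, reverse=True)
    c1, c2 = crossover(P[0], P[1])     # breed the two fittest
    P[-2:] = [mutate(c1), mutate(c2)]  # replace the two least fit
print(max(P, key=fitness))
```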

Crossover Examples

– Single crossover: swap the parents' suffixes after one crossover point, e.g. parents 111 111 and 000 000 yield children 111 000 and 000 111.
– Multiple crossover: swap the segments between two crossover points, e.g. parents 11 11 11 and 00 00 00 yield children 11 00 11 and 00 11 00.

Genetic Algorithm

GA Advantages/Disadvantages

– Advantages:
  – Easily parallelized.
– Disadvantages:
  – Difficult to understand and explain to end users.
  – Abstraction of the problem and the method to represent individuals is quite difficult.
  – Determining the fitness function is difficult.
  – Determining how to perform crossover and mutation is difficult.

Data Mining Outline

PART I – Introduction
PART II – Core Topics
– Classification
– Clustering
– Association Rules
PART III – Related Topics

Classification Outline

– Classification problem overview
– Classification techniques:
  – Regression
  – Distance
  – Decision trees
  – Rules
  – Neural networks

Goal: Provide an overview of the classification problem and introduce some of the basic algorithms.

Classification Problem

– Given a database D = {t1, t2, …, tn} and a set of classes C = {C1, …, Cm}, the Classification Problem is to define a mapping f: D → C where each ti is assigned to one class.
– Actually divides D into equivalence classes.
– Prediction is similar, but may be viewed as having an infinite number of classes.

Classification Examples

– Teachers classify students' grades as A, B, C, D, or F.
– Identify mushrooms as poisonous or edible.
– Predict when a river will flood.
– Identify individuals with credit risks.
– Speech recognition
– Pattern recognition

Classification Ex: Grading

– If x >= 90 then grade = A.
– If 80 <= x < 90 then grade = B.
– If 70 <= x < 80 then grade = C.
– If 60 <= x < 70 then grade = D.
– If x < 60 then grade = F.

(Figure: the equivalent decision tree, testing x against the 90, 80, 70, and 60 cutoffs in turn.)

Classification Ex: Letter Recognition

View letters as constructed from 5 components.

(Figure: the five stroke components and the letters A, B, C, D, E, F built from them.)

Classification Techniques

– Approach:
  1. Create a specific model by evaluating training data (or using domain experts' knowledge).
  2. Apply the model developed to new data.
– Classes must be predefined.
– Most common techniques use DTs, NNs, or are based on distances or statistical methods.

Defining Classes

– Partitioning based
– Distance based

Issues in Classification

– Missing data
  – Ignore
  – Replace with assumed value
– Measuring performance
  – Classification accuracy on test data
  – Confusion matrix
  – OC curve

Height Example Data

  Name       Gender  Height  Output1  Output2
  Kristina   F       1.6m    Short    Medium
  Jim        M       2m      Tall     Medium
  Maggie     F       1.9m    Medium   Tall
  Martha     F       1.88m   Medium   Tall
  Stephanie  F       1.7m    Short    Medium
  Bob        M       1.85m   Medium   Medium
  Kathy      F       1.6m    Short    Medium
  Dave       M       1.7m    Short    Medium
  Worth      M       2.2m    Tall     Tall
  Steven     M       2.1m    Tall     Tall
  Debbie     F       1.8m    Medium   Medium
  Todd       M       1.95m   Medium   Medium
  Kim        F       1.9m    Medium   Tall
  Amy        F       1.8m    Medium   Medium
  Wynette    F       1.75m   Medium   Medium

Classification Performance

– True positive
– False negative
– False positive
– True negative

Confusion Matrix Example

Using the height data example with Output1 correct and Output2 the actual assignment:

                Actual assignment
  Membership    Short  Medium  Tall
  Short         0      4       0
  Medium        0      5       3
  Tall          0      1       2
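The matrix can be rebuilt mechanically from the height table; a short sketch:

```python
# Confusion matrix from the height data (Output1 = correct, Output2 = actual).
correct = ["Short", "Tall", "Medium", "Medium", "Short", "Medium", "Short",
           "Short", "Tall", "Tall", "Medium", "Medium", "Medium", "Medium",
           "Medium"]
actual = ["Medium", "Medium", "Tall", "Tall", "Medium", "Medium", "Medium",
          "Medium", "Tall", "Tall", "Medium", "Medium", "Tall", "Medium",
          "Medium"]

classes = ["Short", "Medium", "Tall"]
matrix = {c: {a: 0 for a in classes} for c in classes}
for c, a in zip(correct, actual):
    matrix[c][a] += 1
for c in classes:
    print(c, [matrix[c][a] for a in classes])
# Short  [0, 4, 0]
# Medium [0, 5, 3]
# Tall   [0, 1, 2]
```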

Operating Characteristic Curve

Regression Topics

– Linear regression
– Nonlinear regression
– Logistic regression
– Metrics

Remember High School?

– y = mx + b
– You need two points to determine a straight line.
– You need two points to find values for m and b.

THIS IS REGRESSION

Regression

– Assume the data fits a predefined function.
– Determine the best values for the regression coefficients c0, c1, …, cn.
– Assume an error: y = c0 + c1x1 + … + cnxn + ε
– Estimate the error using the mean squared error over the training set:

  MSE = (1/n) Σi (yi – ŷi)²

Linear Regression

– Assume the data fits a predefined linear function.
– Determine the best values for the regression coefficients c0, c1, …, cn.
– Assume an error: y = c0 + c1x1 + … + cnxn + ε
– Estimate the error using the mean squared error over the training set.

Classification Using Linear Regression

– Division: use the regression function to divide the area into regions.
– Prediction: use the regression function to predict a class membership function. Input includes the desired class.

Division

Prediction

Linear Regression Poor Fit

– Linear doesn't always work well.
– Why use the sum of least squares? http://curvefit.com/sum_of_squares.htm

Nonlinear Regression

– Data does not nicely fit a straight line.
– Fit the data to a curve.
– Many possible functions.
– Not as easy and straightforward as linear regression.
– How nonlinear regression works: http://curvefit.com/how_nonlin_works.htm

Logistic Regression

– Generalized linear model
– Predicts a discrete outcome
  – Binomial (binary) logistic regression
  – Multinomial logistic regression
– One dependent variable
– Logistic Regression by Gerard E. Dallal: http://www.jerrydallal.com/LHSP/logistic.htm

Logistic Regression (cont'd)

– Log odds function:

  log(p / (1 – p)) = c0 + c1x

– p is the probability that the outcome is 1.
– Odds – the probability that the event occurs divided by the probability that it does not occur.
– The log odds function is strictly increasing as p increases.
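A sketch of the log odds (logit) and its inverse, the logistic function, which maps the linear predictor back to a probability:

```python
import math

def logit(p):
    # Log odds: log of (probability event occurs / probability it does not).
    return math.log(p / (1 - p))      # strictly increasing in p

def logistic(x):
    # Inverse of logit: maps (-inf, +inf) back to (0, 1).
    return 1 / (1 + math.exp(-x))

print(logit(0.5), logit(0.9))         # 0.0, ~2.197
print(logistic(logit(0.9)))           # ~0.9
```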

Why Log Odds?

– The shape of the curve is desirable.
– Relationship to probability.
– Range: –∞ to +∞.

P-value

– The probability that a variable has a value greater than the observed value.
– http://en.wikipedia.org/wiki/P-value
– http://sportsci.org/resource/stats/pvalues.html

Covariance

– Degree to which two variables vary in the same manner.
– Correlation is normalized; covariance is not.
– http://www.ds.unifi.it/VL/VL_EN/expect/expect3.html

Residual

– Error: the difference between the desired output and the predicted output.
– May actually use the sum of squares.

Classification Using Distance

– Place items in the class to which they are "closest".
– Must determine the distance between an item and a class.
– Classes represented by:
  – Centroid: central value.
  – Medoid: representative point.
  – Individual points
– Algorithm: KNN

K Nearest Neighbor (KNN)

– Training set includes classes.
– Examine the K items nearest the item to be classified.
– The new item is placed in the class with the most close items.
– O(q) for each tuple to be classified. (Here q is the size of the training set.)
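A minimal KNN sketch over the height data, using Output1 as the class labels; K = 3 and distance on height alone are illustrative choices.

```python
from collections import Counter

# (height, class) pairs from the height example data, Output1 labels.
train = [(1.6, "Short"), (2.0, "Tall"), (1.9, "Medium"), (1.88, "Medium"),
         (1.7, "Short"), (1.85, "Medium"), (1.6, "Short"), (1.7, "Short"),
         (2.2, "Tall"), (2.1, "Tall"), (1.8, "Medium"), (1.95, "Medium"),
         (1.9, "Medium"), (1.8, "Medium"), (1.75, "Medium")]

def knn(height, k=3):
    # Examine the K items nearest the item to be classified...
    nearest = sorted(train, key=lambda t: abs(t[0] - height))[:k]
    # ...and place it in the class with the most close items.
    return Counter(label for _, label in nearest).most_common(1)[0][0]

print(knn(1.72))   # neighbors: two Short, one Medium -> 'Short'
```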

KNN

KNN Algorithm

Classification Using Decision Trees

– Partitioning based: divide the search space into rectangular regions.
– A tuple is placed into a class based on the region within which it falls.
– DT approaches differ in how the tree is built: DT induction.
– Internal nodes are associated with attributes and arcs with values for that attribute.
– Algorithms: ID3, C4.5, CART

Decision Tree

Given:
– D = {t1, …, tn} where ti = <ti1, …, tih>
– Database schema contains {A1, A2, …, Ah}
– Classes C = {C1, …, Cm}

A Decision or Classification Tree is a tree associated with D such that:
– Each internal node is labeled with an attribute, Ai
– Each arc is labeled with a predicate which can be applied to the attribute at its parent
– Each leaf node is labeled with a class, Cj

DT Induction

DT Splits Area

(Figure: splits on Gender (M/F) and Height divide the area into rectangular regions.)

Comparing DTs

(Figure: a balanced tree vs. a deep tree.)

DT Issues

– Choosing splitting attributes
– Ordering of splitting attributes
– Splits
– Tree structure
– Stopping criteria
– Training data
– Pruning

Decision Tree Induction is often based on Information Theory

So…

Information

DT Induction

– When all the marbles in the bowl are mixed up, little information is given.
– When the marbles in the bowl are all from one class and those in the other two classes are on either side, more information is given.
– Use this approach with DT induction!

Information/Entropy

– Given probabilities p1, p2, …, ps whose sum is 1, Entropy is defined as:

  H(p1, p2, …, ps) = Σi pi log(1/pi)

– Entropy measures the amount of randomness or surprise or uncertainty.
– Goal in classification:
  – no surprise
  – entropy = 0

Entropy

(Figure: plots of log(1/p) and H(p, 1–p).)

ID3

– Creates the tree using information theory concepts, trying to reduce the expected number of comparisons.
– ID3 chooses the split attribute with the highest information gain:

  Gain(D, S) = H(D) – Σi P(Di) H(Di)

ID3 Example (Output1)

Starting state entropy:
  4/15 log(15/4) + 8/15 log(15/8) + 3/15 log(15/3) = 0.4384
Gain using gender:
- Female: 3/9 log(9/3) + 6/9 log(9/6) = 0.2764
- Male: 1/6 log(6/1) + 2/6 log(6/2) + 3/6 log(6/3) = 0.4392
- Weighted sum: (9/15)(0.2764) + (6/15)(0.4392) = 0.34152
- Gain: 0.4384 - 0.34152 = 0.09688
Gain using height:
  0.4384 - (2/15)(0.301) = 0.3983
Choose height as first splitting attribute
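A short Python sketch of the gain computation (reusing the entropy function above, base-10 logs) that reproduces the slide's gender numbers:

```python
def gain(parent_counts, partitions):
    # Gain(D, S) = H(D) - sum over partitions Di of P(Di) * H(Di)
    n = sum(parent_counts)
    weighted = sum(sum(part) / n * entropy(part) for part in partitions)
    return entropy(parent_counts) - weighted

# Output1 class counts: 4 short, 8 medium, 3 tall (entropy ~0.4384).
# Gender partitions them into female (3, 6) and male (1, 2, 3),
# read off the slide's entropy terms.
print(gain([4, 8, 3], [[3, 6], [1, 2, 3]]))   # ~0.0969, as on the slide
```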

C4.5

ID3 favors attributes with a large number of divisions
Improved version of ID3:
- Missing Data
- Continuous Data
- Pruning
- Rules
- GainRatio:

  GainRatio(D, S) = Gain(D, S) / H(|D1|/|D|, ..., |Ds|/|D|)

CART

Create Binary Tree
Uses entropy
Formula to choose split point, s, for node t:

  Φ(s/t) = 2 PL PR Σj |P(Cj|tL) - P(Cj|tR)|

PL, PR: probability that a tuple in the training set will be on the left or right side of the tree.

CART Example

At the start, there are six choices for split point (right branch on equality):
- P(Gender) = 2(6/15)(9/15)(2/15 + 4/15 + 3/15) = 0.224
- P(1.6) = 0
- P(1.7) = 2(2/15)(13/15)(0 + 8/15 + 3/15) = 0.169
- P(1.8) = 2(5/15)(10/15)(4/15 + 6/15 + 3/15) = 0.385
- P(1.9) = 2(9/15)(6/15)(4/15 + 2/15 + 3/15) = 0.256
- P(2.0) = 2(12/15)(3/15)(4/15 + 8/15 + 3/15) = 0.32
Split at 1.8

Classification Using Neural Networks

Typical NN structure for classification:
- One output node per class
- Output value is class membership function value
Supervised learning
For each tuple in training set, propagate it through NN. Adjust weights on edges to improve future classification.
Algorithms: Propagation, Backpropagation, Gradient Descent

NN Issues

Number of source nodes
Number of hidden layers
Training data
Number of sinks
Interconnections
Weights
Activation Functions
Learning Technique
When to stop learning

Decision Tree vs. Neural Network

Propagation

[Figure: a tuple's input values propagated through the network to the output]

NN Propagation Algorithm

Example Propagation

[Figure: example propagation, © Prentice Hall]

NN Learning

Adjust weights to perform better with the associated test data.
Supervised: Use feedback from knowledge of correct classification.
Unsupervised: No knowledge of correct classification needed.

NN Supervised Learning

Supervised Learning

Possible error values assuming output from node i is yi but should be di:
Change weights on arcs based on estimated error

NN Backpropagation

Propagate changes to weights backward from output layer to input layer.
Delta Rule: Δwij = c xij (dj - yj)
Gradient Descent: technique to modify the weights in the graph.
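A minimal sketch of one delta-rule update for a single output node; the learning rate c = 0.1 is an arbitrary illustrative choice:

```python
def delta_rule(weights, x, d, y, c=0.1):
    # Delta rule: w_ij <- w_ij + c * x_ij * (d_j - y_j)
    return [w + c * xi * (d - y) for w, xi in zip(weights, x)]

w = [0.5, -0.3]
w = delta_rule(w, x=[1.0, 2.0], d=1.0, y=0.2)
print(w)  # [0.58, -0.14]: weights nudged toward the desired output
```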

Backpropagation

[Figure: error propagated backward through the network]

Backpropagation Algorithm

Gradient Descent

Gradient Descent Algorithm

Output Layer Learning

Hidden Layer Learning

Types of NNs

Different NN structures used for different problems.
- Perceptron
- Self Organizing Feature Map
- Radial Basis Function Network

Perceptron

Perceptron is one of the simplest NNs.
No hidden layers.

Perceptron Example

Suppose:
- Summation: S = 3x1 + 2x2 - 6
- Activation: if S > 0 then 1 else 0
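This perceptron is small enough to run directly; a sketch:

```python
def perceptron(x1, x2):
    # Summation S = 3*x1 + 2*x2 - 6, then step activation
    s = 3 * x1 + 2 * x2 - 6
    return 1 if s > 0 else 0

print(perceptron(2, 1))  # S = 2  -> output 1
print(perceptron(1, 1))  # S = -1 -> output 0
```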

Self Organizing Feature Map (SOFM)

Competitive Unsupervised Learning
Observe how neurons work in brain:
- Firing impacts firing of those near
- Neurons far apart inhibit each other
- Neurons have specific nonoverlapping tasks
Ex: Kohonen Network

Kohonen Network

Kohonen Network

Competitive Layer – viewed as 2D grid
Similarity between competitive nodes and input nodes:
- Input: X = <x1, ..., xh>
- Weights: <w1i, ..., whi>
- Similarity defined based on dot product
Competitive node most similar to input "wins"
Winning node weights (as well as surrounding node weights) increased.
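A minimal sketch of one competitive-learning step under these definitions (dot-product similarity, winner's weights moved toward the input); the learning rate and the omitted neighborhood update are simplifying assumptions:

```python
def kohonen_step(nodes, x, lr=0.5):
    # winner = competitive node whose weights have the largest dot product with x
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    winner = max(range(len(nodes)), key=lambda i: dot(nodes[i], x))
    # increase the winner's weights toward the input (neighbors omitted here)
    nodes[winner] = [w + lr * (xi - w) for w, xi in zip(nodes[winner], x)]
    return winner

nodes = [[0.1, 0.9], [0.8, 0.2]]
print(kohonen_step(nodes, [1.0, 0.0]))  # node 1 wins
print(nodes)                            # its weights moved toward [1.0, 0.0]
```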

Radial Basis Function Network

RBF function has Gaussian shape
RBF Networks:
- Three Layers
- Hidden layer – Gaussian activation function
- Output layer – Linear activation function

Radial Basis Function Network

Classification Using Rules

Perform classification using If-Then rules
Classification Rule: r = <a,c> (Antecedent, Consequent)
May generate from other techniques (DT, NN) or generate directly.
Algorithms: Gen, RX, 1R, PRISM

Generating Rules from DTs

Generating Rules Example

Generating Rules from NNs

1R Algorithm

1R Example

PRISM Algorithm

PRISM Example

Decision Tree vs. Rules

Tree:
- Tree has implied order in which splitting is performed.
- Tree created based on looking at all classes.
Rules:
- Rules have no ordering of predicates.
- Only need to look at one class to generate its rules.

Clustering Outline

Clustering Problem Overview
Clustering Techniques:
- Hierarchical Algorithms
- Partitional Algorithms
- Genetic Algorithm
- Clustering Large Databases

Goal: Provide an overview of the clustering problem and introduce some of the basic algorithms

Clustering Examples

Segment customer database based on similar buying patterns.
Group houses in a town into neighborhoods based on similar features.
Identify new plant species
Identify similar Web usage patterns

Clustering Example

Clustering Houses

[Figure: the same houses clustered by size vs. by geographic distance]

Clustering vs. Classification

No prior knowledge:
- Number of clusters
- Meaning of clusters
Unsupervised learning

Clustering Issues

Outlier handling
Dynamic data
Interpreting results
Evaluating results
Number of clusters
Data to be used
Scalability

Impact of Outliers on Clustering

Clustering Problem

Given a database D = {t1, t2, ..., tn} of tuples and an integer value k, the Clustering Problem is to define a mapping f: D → {1, ..., k} where each ti is assigned to one cluster Kj, 1 <= j <= k.
A Cluster, Kj, contains precisely those tuples mapped to it.
Unlike the classification problem, clusters are not known a priori.

Types of Clustering

Hierarchical – Nested set of clusters created.
Partitional – One set of clusters created.
Incremental – Each element handled one at a time.
Simultaneous – All elements handled together.
Overlapping/Non-overlapping

Clustering Approaches

[Figure: taxonomy – Clustering divides into Hierarchical (Agglomerative, Divisive), Partitional, Categorical, and Large DB (Sampling, Compression)]

Cluster Parameters

Distance Between Clusters

Single Link: smallest distance between points
Complete Link: largest distance between points
Average Link: average distance between points
Centroid: distance between centroids

Hierarchical Clustering

Clusters are created in levels, actually creating sets of clusters at each level.
Agglomerative:
- Initially each item in its own cluster
- Iteratively clusters are merged together
- Bottom Up
Divisive:
- Initially all items in one cluster
- Large clusters are successively divided
- Top Down

Hierarchical Algorithms

Single Link
MST Single Link
Complete Link
Average Link

Dendrogram

Dendrogram: a tree data structure which illustrates hierarchical clustering techniques.
Each level shows clusters for that level:
- Leaf – individual clusters
- Root – one cluster
A cluster at level i is the union of its children clusters at level i+1.

Levels of Clustering

Agglomerative Example

Distance matrix:

      A  B  C  D  E
  A   0  1  2  2  3
  B   1  0  2  4  3
  C   2  2  0  1  5
  D   2  4  1  0  3
  E   3  3  5  3  0

[Figure: graph of A–E and dendrogram over thresholds 1 to 5 merging A,B and C,D first, then all of A B C D E into one cluster]

MST Example

Distance matrix (as before):

      A  B  C  D  E
  A   0  1  2  2  3
  B   1  0  2  4  3
  C   2  2  0  1  5
  D   2  4  1  0  3
  E   3  3  5  3  0

[Figure: minimum spanning tree over the points A–E]

Agglomerative Algorithm

Single Link

View all items with links (distances) between them.
Finds maximal connected components in this graph.
Two clusters are merged if there is at least one edge which connects them.
Uses threshold distances at each level.
Could be agglomerative or divisive.

MST Single Link Algorithm

Single Link Clustering

Partitional Clustering

Nonhierarchical
Creates clusters in one step as opposed to several steps.
Since only one set of clusters is output, the user normally has to input the desired number of clusters, k.
Usually deals with static sets.

Partitional Algorithms

MST
Squared Error
K-Means
Nearest Neighbor
PAM
BEA
GA

MST Algorithm

Squared Error

Minimize the squared error: se = Σj Σ(t in Kj) dist(Cj, t)², summed over all k clusters, where Cj is the center of cluster Kj.

Squared Error Algorithm

K-Means

Initial set of clusters randomly chosen.
Iteratively, items are moved among sets of clusters until the desired set is reached.
High degree of similarity among elements in a cluster is obtained.
Given a cluster Ki = {ti1, ti2, ..., tim}, the cluster mean is mi = (1/m)(ti1 + ... + tim)

K-Means Example

Given: {2,4,10,12,3,20,30,11,25}, k=2
Randomly assign means: m1=3, m2=4
K1={2,3}, K2={4,10,12,20,30,11,25}, m1=2.5, m2=16
K1={2,3,4}, K2={10,12,20,30,11,25}, m1=3, m2=18
K1={2,3,4,10}, K2={12,20,30,11,25}, m1=4.75, m2=19.6
K1={2,3,4,10,11,12}, K2={20,30,25}, m1=7, m2=25
Stop as the clusters with these means are the same.
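A minimal 1-D sketch that replays this trace (initial means 3 and 4 as above; ties and empty clusters are ignored for brevity):

```python
def kmeans_1d(items, means):
    while True:
        clusters = [[] for _ in means]
        for x in items:
            # assign each item to the cluster with the nearest mean
            i = min(range(len(means)), key=lambda j: abs(x - means[j]))
            clusters[i].append(x)
        new_means = [sum(c) / len(c) for c in clusters]
        if new_means == means:   # stop: the clusters with these means repeat
            return clusters, means
        means = new_means

print(kmeans_1d([2, 4, 10, 12, 3, 20, 30, 11, 25], [3, 4]))
# -> ([[2, 4, 10, 12, 3, 11], [20, 30, 25]], [7.0, 25.0])
```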

K-Means Algorithm

Nearest Neighbor

Items are iteratively merged into the existing clusters that are closest.
Incremental
Threshold, t, used to determine if items are added to existing clusters or a new cluster is created.
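A sketch of the incremental scheme on 1-D items; taking the distance from an item to a cluster as the distance to its nearest member is a single-link-style assumption made here:

```python
def nearest_neighbor_clustering(items, t):
    clusters = []
    for x in items:
        # find the closest existing cluster
        best, best_d = None, None
        for c in clusters:
            d = min(abs(x - y) for y in c)
            if best_d is None or d < best_d:
                best, best_d = c, d
        if best is not None and best_d <= t:
            best.append(x)           # close enough: join that cluster
        else:
            clusters.append([x])     # otherwise start a new cluster
    return clusters

print(nearest_neighbor_clustering([2, 4, 10, 12, 3, 20, 30, 11, 25], t=5))
# -> [[2, 4, 3], [10, 12, 11], [20, 25], [30]]
```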

Nearest Neighbor Algorithm

PAM

Partitioning Around Medoids (PAM) (K-Medoids)
Handles outliers well.
Ordering of input does not impact results.
Does not scale well.
Each cluster represented by one item, called the medoid.
Initial set of k medoids randomly chosen.

PAM

PAM Cost Calculation

At each step in the algorithm, medoids are changed if the overall cost is improved.
Cjih – cost change for an item tj associated with swapping medoid ti with non-medoid th.

PAM Algorithm

BEA

Bond Energy Algorithm
Database design (physical and logical)
Vertical fragmentation
Determine affinity (bond) between attributes based on common usage.
Algorithm outline:
1. Create affinity matrix
2. Convert to BOND matrix
3. Create regions of close bonding

BEA

[Figure: BEA matrix example, modified from [OV99]]

Genetic Algorithm Example

{A,B,C,D,E,F,G,H}
Randomly choose initial solution:
  {A,C,E} {B,F} {D,G,H} or 10101000, 01000100, 00010011
Suppose crossover at point four and choose 1st and 3rd individuals:
  10100011, 01000100, 00011000
What should termination criteria be?
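The crossover step is easy to check in code; a sketch reproducing the slide's result:

```python
def crossover(a, b, point):
    # single-point crossover: swap the tails of two bit strings
    return a[:point] + b[point:], b[:point] + a[point:]

# cross the 1st and 3rd individuals at point four
print(crossover("10101000", "00010011", 4))
# -> ('10100011', '00011000'), as on the slide
```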

GA Algorithm

Clustering Large Databases

Most clustering algorithms assume a large data structure which is memory resident.
Clustering may be performed first on a sample of the database then applied to the entire database.
Algorithms:
- BIRCH
- DBSCAN
- CURE

Desired Features for Large Databases

One scan (or less) of DB
Online
Suspendable, stoppable, resumable
Incremental
Work with limited main memory
Different techniques to scan (e.g. sampling)
Process each tuple once

BIRCH

Balanced Iterative Reducing and Clustering using Hierarchies
Incremental, hierarchical, one scan
Save clustering information in a tree
Each entry in the tree contains information about one cluster
New nodes inserted in closest entry in tree

Clustering Feature

CF Triple: (N, LS, SS)
- N: Number of points in cluster
- LS: Sum of points in the cluster
- SS: Sum of squares of points in the cluster
CF Tree:
- Balanced search tree
- Node has CF triple for each child
- Leaf node represents cluster and has CF value for each subcluster in it.
- Subcluster has maximum diameter
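The point of the CF triple is that it is additive, so subclusters can be merged without revisiting the raw points; a minimal sketch (1-D points for brevity):

```python
def cf(points):
    # (N, LS, SS) for a set of 1-D points
    return (len(points), sum(points), sum(p * p for p in points))

def merge(a, b):
    # CF triples add component-wise when two subclusters are merged
    return tuple(x + y for x, y in zip(a, b))

n, ls, ss = merge(cf([1.0, 2.0]), cf([4.0]))
print(n, ls / n)   # 3 points, centroid 7/3 -- recovered from the CF alone
```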

BIRCH Algorithm

Improve Clusters

DBSCAN

Density Based Spatial Clustering of Applications with Noise
Outliers will not affect creation of clusters.
Input:
- MinPts – minimum number of points in cluster
- Eps – for each point in cluster there must be another point in it less than this distance away.

DBSCAN Density Concepts

Eps-neighborhood: Points within Eps distance of a point.
Core point: Eps-neighborhood dense enough (MinPts)
Directly density-reachable: A point p is directly density-reachable from a point q if the distance is small (Eps) and q is a core point.
Density-reachable: A point is density-reachable from another point if there is a path from one to the other consisting of only core points.
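A compact sketch of DBSCAN built directly on these concepts; 1-D points with absolute distance are assumed, and a point's Eps-neighborhood here includes the point itself:

```python
def dbscan(points, eps, min_pts):
    labels = [None] * len(points)      # cluster id per point; None = noise
    nbrs = lambda i: [j for j in range(len(points))
                      if abs(points[i] - points[j]) <= eps]
    cluster = 0
    for i in range(len(points)):
        if labels[i] is not None or len(nbrs(i)) < min_pts:
            continue                   # already labeled, or not a core point
        cluster += 1
        seeds = [i]
        while seeds:
            j = seeds.pop()
            if labels[j] is None:
                labels[j] = cluster
                if len(nbrs(j)) >= min_pts:   # expand only through core points
                    seeds.extend(k for k in nbrs(j) if labels[k] is None)
    return labels

print(dbscan([1, 2, 3, 10, 11, 12, 50], eps=2, min_pts=2))
# -> [1, 1, 1, 2, 2, 2, None]; the isolated point 50 stays noise
```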

Density Concepts

DBSCAN Algorithm

CURE

Clustering Using Representatives
Use many points to represent a cluster instead of only one
Points will be well scattered

CURE Approach

CURE Algorithm

CURE for Large Databases

Comparison of Clustering Techniques

Association Rules Outline

Goal: Provide an overview of basic Association Rule mining techniques

Association Rules Problem Overview
- Large itemsets
Association Rules Algorithms
- Apriori
- Sampling
- Partitioning
- Parallel Algorithms
Comparing Techniques
Incremental Algorithms
Advanced AR Techniques

Example: Market Basket Data

Items frequently purchased together:
  Bread → PeanutButter
Uses:
- Placement
- Advertising
- Sales
- Coupons
Objective: increase sales and reduce costs

Association Rule Definitions

Set of items: I = {I1, I2, ..., Im}
Transactions: D = {t1, t2, ..., tn}, tj ⊆ I
Itemset: {Ii1, Ii2, ..., Iik} ⊆ I
Support of an itemset: Percentage of transactions which contain that itemset.
Large (Frequent) itemset: Itemset whose number of occurrences is above a threshold.

Association Rules Example

I = {Beer, Bread, Jelly, Milk, PeanutButter}

[Table: sample transactions t1, ..., t5 over I]

Support of {Bread, PeanutButter} is 60%
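The transaction table on this slide is an image in the source; the sketch below uses a hypothetical reconstruction that is consistent with the 60% figure here and with the itemsets quoted in the later sampling and partitioning examples, just to show how support is counted:

```python
# Hypothetical transactions -- the original table is not reproduced in the text.
D = {
    "t1": {"Bread", "Jelly", "PeanutButter"},
    "t2": {"Bread", "PeanutButter"},
    "t3": {"Bread", "Milk", "PeanutButter"},
    "t4": {"Beer", "Bread"},
    "t5": {"Beer", "Milk"},
}

def support(itemset):
    # percentage of transactions containing every item of the itemset
    return sum(1 for t in D.values() if itemset <= t) / len(D)

print(support({"Bread", "PeanutButter"}))  # 0.6, i.e. 60%
```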

Association Rule Definitions

Association Rule (AR): implication X → Y where X, Y ⊆ I and X ∩ Y = ∅
Support of AR (s) X → Y: Percentage of transactions that contain X ∪ Y
Confidence of AR (α) X → Y: Ratio of number of transactions that contain X ∪ Y to the number that contain X

Association Rules Ex (cont’d)

Association Rule Problem

Given a set of items I = {I1, I2, ..., Im} and a database of transactions D = {t1, t2, ..., tn} where ti = {Ii1, Ii2, ..., Iik} and Iij ∈ I, the Association Rule Problem is to identify all association rules X → Y with a minimum support and confidence.
Link Analysis
NOTE: Support of X → Y is same as support of X ∪ Y.

Association Rule Techniques

1. Find Large Itemsets.
2. Generate rules from frequent itemsets.

Algorithm to Generate ARs

Apriori

Large Itemset Property: Any subset of a large itemset is large.
Contrapositive: If an itemset is not large, none of its supersets are large.

Large Itemset Property

Apriori Ex (cont’d)

s = 30%, α = 50%

Apriori Algorithm

1. C1 = Itemsets of size one in I;
2. Determine all large itemsets of size 1, L1;
3. i = 1;
4. Repeat
5.   i = i + 1;
6.   Ci = Apriori-Gen(Li-1);
7.   Count Ci to determine Li;
8. until no more large itemsets found;

Apriori-Gen

Generate candidates of size i+1 from large itemsets of size i.
Approach used: join large itemsets of size i if they agree on the first i-1 items.
May also prune candidates who have subsets that are not large.

Apriori-Gen Example

Apriori-Gen Example (cont’d)

Apriori Adv/Disadv

Advantages:
- Uses large itemset property.
- Easily parallelized
- Easy to implement.
Disadvantages:
- Assumes transaction database is memory resident.
- Requires up to m database scans.

Sampling

Large databases
Sample the database and apply Apriori to the sample.
Potentially Large Itemsets (PL): Large itemsets from sample
Negative Border (BD⁻):
- Generalization of Apriori-Gen applied to itemsets of varying sizes.
- Minimal set of itemsets which are not in PL, but whose subsets are all in PL.

Negative Border Example

[Figure: itemset lattice showing PL and BD⁻(PL)]

Sampling Algorithm

1. Ds = sample of Database D;
2. PL = Large itemsets in Ds using smalls;
3. C = PL ∪ BD⁻(PL);
4. Count C in Database using s;
5. ML = large itemsets in BD⁻(PL);
6. If ML = ∅ then done
7. else C = repeated application of BD⁻;
8. Count C in Database;

Sampling Example

Find AR assuming s = 20%
Ds = {t1, t2}
Smalls = 10%
PL = {{Bread}, {Jelly}, {PeanutButter}, {Bread,Jelly}, {Bread,PeanutButter}, {Jelly,PeanutButter}, {Bread,Jelly,PeanutButter}}
BD⁻(PL) = {{Beer}, {Milk}}
ML = {{Beer}, {Milk}}
Repeated application of BD⁻ generates all remaining itemsets

Sampling Adv/Disadv

Advantages:
- Reduces number of database scans to one in the best case and two in worst.
- Scales better.
Disadvantages:
- Potentially large number of candidates in second pass

Partitioning

Divide database into partitions D1, D2, ..., Dp
Apply Apriori to each partition
Any large itemset must be large in at least one partition.

Partitioning Algorithm

1. Divide D into partitions D1, D2, ..., Dp;
2. For i = 1 to p do
3.   Li = Apriori(Di);
4. C = L1 ∪ ... ∪ Lp;
5. Count C on D to generate L;

Partitioning Example

S = 10%

D1: L1 = {{Bread}, {Jelly}, {PeanutButter}, {Bread,Jelly}, {Bread,PeanutButter}, {Jelly,PeanutButter}, {Bread,Jelly,PeanutButter}}

D2: L2 = {{Bread}, {Milk}, {PeanutButter}, {Bread,Milk}, {Bread,PeanutButter}, {Milk,PeanutButter}, {Bread,Milk,PeanutButter}, {Beer}, {Beer,Bread}, {Beer,Milk}}

Partitioning Adv/Disadv

Advantages:
- Adapts to available main memory
- Easily parallelized
- Maximum number of database scans is two.
Disadvantages:
- May have many candidates during second scan.

Parallelizing AR Algorithms

Based on Apriori
Techniques differ:
- What is counted at each site
- How data (transactions) are distributed
Data Parallelism:
- Data partitioned
- Count Distribution Algorithm
Task Parallelism:
- Data and candidates partitioned
- Data Distribution Algorithm

Count Distribution Algorithm (CDA)

1. Place data partition at each site.
2. In Parallel at each site do
3.   C1 = Itemsets of size one in I;
4.   Count C1;
5.   Broadcast counts to all sites;
6.   Determine global large itemsets of size 1, L1;
7.   i = 1;
8.   Repeat
9.     i = i + 1;
10.    Ci = Apriori-Gen(Li-1);
11.    Count Ci;
12.    Broadcast counts to all sites;
13.    Determine global large itemsets of size i, Li;
14.  until no more large itemsets found;

CDA Example

Data Distribution Algorithm (DDA)

1. Place data partition at each site.
2. In Parallel at each site do
3.   Determine local candidates of size 1 to count;
4.   Broadcast local transactions to other sites;
5.   Count local candidates of size 1 on all data;
6.   Determine large itemsets of size 1 for local candidates;
7.   Broadcast large itemsets to all sites;
8.   Determine L1;
9.   i = 1;
10.  Repeat
11.    i = i + 1;
12.    Ci = Apriori-Gen(Li-1);
13.    Determine local candidates of size i to count;
14.    Count, broadcast, and find Li;
15.  until no more large itemsets found;

DDA Example

Comparing AR Techniques

Target
Type
Data Type
Data Source
Technique
Itemset Strategy and Data Structure
Transaction Strategy and Data Structure
Optimization
Architecture
Parallelism Strategy

Comparison of AR Techniques

Hash Tree
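The slide shows only a figure. As a rough sketch of the structure it depicts (assumed to be the standard Apriori hash tree, so the details here are mine): interior nodes hash successive items of a candidate, leaves hold small candidate lists, and counting a transaction only descends branches reachable from its own items; a final subset test is still needed on each returned candidate.

class HashTreeNode:
    def __init__(self, depth=0, max_leaf=2, fanout=3):
        self.depth, self.max_leaf, self.fanout = depth, max_leaf, fanout
        self.children, self.bucket = None, []

    def insert(self, itemset):                              # itemset: sorted tuple
        if self.children is not None:
            self.children[hash(itemset[self.depth]) % self.fanout].insert(itemset)
            return
        self.bucket.append(itemset)
        if len(self.bucket) > self.max_leaf and self.depth < len(itemset) - 1:
            old, self.bucket = self.bucket, []              # split an overflowing leaf
            self.children = [HashTreeNode(self.depth + 1, self.max_leaf, self.fanout)
                             for _ in range(self.fanout)]
            for it in old:
                self.insert(it)

    def candidates_for(self, transaction):
        # Yield a superset of the candidates contained in the transaction.
        if self.children is None:
            yield from self.bucket
            return
        for idx in {hash(i) % self.fanout for i in transaction}:
            yield from self.children[idx].candidates_for(transaction)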

Incremental Association Rules
Generate ARs in a dynamic database.
Problem: algorithms assume a static database
Objective:
– Know large itemsets for D
– Find large itemsets for D ∪ ΔD
Must be large in either D or ΔD
Save Li and counts
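A small sketch of the pruning argument above (my illustration): with a relative support threshold s, an itemset large in D ∪ ΔD must already be large in D or in ΔD, so the saved counts for Li over D plus one scan of ΔD cover every surviving candidate.

def still_large(saved_count_D, count_dD, n_D, n_dD, s):
    # saved_count_D: count over D kept from the original run;
    # count_dD: count over the increment; s: relative support in [0,1].
    return saved_count_D + count_dD >= s * (n_D + n_dD)

# If saved_count_D < s*n_D and count_dD < s*n_dD, the sum is below
# s*(n_D + n_dD), so such an itemset can be skipped without counting.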

Note on ARs
Many applications outside market basket data analysis
– Prediction (telecom switch failure)
– Web usage mining
Many different types of association rules
– Temporal
– Spatial
– Causal

Advanced AR Techniques
Generalized Association Rules
Multiple-Level Association Rules
Quantitative Association Rules
Using multiple minimum supports
Correlation Rules

Measuring Quality of Rules
Support
Confidence
Interest
Conviction
Chi Squared Test
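A sketch computing these measures for a rule X -> Y from simple counts, using the common definitions (interest as lift, conviction as P(X)P(¬Y)/P(X ∧ ¬Y)); the formulas are the usual ones, not copied from the slides.

def rule_measures(n, n_x, n_y, n_xy):
    p_x, p_y, p_xy = n_x / n, n_y / n, n_xy / n
    support = p_xy
    confidence = p_xy / p_x
    interest = p_xy / (p_x * p_y)                           # lift
    conviction = p_x * (1 - p_y) / (p_x - p_xy) if p_x > p_xy else float("inf")
    chi2 = 0.0                                              # 2x2 chi-squared test
    for observed, p in ((n_xy, p_x * p_y),
                        (n_x - n_xy, p_x * (1 - p_y)),
                        (n_y - n_xy, (1 - p_x) * p_y),
                        (n - n_x - n_y + n_xy, (1 - p_x) * (1 - p_y))):
        expected = n * p
        chi2 += (observed - expected) ** 2 / expected
    return support, confidence, interest, conviction, chi2

print(rule_measures(n=100, n_x=40, n_y=50, n_xy=30))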

Data Mining Outline
PART I – Introduction
PART II – Core Topics
– Classification
– Clustering
– Association Rules
PART III – Related Topics

Related Topics Outline
Database/OLTP Systems
Fuzzy Sets and Logic
Information Retrieval (Web Search Engines)
Dimensional Modeling
Data Warehousing
OLAP/DSS
Statistics
Machine Learning
Pattern Matching

Goal: Examine some areas which are related to data mining.

DB & OLTP Systems
Schema
– (ID, Name, Address, Salary, JobNo)
Data Model
– ER
– Relational
Transaction
Query:

SELECT Name
FROM T
WHERE Salary > 100000

DM: Only imprecise queries

Fuzzy Sets Outline
Introduction/Overview

Material for these slides obtained from:
Data Mining Introductory and Advanced Topics by Margaret H. Dunham
http://www.engr.smu.edu/~mhd/book
Introduction to “Type-2 Fuzzy Logic” by Jenny Carter
http://www.cse.dmu.ac.uk/~jennyc/

Fuzzy Sets and Logic
Fuzzy Set: Set membership function is a real valued function with output in the range [0,1].
f(x): Probability x is in F.
1-f(x): Probability x is not in F.
EX:
– T = {x | x is a person and x is tall}
– Let f(x) be the probability that x is tall
– Here f is the membership function

DM: Prediction and classification are fuzzy.


Fuzzy Sets

Page 248: CSE 5331/7331 F'091 CSE 5331/7331 Fall 2009 DATA MINING Introductory and Related Topics Margaret H. Dunham Department of Computer Science and Engineering

248CSE 5331/7331 F'09

IR is FuzzyIR is Fuzzy

Simple Fuzzy

Accept Accept

RejectReject

Page 249: CSE 5331/7331 F'091 CSE 5331/7331 Fall 2009 DATA MINING Introductory and Related Topics Margaret H. Dunham Department of Computer Science and Engineering

249CSE 5331/7331 F'09

Fuzzy Set TheoryFuzzy Set Theory

A fuzzy subset A of U is characterized by a A fuzzy subset A of U is characterized by a membership function membership function

(A,u) : U (A,u) : U [0,1] [0,1]which associates with each element which associates with each element uu of of

U a number U a number (u) in the interval [0,1](u) in the interval [0,1] DefinitionDefinition

– Let A and B be two fuzzy subsets of U. Also, let Let A and B be two fuzzy subsets of U. Also, let ¬A be the complement of A. Then,¬A be the complement of A. Then,

» (¬A,u) = 1 - (¬A,u) = 1 - (A,u) (A,u) » (A(AB,u) = max(B,u) = max((A,u), (A,u), (B,u))(B,u))» (A(AB,u) = min(B,u) = min((A,u), (A,u), (B,u))(B,u))
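These definitions transcribe directly into code for discrete fuzzy sets represented as dicts from elements of U to membership grades (a minimal sketch):

def f_not(a):
    return {u: 1 - m for u, m in a.items()}

def f_union(a, b):
    return {u: max(a.get(u, 0), b.get(u, 0)) for u in a.keys() | b.keys()}

def f_intersect(a, b):
    return {u: min(a.get(u, 0), b.get(u, 0)) for u in a.keys() | b.keys()}

tall = {"tom": 0.9, "ann": 0.4}
print(f_intersect(tall, f_not(tall)))   # nonzero: A ∩ ¬A is not empty for fuzzy sets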

The world is imprecise.
Mathematical and statistical techniques often unsatisfactory.
– Experts make decisions with imprecise data in an uncertain world.
– They work with knowledge that is rarely defined mathematically or algorithmically but uses vague terminology with words.
Fuzzy logic is able to use vagueness to achieve a precise answer. By considering shades of grey and all factors simultaneously, you get a better answer, one that is more suited to the situation.

© Jenny Carter

Fuzzy Logic then . . .
is particularly good at handling uncertainty, vagueness and imprecision.
especially useful where a problem can be described linguistically (using words).
Applications include:
– robotics
– washing machine control
– nuclear reactors
– focusing a camcorder
– information retrieval
– train scheduling

© Jenny Carter

Crisp Sets
Different heights have same ‘tallness’

© Jenny Carter

Fuzzy Sets
The shape you see is known as the membership function

© Jenny Carter

Fuzzy Sets
Shows two membership functions: ‘tall’ and ‘short’

© Jenny Carter

Notation
For a member x of a discrete set with membership µ we use the notation µ/x. In other words, x is a member of the set to degree µ. Discrete sets are written as:
A = µ1/x1 + µ2/x2 + ... + µn/xn
or
A = Σi µi/xi
where x1, x2, ..., xn are members of the set A and µ1, µ2, ..., µn are their degrees of membership. A continuous fuzzy set A is written as:
A = ∫X µA(x)/x

© Jenny Carter

Fuzzy Sets
The members of a fuzzy set are members to some degree, known as a membership grade or degree of membership.
The membership grade is the degree of belonging to the fuzzy set. The larger the number (in [0,1]) the more the degree of belonging. (N.B. This is not a probability)
The translation from x to µA(x) is known as fuzzification.
A fuzzy set is either continuous or discrete.
Graphical representation of membership functions is very useful.

© Jenny Carter

Fuzzy Sets - Example
Again, notice the overlapping of the sets reflecting the real world more accurately than if we were using a traditional approach.

© Jenny Carter

Rules
Rules often of the form:

IF x is A THEN y is B

where A and B are fuzzy sets defined on the universes of discourse X and Y respectively.
– if pressure is high then volume is small;
– if a tomato is red then a tomato is ripe.
where high, small, red and ripe are fuzzy sets.

© Jenny Carter

Example - Dinner for two (p2-21 of FL toolbox user guide)

Dinner for two: this is a 2 input, 1 output, 3 rule system
Input 1: Service (0-10)
Input 2: Food (0-10)
Output: Tip (5-25%)

Rule 1: If service is poor or food is rancid, then tip is cheap
Rule 2: If service is good, then tip is average
Rule 3: If service is excellent or food is delicious, then tip is generous

The inputs are crisp (non-fuzzy) numbers limited to a specific range
All rules are evaluated in parallel using fuzzy reasoning
The results of the rules are combined and distilled (de-fuzzified)
The result is a crisp (non-fuzzy) number

© Jenny Carter

Dinner for two
1. Fuzzify the input
2. Apply fuzzy operator

© Jenny Carter

Dinner for two
3. Apply implication method

© Jenny Carter

Dinner for two
4. Aggregate all outputs

© Jenny Carter

Dinner for two
5. Defuzzify
Various approaches, e.g. centre of area, mean of max

© Jenny Carter
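The five steps above can be condensed into a short sketch, assuming triangular membership functions and the usual Mamdani choices (max for OR, min for implication, max aggregation, discretized centre-of-area defuzzification); the exact membership shapes are my guesses, not the toolbox's.

def tri(x, a, b, c):
    # Triangular membership function rising from a, peaking at b, falling to c.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def tip(service, food):
    # 1. Fuzzify the crisp inputs (0-10 scales).
    poor, good, excellent = tri(service, -5, 0, 5), tri(service, 0, 5, 10), tri(service, 5, 10, 15)
    rancid, delicious = tri(food, -5, 0, 5), tri(food, 5, 10, 15)
    # 2. Apply the fuzzy operator (OR = max) per rule antecedent.
    r1, r2, r3 = max(poor, rancid), good, max(excellent, delicious)
    # 3-4. Implication (min) then aggregation (max) over the tip domain,
    # 5. followed by centre-of-area defuzzification.
    num = den = 0.0
    for t in (x / 10 for x in range(50, 251)):              # tip from 5% to 25%
        cheap, average, generous = tri(t, 0, 5, 10), tri(t, 10, 15, 20), tri(t, 20, 25, 30)
        agg = max(min(r1, cheap), min(r2, average), min(r3, generous))
        num += t * agg
        den += agg
    return num / den if den else 15.0

print(round(tip(service=3, food=8), 1))                     # a crisp tip percentage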

Information Retrieval Outline
Introduction/Overview

Material for these slides obtained from:
Modern Information Retrieval by Ricardo Baeza-Yates and Berthier Ribeiro-Neto
http://www.sims.berkeley.edu/~hearst/irbook/
Data Mining Introductory and Advanced Topics by Margaret H. Dunham
http://www.engr.smu.edu/~mhd/book

Information Retrieval
Information Retrieval (IR): retrieving desired information from textual data.
Library Science
Digital Libraries
Web Search Engines
Traditionally keyword based
Sample query: Find all documents about “data mining”.

DM: Similarity measures; Mine text/Web data.


DB vs IR
Records (tuples) vs. documents
Well defined results vs. fuzzy results
DB grew out of files and traditional business systems
IR grew out of library science and need to categorize/group/access books/articles

DB vs IR (cont’d)
Data retrieval:
– which docs contain a set of keywords?
– well defined semantics
– a single erroneous object implies failure!
Information retrieval:
– information about a subject or topic
– semantics is frequently loose
– small errors are tolerated
IR system:
– interpret contents of information items
– generate a ranking which reflects relevance
– notion of relevance is most important

Motivation
IR in the last 20 years:
– classification and categorization
– systems and languages
– user interfaces and visualization
Still, the area was seen as of narrow interest
Advent of the Web changed this perception once and for all
– universal repository of knowledge
– free (low cost) universal access
– no central editorial board
– many problems though: IR seen as key to finding the solutions!

Basic Concepts
Logical view of the documents
Document representation viewed as a continuum: logical view of docs might shift
[Figure: from docs with structure to full text to index terms, via accents/spacing, stopword removal, noun groups, stemming, and manual indexing]
© Baeza-Yates and Ribeiro-Neto

The Retrieval Process
[Figure: the user need becomes a query via the user interface and text/query operations; indexing builds an inverted file over the text database (DB manager module); searching returns retrieved docs, ranking orders them, and user feedback refines the query]
© Baeza-Yates and Ribeiro-Neto

Information Retrieval
Similarity: measure of how close a query is to a document.
Documents which are “close enough” are retrieved.
Metrics:
– Precision = |Relevant and Retrieved| / |Retrieved|
– Recall = |Relevant and Retrieved| / |Relevant|
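Both metrics transcribe directly into code for sets of document ids:

def precision(relevant, retrieved):
    return len(relevant & retrieved) / len(retrieved)

def recall(relevant, retrieved):
    return len(relevant & retrieved) / len(relevant)

rel, ret = {1, 2, 3, 4}, {3, 4, 5}
print(precision(rel, ret), recall(rel, ret))    # 0.67 and 0.5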

Indexing
IR systems usually adopt index terms to process queries
Index term:
– a keyword or group of selected words
– any word (more general)
Stemming might be used:
– connect: connecting, connection, connections
An inverted file is built for the chosen index terms
© Baeza-Yates and Ribeiro-Neto

Indexing
[Figure: docs are reduced to index terms and an information need to a query; ranking matches the query against document index terms]
© Baeza-Yates and Ribeiro-Neto

Inverted Files
There are two main elements:
– vocabulary – set of unique terms
– occurrences – where those terms appear
The occurrences can be recorded as terms or byte offsets
Using term offset is good to retrieve concepts such as proximity, whereas byte offsets allow direct access

Vocabulary    Occurrences (byte offset)
…             …

© Baeza-Yates and Ribeiro-Neto

Inverted Files
The number of indexed terms is often several orders of magnitude smaller when compared to the documents size (Mbs vs Gbs)
The space consumed by the occurrence list is not trivial. Each time the term appears it must be added to a list in the inverted file
That may lead to a quite considerable index overhead
© Baeza-Yates and Ribeiro-Neto

Example
Text (byte offsets 1 6 12 16 18 25 29 36 40 45 54 58 66 70):

That house has a garden. The garden has many flowers. The flowers are beautiful

Inverted file:

Vocabulary    Occurrences
beautiful     70
flowers       45, 58
garden        18, 29
house         6

© Baeza-Yates and Ribeiro-Neto
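A sketch reproducing this kind of inverted file (my construction; the stopword list is assumed for the example, and offsets computed this way may differ by a character or two from the slide's figures depending on spacing):

import re
from collections import defaultdict

text = ("That house has a garden. The garden has many flowers. "
        "The flowers are beautiful")
stopwords = {"that", "has", "a", "the", "many", "are"}

inverted = defaultdict(list)
for m in re.finditer(r"\w+", text):
    word = m.group().lower()
    if word not in stopwords:
        inverted[word].append(m.start() + 1)                # 1-based byte offset

for term in sorted(inverted):
    print(term, inverted[term])
# beautiful, flowers, garden, house with their occurrence lists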

Ranking
A ranking is an ordering of the documents retrieved that (hopefully) reflects the relevance of the documents to the query
A ranking is based on fundamental premises regarding the notion of relevance, such as:
– common sets of index terms
– sharing of weighted terms
– likelihood of relevance
Each set of premises leads to a distinct IR model
© Baeza-Yates and Ribeiro-Neto

Classic IR Models - Basic Concepts
Each document represented by a set of representative keywords or index terms
An index term is a document word useful for remembering the document main themes
Usually, index terms are nouns because nouns have meaning by themselves
However, search engines assume that all words are index terms (full text representation)
© Baeza-Yates and Ribeiro-Neto

Classic IR Models - Basic Concepts
The importance of the index terms is represented by weights associated to them
– ki: an index term
– dj: a document
– wij: a weight associated with (ki, dj)
The weight wij quantifies the importance of the index term for describing the document contents
© Baeza-Yates and Ribeiro-Neto

Classic IR Models - Basic Concepts
– t is the total number of index terms
– K = {k1, k2, …, kt} is the set of all index terms
– wij >= 0 is a weight associated with (ki, dj)
– wij = 0 indicates that the term does not belong to the doc
– dj = (w1j, w2j, …, wtj) is a weighted vector associated with the document dj
– gi(dj) = wij is a function which returns the weight associated with pair (ki, dj)
© Baeza-Yates and Ribeiro-Neto

The Boolean Model
Simple model based on set theory
Queries specified as boolean expressions
– precise semantics and neat formalism
Terms are either present or absent. Thus, wij ∈ {0,1}
Consider
– q = ka ∧ (kb ∨ ¬kc)
– qdnf = (1,1,1) ∨ (1,1,0) ∨ (1,0,0)
– qcc = (1,1,0) is a conjunctive component
© Baeza-Yates and Ribeiro-Neto

The Vector Model
Use of binary weights is too limiting
Non-binary weights provide consideration for partial matches
These term weights are used to compute a degree of similarity between a query and each document
Ranked set of documents provides for better matching
© Baeza-Yates and Ribeiro-Neto

The Vector Model
wij > 0 whenever ki appears in dj
wiq >= 0 associated with the pair (ki, q)
dj = (w1j, w2j, ..., wtj)
q = (w1q, w2q, ..., wtq)
To each term ki is associated a unitary vector i
The unitary vectors i and j are assumed to be orthonormal (i.e., index terms are assumed to occur independently within the documents)
The t unitary vectors form an orthonormal basis for a t-dimensional space where queries and documents are represented as weighted vectors
© Baeza-Yates and Ribeiro-Neto
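The slides stop at the vector representation; ranking in the vector model is typically by the cosine of the angle between dj and q (the textbook's usual choice, stated here rather than on the slide):

import math

def cosine(d, q):
    dot = sum(wd * wq for wd, wq in zip(d, q))
    norms = math.sqrt(sum(w * w for w in d)) * math.sqrt(sum(w * w for w in q))
    return dot / norms if norms else 0.0

doc = [0.5, 0.8, 0.0]       # (w1j, w2j, w3j)
query = [1.0, 0.0, 0.5]     # (w1q, w2q, w3q)
print(cosine(doc, query))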

Query Languages
Keyword Based
Boolean
Weighted Boolean
Context Based (Phrasal & Proximity)
Pattern Matching
Structural Queries
© Baeza-Yates and Ribeiro-Neto

Keyword Based Queries
Basic Queries
– Single word
– Multiple words
Context Queries
– Phrase
– Proximity
© Baeza-Yates and Ribeiro-Neto

Boolean Queries
Keywords combined with Boolean operators:
– OR: (e1 OR e2)
– AND: (e1 AND e2)
– BUT: (e1 BUT e2) Satisfy e1 but not e2
Negation only allowed using BUT to allow efficient use of inverted index by filtering another efficiently retrievable set.
Naïve users have trouble with Boolean logic.
© Baeza-Yates and Ribeiro-Neto

Boolean Retrieval with Inverted Indices
Primitive keyword: Retrieve containing documents using the inverted index.
OR: Recursively retrieve e1 and e2 and take union of results.
AND: Recursively retrieve e1 and e2 and take intersection of results.
BUT: Recursively retrieve e1 and e2 and take set difference of results.
© Baeza-Yates and Ribeiro-Neto
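These four cases transcribe into a small recursive evaluator, assuming queries written as nested tuples like ("AND", e1, e2) and an index mapping keyword to a set of document ids:

def evaluate(query, index):
    if isinstance(query, str):                  # primitive keyword
        return index.get(query, set())
    op, e1, e2 = query
    r1, r2 = evaluate(e1, index), evaluate(e2, index)
    if op == "OR":
        return r1 | r2                          # union
    if op == "AND":
        return r1 & r2                          # intersection
    if op == "BUT":
        return r1 - r2                          # set difference
    raise ValueError(op)

index = {"data": {1, 2, 3}, "mining": {2, 3, 4}, "text": {3}}
print(evaluate(("BUT", ("AND", "data", "mining"), "text"), index))   # {2}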

Phrasal Queries
Retrieve documents with a specific phrase (ordered list of contiguous words)
– “information theory”
May allow intervening stop words and/or stemming.
– “buy camera” matches: “buy a camera”, “buying the cameras”, etc.
© Baeza-Yates and Ribeiro-Neto

Phrasal Retrieval with Inverted Indices
Must have an inverted index that also stores positions of each keyword in a document.
Retrieve documents and positions for each individual word, intersect documents, and then finally check for ordered contiguity of keyword positions.
Best to start contiguity check with the least common word in the phrase.
© Baeza-Yates and Ribeiro-Neto
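A sketch of that procedure over a positional index (term -> {doc: sorted positions}); contiguity is checked by shifting each word's positions back by its offset in the phrase (a production version would start from the least common word, as noted above):

def phrase_docs(phrase, index):
    words = phrase.split()
    docs = set.intersection(*(set(index[w]) for w in words))   # intersect documents
    hits = set()
    for d in docs:
        starts = set(index[words[0]][d])                       # possible phrase starts
        for offset, w in enumerate(words[1:], start=1):
            starts &= {p - offset for p in index[w][d]}
        if starts:
            hits.add(d)
    return hits

index = {"information": {1: [3, 9], 2: [1]},
         "theory":      {1: [4], 2: [7]}}
print(phrase_docs("information theory", index))                # {1}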

Proximity Queries
List of words with specific maximal distance constraints between terms.
Example: “dogs” and “race” within 4 words match “…dogs will begin the race…”
May also perform stemming and/or not count stop words.
© Baeza-Yates and Ribeiro-Neto

Pattern Matching
Allow queries that match strings rather than word tokens.
Requires more sophisticated data structures and algorithms than inverted indices to retrieve efficiently.
© Baeza-Yates and Ribeiro-Neto

Simple Patterns
Prefixes: Pattern that matches start of word.
– “anti” matches “antiquity”, “antibody”, etc.
Suffixes: Pattern that matches end of word:
– “ix” matches “fix”, “matrix”, etc.
Substrings: Pattern that matches arbitrary subsequence of characters.
– “rapt” matches “enrapture”, “velociraptor”, etc.
Ranges: Pair of strings that matches any word lexicographically (alphabetically) between them.
– “tin” to “tix” matches “tip”, “tire”, “title”, etc.
© Baeza-Yates and Ribeiro-Neto

IR Query Result Measures and Classification
[Figure: IR classification]

Dimensional Modeling
View data in a hierarchical manner more as business executives might
Useful in decision support systems and mining
Dimension: collection of logically related attributes; axis for modeling data.
Facts: data stored
Ex: Dimensions – products, locations, date
    Facts – quantity, unit price

DM: May view data as dimensional.


Aggregation Hierarchies

Multidimensional Schemas
Star Schema shows facts and dimensions
– Center of the star has facts shown in fact tables
– Outside of the facts, each dimension is shown separately in dimension tables
– Access to fact table from dimension table via join

SELECT Quantity, Price
FROM Facts, Location
WHERE (Facts.LocationID = Location.LocationID) AND (Location.City = ‘Dallas’)

– View as relations; problem: volume of data and indexing

Star Schema

Flattened Star

Normalized Star

Snowflake Schema

OLAP
Online Analytic Processing (OLAP): provides more complex queries than OLTP.
OnLine Transaction Processing (OLTP): traditional database/transaction processing.
Dimensional data; cube view
Visualization of operations:
– Slice: examine sub-cube.
– Dice: rotate cube to look at another dimension.
– Roll Up/Drill Down

DM: May use OLAP queries.

OLAP Introduction
OLAP by Example
http://perso.orange.fr/bernard.lupin/english/index.htm
What is OLAP?
http://www.olapreport.com/fasmi.htm

OLAP
Online Analytic Processing (OLAP): provides more complex queries than OLTP.
OnLine Transaction Processing (OLTP): traditional database/transaction processing.
Dimensional data; cube view
Support ad hoc querying
Require analysis of data
Can be thought of as an extension of some of the basic aggregation functions available in SQL
OLAP tools may be used in DSS systems
Multidimensional view is fundamental

OLAP Implementations
MOLAP (Multidimensional OLAP)
– Multidimensional Database (MDD)
– Specialized DBMS and software system capable of supporting the multidimensional data directly
– Data stored as an n-dimensional array (cube)
– Indexes used to speed up processing
ROLAP (Relational OLAP)
– Data stored in a relational database
– ROLAP server (middleware) creates the multidimensional view for the user
– Less complex; less efficient
HOLAP (Hybrid OLAP)
– Not updated frequently – MDD
– Updated frequently – RDB

OLAP Operations
[Figure: single cell, multiple cells, slice, and dice selections on a cube; roll up and drill down between levels]

OLAP Operations
Simple query – single cell in the cube
Slice – look at a subcube to get more specific information
Dice – rotate cube to look at another dimension
Roll Up – dimension reduction; aggregation
Drill Down
Visualization: these operations allow the OLAP users to actually “see” results of an operation.
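A toy illustration of slice and roll up (my own example, not from the slides) on a cube stored as a dict keyed by (product, location, date):

cube = {("pen", "Dallas", "2009"): 10, ("pen", "Austin", "2009"): 5,
        ("ink", "Dallas", "2008"): 7,  ("ink", "Dallas", "2009"): 2}

# Slice: fix one dimension value to obtain a subcube.
dallas = {k: v for k, v in cube.items() if k[1] == "Dallas"}

# Roll up: aggregate away the date dimension (drill down goes the other way).
rollup = {}
for (product, location, _date), quantity in cube.items():
    rollup[(product, location)] = rollup.get((product, location), 0) + quantity

print(dallas)
print(rollup)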

Relationship Between Topics

Decision Support Systems
Tools and computer systems that assist management in decision making
“What if” types of questions
High level decisions
Data warehouse – data which supports DSS

Unified Dimensional Model
Microsoft Cube View
SQL Server 2005
http://msdn2.microsoft.com/en-us/library/ms345143.aspx
http://cwebbbi.spaces.live.com/Blog/cns!1pi7ETChsJ1un_2s41jm9Iyg!325.entry
MDX AS2005
http://msdn2.microsoft.com/en-us/library/aa216767(SQL.80).aspx

Data Warehousing
“Subject-oriented, integrated, time-variant, nonvolatile” (William Inmon)
Operational Data: data used in day to day needs of company.
Informational Data: supports other functions such as planning and forecasting.
Data mining tools often access data warehouses rather than operational data.

DM: May access data in warehouse.

Operational vs. Informational

              Operational Data    Data Warehouse
Application   OLTP                OLAP
Use           Precise queries     Ad hoc
Temporal      Snapshot            Historical
Modification  Dynamic             Static
Orientation   Application         Business
Data          Operational values  Integrated
Size          Gigabits            Terabits
Level         Detailed            Summarized
Access        Often               Less often
Response      Few seconds         Minutes
Data Schema   Relational          Star/Snowflake

Statistics
Simple descriptive models
Statistical inference: generalizing a model created from a sample of the data to the entire dataset.
Exploratory Data Analysis:
– Data can actually drive the creation of the model
– Opposite of traditional statistical view.
Data mining targeted to business user

DM: Many data mining methods come from statistical techniques.

Machine Learning Outline
Introduction (Chuck Anderson)

CS545: Machine Learning
By Chuck Anderson
Department of Computer Science
Colorado State University
Fall 2006

Machine Learning
Machine Learning: area of AI that examines how to write programs that can learn.
Often used in classification and prediction
Supervised Learning: learns by example.
Unsupervised Learning: learns without knowledge of correct answers.
Machine learning often deals with small static datasets.

DM: Uses many machine learning techniques.

What is Machine Learning?
Statistics ≈ the science of inference from data
Machine learning ≈ multivariate statistics + computational statistics
Multivariate statistics ≈ prediction of values of a function assumed to underlie a multivariate dataset
Computational statistics ≈ computational methods for statistical problems (aka statistical computation) + statistical methods which happen to be computationally intensive
Data Mining ≈ exploratory data analysis, particularly with massive/complex datasets
© Chuck Anderson

Kinds of Learning
Learning algorithms are often categorized according to the amount of information provided:
Least information:
– Unsupervised learning is more exploratory.
– Requires samples of inputs. Must find regularities.
More information:
– Reinforcement learning most recent.
– Requires samples of inputs, actions, and rewards or punishments.
Most information:
– Supervised learning is most common.
– Requires samples of inputs and desired outputs.
© Chuck Anderson

Examples of Algorithms
Supervised learning
– Regression
» multivariate regression
» neural networks and kernel methods
– Classification
» linear and quadratic discrimination analysis
» k-nearest neighbors
» neural networks and kernel methods
Reinforcement learning
– multivariate regression
– neural networks
Unsupervised learning
– principal components analysis
– k-means clustering
– self-organizing networks
© Chuck Anderson


Pattern Matching (Recognition)
Pattern Matching: finds occurrences of a predefined pattern in the data.
Applications include speech recognition, information retrieval, time series analysis.

DM: Type of classification.

Image Mining Outline
Image Mining – What is it?
Feature Extraction
Shape Detection
Color Techniques
Video Mining
Facial Recognition
Bioinformatics

The 2000 ozone hole over the Antarctic seen by EPTOMS
http://jwocky.gsfc.nasa.gov/multi/multi.html#hole

Image Mining – What is it?
Image Retrieval
Image Classification
Image Clustering
Video Mining
Applications
– Bioinformatics
– Geology/Earth Science
– Security
– …

Feature Extraction
Identify major components of image
Color
Texture
Shape
Spatial relationships
Feature Extraction & Image Processing
http://users.ecs.soton.ac.uk/msn/book/
Feature Extraction Tutorial
http://facweb.cs.depaul.edu/research/vc/VC_Workshop/presentations/pdf/daniela_tutorial2.pdf

Shape Detection

Boundary/Edge Detection
Time Series – Eamonn Keogh
http://www.engr.smu.edu/~mhd/8337sp07/shapes.ppt
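A minimal sketch of boundary/edge detection, assuming NumPy: it convolves a grayscale array with Sobel kernels and thresholds the gradient magnitude; the threshold value and the toy image are illustrative.

    import numpy as np

    def sobel_edges(gray, threshold=100.0):
        """Mark edge pixels where the Sobel gradient magnitude exceeds the threshold."""
        kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
        ky = kx.T                      # vertical-gradient kernel
        h, w = gray.shape
        gx = np.zeros((h - 2, w - 2))
        gy = np.zeros((h - 2, w - 2))
        for i in range(h - 2):
            for j in range(w - 2):
                patch = gray[i:i + 3, j:j + 3]
                gx[i, j] = (patch * kx).sum()
                gy[i, j] = (patch * ky).sum()
        return np.hypot(gx, gy) > threshold

    # Illustrative input: a dark square on a light background.
    img = np.full((10, 10), 200.0)
    img[3:7, 3:7] = 20.0
    print(sobel_edges(img).astype(int))   # 1s trace the square's boundary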

Color Techniques

Color Representations
– RGB: http://en.wikipedia.org/wiki/Rgb
– HSV: http://en.wikipedia.org/wiki/HSV_color_space
Color Histogram
Color Anglogram: http://www.cs.sunysb.edu/~rzhao/publications/VideoDB.pdf
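A minimal sketch of the color histogram technique, assuming NumPy; the bin count and the two toy images are illustrative. Each RGB channel is binned separately, counts are normalized, and two images are compared by histogram intersection.

    import numpy as np

    def color_histogram(img, bins=8):
        """Per-channel histogram of an HxWx3 RGB array, normalized to sum to 1."""
        hist = [np.histogram(img[..., c], bins=bins, range=(0, 256))[0]
                for c in range(3)]
        hist = np.concatenate(hist).astype(float)
        return hist / hist.sum()

    def histogram_intersection(h1, h2):
        """Similarity in [0, 1]; 1 means identical histograms."""
        return np.minimum(h1, h2).sum()

    # Illustrative images: random noise vs. a mostly-red image.
    rng = np.random.default_rng(0)
    a = rng.integers(0, 256, (32, 32, 3))
    b = np.zeros((32, 32, 3), dtype=int)
    b[..., 0] = 255
    print(histogram_intersection(color_histogram(a), color_histogram(b)))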

What is Similarity?

(c) Eamonn Keogh, [email protected]

Video Mining

Boundaries between shots
Movement between frames
ANSES: http://mmir.doc.ic.ac.uk/demos/anses.html
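A hedged sketch of shot-boundary detection, assuming the video has already been decoded into a list of NumPy frame arrays (decoding itself is out of scope here); the histogram-difference threshold and the toy frames are illustrative. A sharp change in the frame histogram between consecutive frames is flagged as a likely cut.

    import numpy as np

    def frame_hist(frame, bins=16):
        """Normalized grayscale histogram of one HxWx3 frame."""
        h = np.histogram(frame.mean(axis=2), bins=bins, range=(0, 256))[0]
        return h / h.sum()

    def shot_boundaries(frames, threshold=0.5):
        """Flag frame indices where the histogram changes sharply (a likely cut)."""
        cuts = []
        for i in range(1, len(frames)):
            diff = np.abs(frame_hist(frames[i]) - frame_hist(frames[i - 1])).sum()
            if diff > threshold:
                cuts.append(i)
        return cuts

    # Illustrative "video": 5 dark frames followed by 5 bright frames.
    dark = [np.full((48, 64, 3), 30.0)] * 5
    bright = [np.full((48, 64, 3), 220.0)] * 5
    print(shot_boundaries(dark + bright))   # expect a cut at index 5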

Facial Recognition

Based upon features in the face
Convert the face to a feature vector
Less invasive than other biometric techniques
http://www.face-rec.org
http://computer.howstuffworks.com/facial-recognition.htm
SIMS: http://www.casinoincidentreporting.com/Products.aspx
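A minimal sketch of the feature-vector approach, assuming NumPy; the gallery vectors and names are random placeholders, and in a real system the vectors would come from a face feature extractor (for example, eigenface projections). The probe face is assigned the identity of its nearest gallery vector.

    import numpy as np

    def match_face(gallery, names, probe):
        """Return the enrolled name whose feature vector is closest to the probe."""
        dists = np.linalg.norm(gallery - probe, axis=1)
        return names[int(np.argmin(dists))], float(dists.min())

    # Placeholder gallery: 3 enrolled faces, 128-dimensional feature vectors.
    rng = np.random.default_rng(1)
    gallery = rng.normal(size=(3, 128))
    names = ["alice", "bob", "carol"]                # hypothetical identities
    probe = gallery[1] + rng.normal(0, 0.05, 128)    # noisy view of "bob"
    print(match_face(gallery, names, probe))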

Microarray Data Analysis

Each probe location is associated with a gene
Measure the amount of mRNA
Color indicates degree of gene expression
Compare different samples (normal/disease)
Track the same sample over time
Questions
– Which genes are related to this disease?
– Which genes behave in a similar manner?
– What is the function of a gene?
Clustering (see the sketch below)
– Hierarchical
– K-means
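A minimal sketch of both clustering styles on a gene-expression matrix, assuming SciPy and NumPy; the 6x4 expression matrix is a random placeholder standing in for real genes-by-samples data.

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.cluster.vq import kmeans2

    # Placeholder expression matrix: 6 genes x 4 samples, two obvious groups.
    rng = np.random.default_rng(0)
    expr = np.vstack([rng.normal(0.0, 0.1, (3, 4)),    # low-expression genes
                      rng.normal(5.0, 0.1, (3, 4))])   # high-expression genes

    # Hierarchical clustering (average linkage), cut into 2 clusters.
    hier_labels = fcluster(linkage(expr, method="average"), t=2, criterion="maxclust")

    # K-means with k = 2; minit="points" seeds centroids from the data itself.
    _, km_labels = kmeans2(expr, 2, minit="points")

    print(hier_labels)   # e.g., [1 1 1 2 2 2]
    print(km_labels)     # e.g., [0 0 0 1 1 1]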

Affymetrix GeneChip® Array

http://www.affymetrix.com/corporate/outreach/lesson_plan/educator_resources.affx

Microarray Data – Clustering

"Gene expression profiling identifies clinically relevant subtypes of prostate cancer"

Proc. Natl. Acad. Sci. USA, Vol. 101, Issue 3, 811-816, January 20, 2004
