Page 1:

Flat Clustering

Adapted from slides by Prabhakar Raghavan, Christopher Manning, Ray Mooney and Soumen Chakrabarti

Page 2:

Today’s Topic: Clustering

Document clustering: motivations, document representations, success criteria

Clustering algorithms: partitional (flat) and hierarchical (tree)

Page 3:

What is clustering?

Clustering: the process of grouping a set of objects into classes of similar objects

Documents within a cluster should be similar. Documents from different clusters should be dissimilar.

The commonest form of unsupervised learning. Unsupervised learning = learning from raw data, as opposed to supervised learning, where a classification of the examples is given.

A common and important task that finds many applications in IR and other places

Page 4:

A data set with clear cluster structure

How would you design an algorithm for finding the three clusters in this case?

Page 5:

Why cluster documents?

Whole corpus analysis/navigation: better user interface

For improving recall in search applications: better search results (pseudo relevance feedback)

For better navigation of search results: effective "user recall" will be higher

For speeding up vector space retrieval: faster search

Page 6:

Yahoo! Hierarchy isn’t clustering but is the kind of output you want from clustering

[Figure: a fragment of the Yahoo! directory rooted at www.yahoo.com/Science, with top-level categories such as agriculture, biology, physics, CS, and space, and subcategories such as agronomy, forestry, dairy, crops, botany, cell, evolution, magnetism, relativity, AI, HCI, courses, craft, and missions.]

Page 7:

Google News: automatic clustering gives an effective news presentation metaphor

Page 8:

Scatter/Gather: Cutting, Karger, and Pedersen

Page 9:

For visualizing a document collection and its themes

Wise et al., "Visualizing the Non-Visual"; PNNL ThemeScapes; Cartia

[Mountain height = cluster size]

Page 10:

For improving search recall

Cluster hypothesis - Documents in the same cluster behave similarly with respect to relevance to information needs

Therefore, to improve search recall:
Cluster docs in the corpus a priori.
When a query matches a doc D, also return other docs in the cluster containing D.

Example: the query "car" will also return docs containing "automobile", because clustering grouped together docs containing "car" with those containing "automobile".

Why might this happen?

Page 11:

For better navigation of search results

For grouping search results thematically: clusty.com / Vivisimo

Page 12:

Issues for clustering

Representation for clustering:
Document representation: vector space? normalization?
Need a notion of similarity/distance.

How many clusters?
Fixed a priori? Completely data-driven?
Avoid "trivial" clusters: too large or too small.

Page 13:

What makes docs “related”?

Ideal: semantic similarity. Practical: statistical similarity.

Docs as vectors. For many algorithms it is easier to think in terms of a distance (rather than a similarity) between docs.

We can use cosine similarity (alternatively, Euclidean distance).
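A minimal sketch (not from the original slides) of the two measures on dense term-weight vectors, using NumPy; the names are ours:

```python
import numpy as np

def cosine_similarity(x: np.ndarray, y: np.ndarray) -> float:
    # Cosine of the angle between two document vectors (1.0 = identical direction).
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))

def euclidean_distance(x: np.ndarray, y: np.ndarray) -> float:
    return float(np.linalg.norm(x - y))

# Toy tf vectors over a 3-term vocabulary.
d1 = np.array([2.0, 0.0, 1.0])
d2 = np.array([1.0, 1.0, 0.0])
print(cosine_similarity(d1, d2), euclidean_distance(d1, d2))
```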

Page 14:

Clustering Algorithms

Partitional (flat) algorithms: usually start with a random (partial) partition and refine it iteratively.
K-means clustering; (model-based clustering).

Hierarchical (tree) algorithms: bottom-up, agglomerative; (top-down, divisive).

Page 15:

Hard vs. soft clustering

Hard clustering: each document belongs to exactly one cluster. More common and easier to do.

Soft clustering: a document can belong to more than one cluster. Makes more sense for applications like creating browsable hierarchies; you may want to put a pair of sneakers in two clusters: (i) sports apparel and (ii) shoes.

Page 16:

Partitioning Algorithms

Partitioning method: construct a partition of n documents into a set of K clusters.

Given: a set of documents and the number K.
Find: a partition into K clusters that optimizes the chosen partitioning criterion.

Globally optimal: exhaustively enumerate all partitions.
Effective heuristic methods: the K-means and K-medoids algorithms.

Page 17:

K-Means

Assumes documents are real-valued vectors.

Clusters are based on centroids (a.k.a. the center of gravity or mean) of the points in a cluster c:

μ(c) = (1/|c|) Σ_{x ∈ c} x

Reassignment of instances to clusters is based on distance to the current cluster centroids. (One can equivalently phrase it in terms of similarities.)

Page 18:

K-Means Algorithm

Select K random docs {s1, s2, …, sK} as seeds.
Until clustering converges (or another stopping criterion is met):
For each doc di: assign di to the cluster cj such that dist(xi, sj) is minimal.
(Update the seeds to the centroid of each cluster.) For each cluster cj: sj = μ(cj).
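A minimal sketch of this pseudocode (random seeds, assign to nearest centroid, recompute centroids); the code and names are ours, not the slides', and in practice a library implementation such as scikit-learn's KMeans would be used:

```python
import numpy as np

def kmeans(docs: np.ndarray, k: int, max_iters: int = 100, seed: int = 0):
    rng = np.random.default_rng(seed)
    # Select K random docs as seeds.
    centroids = docs[rng.choice(len(docs), size=k, replace=False)].astype(float)
    assignment = None
    for _ in range(max_iters):
        # Assign each doc to the cluster whose centroid is nearest (Euclidean distance).
        dists = np.linalg.norm(docs[:, None, :] - centroids[None, :, :], axis=2)
        new_assignment = dists.argmin(axis=1)
        if assignment is not None and np.array_equal(new_assignment, assignment):
            break  # doc partition unchanged -> converged
        assignment = new_assignment
        # Update each seed to the centroid (mean) of its cluster.
        for j in range(k):
            members = docs[assignment == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return assignment, centroids
```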

Page 19:

K-Means Example (K=2)

Pick seeds → Reassign clusters → Compute centroids → Reassign clusters → Compute centroids → Reassign clusters → Converged!

[Figure: 2-D points with the two centroids, marked ×, moving at each step.]

Page 20:

Termination conditions

Several possibilities, e.g.:
A fixed number of iterations.
Doc partition unchanged.
Centroid positions don't change.

Does this mean that the docs in a cluster are unchanged?

Page 21:

Convergence

Why should the K-means algorithm ever reach a fixed point? A state in which clusters don’t change.

K-means is a special case of a general procedure known as the Expectation Maximization (EM) algorithm. EM is known to converge. Number of iterations could be large.

Page 22:

Convergence of K-Means

Define the goodness measure of cluster k as the sum of squared distances from the cluster centroid:

G_k = Σ_i (d_i − c_k)²  (sum over all d_i in cluster k)
G = Σ_k G_k

Intuition: reassignment monotonically decreases G, since each vector is assigned to the closest centroid; recomputing centroids also decreases G, since the centroid minimizes the sum of squared distances to the cluster's members.

Page 23:

Time Complexity

Computing distance between two docs is O(m) where m is the dimensionality of the vectors.

Reassigning clusters: O(Kn) distance computations, or O(Knm).

Computing centroids: Each doc gets added once to some centroid: O(nm).

Assume these two steps are each done once for I iterations: O(IKnm).

Page 24:

Seed Choice

Results can vary based on random seed selection.

Some seeds can result in a poor convergence rate, or in convergence to sub-optimal clusterings.
Select good seeds using a heuristic (e.g., a doc least similar to any existing mean).
Try out multiple starting points (a restart sketch follows below).
Initialize with the results of another method.

Example showing sensitivity to seeds: with six points A–F, starting with B and E as centroids you converge to {A,B,C} and {D,E,F}; starting with D and F you converge to {A,B,D,E} and {C,F}.
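A hedged sketch of the "try out multiple starting points" heuristic, reusing the kmeans() sketch from the K-means slide above: run it several times and keep the clustering with the lowest total within-cluster sum of squared distances (the G defined two slides earlier):

```python
import numpy as np

def total_sse(docs, assignment, centroids):
    # G = sum over clusters of squared distances to the cluster centroid.
    return sum(float(np.sum((docs[assignment == j] - c) ** 2))
               for j, c in enumerate(centroids))

def kmeans_with_restarts(docs, k, restarts=10):
    best = None
    for seed in range(restarts):
        assignment, centroids = kmeans(docs, k, seed=seed)
        g = total_sse(docs, assignment, centroids)
        if best is None or g < best[0]:
            best = (g, assignment, centroids)
    return best[1], best[2]
```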

Page 25:

K-means issues, variations, etc.

Recomputing the centroid after every assignment (rather than after all points are re-assigned) can improve speed of convergence of K-means

Assumes clusters are spherical in vector space

Sensitive to coordinate changes, weighting, etc.
Disjoint and exhaustive.

Doesn't have a notion of "outliers", but outlier filtering can be added.

Page 26:

How Many Clusters?

Number of clusters K is given: partition n docs into a predetermined number of clusters.

Finding the "right" number of clusters is part of the problem: given docs, partition them into an "appropriate" number of subsets.

E.g., for query results, the ideal value of K is not known up front, though the UI may impose limits.

Page 27:

Hierarchical Clustering

Adapted from slides by Prabhakar Raghavan, Christopher Manning, Ray Mooney and Soumen Chakrabarti

Page 28:

“The Curse of Dimensionality”

Why document clustering is difficult:
While clustering looks intuitive in 2 dimensions, many of our applications involve 10,000 or more dimensions…

High-dimensional spaces look different:
The probability of random points being close drops quickly as the dimensionality grows.
Furthermore, random pairs of vectors are almost all nearly perpendicular.
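A quick empirical check of the "almost perpendicular" claim (our own illustration, not from the slides): the average absolute cosine between random unit vectors shrinks toward 0 as the dimensionality grows:

```python
import numpy as np

rng = np.random.default_rng(0)
for dim in (2, 10, 100, 10_000):
    v = rng.standard_normal((1000, dim))
    v /= np.linalg.norm(v, axis=1, keepdims=True)      # unit-length vectors
    cosines = np.einsum('ij,ij->i', v[:500], v[500:])  # 500 random pairs
    print(dim, round(float(np.abs(cosines).mean()), 3))
```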

Page 29:

Hierarchical Clustering

Build a tree-based hierarchical taxonomy (dendrogram) from a set of documents.

One approach: recursive application of a partitional clustering algorithm.

[Figure: example taxonomy — animal splits into vertebrate (fish, reptile, amphibian, mammal) and invertebrate (worm, insect, crustacean).]

Page 30:

Dendrogram: Hierarchical Clustering

Clustering obtained by cutting the dendrogram at a desired level: each connected component forms a cluster.

Page 31:

Hierarchical Clustering Algorithms

Agglomerative (bottom-up):
Precondition: start with each document as a separate cluster.
Postcondition: eventually all documents belong to the same cluster.

Divisive (top-down):
Precondition: start with all documents belonging to the same cluster.
Postcondition: eventually each document forms a cluster of its own.

Does not require the number of clusters k in advance.
Needs a termination/readout condition.

Page 32:

Hierarchical Agglomerative Clustering (HAC) Algorithm

Start with all instances in their own cluster.
Until there is only one cluster:
Among the current clusters, determine the two clusters ci and cj that are most similar.
Replace ci and cj with a single cluster ci ∪ cj.
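A naive sketch of this loop (our own, deliberately unoptimized and roughly cubic): clusters are index lists, cosine similarity on normalized vectors, and the linkage function decides how cross-cluster pair similarities are combined:

```python
import numpy as np

def hac(docs: np.ndarray, linkage=max):
    """docs: unit-length document vectors; linkage: max = single link, min = complete link."""
    clusters = [[i] for i in range(len(docs))]   # start: one cluster per document
    sim = lambda a, b: float(np.dot(docs[a], docs[b]))
    merges = []
    while len(clusters) > 1:
        best = None
        # Find the most similar pair of current clusters.
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                s = linkage(sim(a, b) for a in clusters[i] for b in clusters[j])
                if best is None or s > best[0]:
                    best = (s, i, j)
        _, i, j = best
        merges.append((list(clusters[i]), list(clusters[j])))
        clusters[i] = clusters[i] + clusters[j]   # replace ci and cj with ci ∪ cj
        del clusters[j]
    return merges
```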

Page 33:

Dendrogram: Document Example

As clusters agglomerate, docs likely to fall into a hierarchy of “topics” or concepts.

[Figure: dendrogram over documents d1–d5 — d1 and d2 merge; d4 and d5 merge; d3 then joins {d4, d5}.]

Page 34:

Key notion: cluster representative

We want a notion of a representative point in a cluster, to represent the location of each cluster

The representative should be some sort of "typical" or central point in the cluster, e.g.:
the point inducing the smallest radius over the docs in the cluster;
the point with the smallest squared distances, etc.;
the point that is the "average" of all docs in the cluster: the centroid or center of gravity.

Measure intercluster distances by the distances between centroids.

Page 35:

Example: n=6, k=3, closest pair of centroids

[Figure: six points d1–d6; the centroid after the first step and the centroid after the second step are marked.]

Page 36:

Outliers in centroid computation

Can ignore outliers when computing centroid. What is an outlier?

Lots of statistical definitions, e.g.: moment of point to centroid > M × some cluster moment, for some threshold M (say 10).

[Figure: a cluster with its centroid and a distant outlier.]

Page 37:

Closest pair of clusters

Many variants of defining the closest pair of clusters:

Single-link: similarity of the most cosine-similar pair.
Complete-link: similarity of the "furthest" points, the least cosine-similar pair.
Centroid: clusters whose centroids (centers of gravity) are the most cosine-similar.
Average-link: average cosine between pairs of elements.

Page 38:

Single Link Agglomerative Clustering

Use maximum similarity of pairs:

sim(ci, cj) = max_{x ∈ ci, y ∈ cj} sim(x, y)

Can result in "straggly" (long and thin) clusters due to the chaining effect.

After merging ci and cj, the similarity of the resulting cluster to another cluster ck is:

sim((ci ∪ cj), ck) = max(sim(ci, ck), sim(cj, ck))

Page 39:

Single Link Example

Page 40:

Complete Link Agglomerative Clustering

Use minimum similarity of pairs:

sim(ci, cj) = min_{x ∈ ci, y ∈ cj} sim(x, y)

Makes "tighter," more spherical clusters that are typically preferable.

After merging ci and cj, the similarity of the resulting cluster to another cluster ck is:

sim((ci ∪ cj), ck) = min(sim(ci, ck), sim(cj, ck))
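A small sketch of the merge-time bookkeeping these two update rules imply, assuming a cluster-by-cluster cosine similarity matrix S (our own framing, not code from the slides):

```python
import numpy as np

def merge_clusters(S: np.ndarray, i: int, j: int, single_link: bool) -> np.ndarray:
    """Merge clusters i and j: row/column i becomes the merged cluster, j is removed."""
    combine = np.maximum if single_link else np.minimum   # max = single link, min = complete link
    S[i, :] = combine(S[i, :], S[j, :])
    S[:, i] = S[i, :]
    S = np.delete(np.delete(S, j, axis=0), j, axis=1)
    np.fill_diagonal(S, 1.0)   # self-similarity of a cluster with itself
    return S
```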

Page 41:

Complete Link Example

Page 42:

Computational Complexity

In the first iteration, all HAC methods need to compute the similarity of all pairs of n individual instances, which is O(n²).
In each of the subsequent n − 2 merging iterations, compute the distance between the most recently created cluster and all other existing clusters.
In order to maintain an overall O(n²) performance, computing the similarity to each cluster must be done in constant time.
Often O(n³) if done naively, or O(n² log n) if done more cleverly.

Page 43:

Group Average Agglomerative Clustering

Use average similarity across all pairs within the merged cluster to measure the similarity of two clusters.

Compromise between single and complete link.

Two options:
Averaged across all ordered pairs in the merged cluster.
Averaged over all pairs between the two original clusters.
No clear difference in efficacy.

sim(ci, cj) = (1 / (|ci ∪ cj| (|ci ∪ cj| − 1))) Σ_{x ∈ ci ∪ cj} Σ_{y ∈ ci ∪ cj, y ≠ x} sim(x, y)

Page 44:

Computing Group Average Similarity

Assume cosine similarity and normalized vectors with unit length.

Always maintain sum of vectors in each cluster.

Compute similarity of clusters in constant time:

s(cj) = Σ_{x ∈ cj} x

sim(ci, cj) = ((s(ci) + s(cj)) · (s(ci) + s(cj)) − (|ci| + |cj|)) / ((|ci| + |cj|) (|ci| + |cj| − 1))
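A direct sketch of this constant-time computation (names are ours): keep the sum vector s(c) and the size of each cluster; with unit-length vectors every self-similarity equals 1, which is what the subtracted term removes:

```python
import numpy as np

def sum_vector(cluster_docs: np.ndarray) -> np.ndarray:
    return cluster_docs.sum(axis=0)          # s(c); maintained incrementally in practice

def group_average_sim(s_i: np.ndarray, n_i: int, s_j: np.ndarray, n_j: int) -> float:
    s, n = s_i + s_j, n_i + n_j
    # The dot product of the combined sum vector with itself counts all ordered pairs,
    # including the n self-pairs, each contributing 1 for unit-length vectors.
    return float((np.dot(s, s) - n) / (n * (n - 1)))
```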

Page 45:

Medoid as Cluster Representative

The centroid does not have to be a document.
Medoid: a cluster representative that is one of the documents (e.g., the document closest to the centroid).

One reason this is useful: consider the representative of a large cluster (>1K docs).
The centroid of this cluster will be a dense vector; the medoid of this cluster will be a sparse vector.

Compare: mean/centroid vs. median/medoid.
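A minimal sketch of one common way to pick a medoid, the member document closest to the centroid (the slide only requires the representative to be an actual document):

```python
import numpy as np

def medoid(members: np.ndarray) -> int:
    """Return the index (within `members`) of the document closest to the centroid."""
    centroid = members.mean(axis=0)
    return int(np.linalg.norm(members - centroid, axis=1).argmin())
```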

Page 46:

Efficiency: “Using approximations”

In the standard algorithm, we must find the closest pair of centroids at each step.

Approximation: instead, find a nearly closest pair.
Use some data structure that makes this approximation easier to maintain.
Simplistic example: maintain the closest pair based on distances in a projection onto a random line.

Page 47:

Major issue - labeling

After the clustering algorithm finds clusters, how can they be useful to the end user?

Need a pithy label for each cluster.
In search results, say "Animal" or "Car" in the jaguar example.
In topic trees (Yahoo!), need navigational cues.
Often done by hand, a posteriori.

Page 48:

How to Label Clusters

Show titles of typical documents:
Titles are easy to scan; authors create them for quick scanning!
But you can only show a few titles, which may not fully represent the cluster.

Show words/phrases prominent in the cluster:
More likely to fully represent the cluster.
Use distinguishing words/phrases: differential labeling.

Page 49:

Labeling

Common heuristic: list the 5-10 most frequent terms in the centroid vector. Drop stop-words; stem.

Differential labeling by frequent terms:
Within a collection on "Computers", all clusters have the word "computer" as a frequent term.
Discriminant analysis of centroids.

Perhaps better: a distinctive noun phrase.
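A hedged sketch of the frequent-terms heuristic, assuming a centroid vector indexed by a vocabulary list (names and parameters are ours):

```python
import numpy as np

def label_cluster(centroid: np.ndarray, vocabulary: list[str],
                  stop_words: set[str], k: int = 5) -> list[str]:
    label = []
    for idx in np.argsort(-centroid):          # heaviest centroid terms first
        term = vocabulary[idx]
        if term not in stop_words:             # drop stop-words (stemming omitted here)
            label.append(term)
        if len(label) == k:
            break
    return label
```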

Page 50:

What is a Good Clustering?

Internal criterion: a good clustering will produce high-quality clusters in which:
the intra-class (that is, intra-cluster) similarity is high;
the inter-class similarity is low.

The measured quality of a clustering depends on both the document representation and the similarity measure used

Page 51:

External criteria for clustering quality

Quality measured by its ability to discover some or all of the hidden patterns or latent classes in gold standard data

Assesses a clustering with respect to ground truth … requires labeled data.
Assume documents with C gold-standard classes, while our clustering algorithm produces K clusters ω1, ω2, …, ωK, where ωi has ni members.

Page 52:

External Evaluation of Cluster Quality

Simple measure: purity, the ratio between the count of the dominant class in cluster ωi and the size of cluster ωi:

Purity(ωi) = (1/ni) max_j |ωi ∩ cj|,  where cj ranges over the C gold-standard classes

Others are entropy of classes in clusters (or mutual information between classes and clusters).

Page 53:

Purity example

[Figure: three clusters of labeled points; per-class counts are (5, 1, 0) in Cluster I, (1, 4, 1) in Cluster II, and (2, 0, 3) in Cluster III.]

Cluster I: Purity = 1/6 · max(5, 1, 0) = 5/6
Cluster II: Purity = 1/6 · max(1, 4, 1) = 4/6
Cluster III: Purity = 1/5 · max(2, 0, 3) = 3/5
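A small sketch computing these numbers from a cluster-by-class count table (the counts below are read off the example figure):

```python
import numpy as np

def purities(counts: np.ndarray):
    """counts[i, j] = number of docs of gold class j in cluster i."""
    per_cluster = counts.max(axis=1) / counts.sum(axis=1)
    overall = counts.max(axis=1).sum() / counts.sum()
    return per_cluster, overall

counts = np.array([[5, 1, 0],    # Cluster I   -> 5/6
                   [1, 4, 1],    # Cluster II  -> 4/6
                   [2, 0, 3]])   # Cluster III -> 3/5
print(purities(counts))          # overall purity = (5 + 4 + 3) / 17 ≈ 0.71
```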

Page 54:

Rand Index

Number of point pairs:

                                     Same cluster in clustering    Different clusters in clustering
Same class in ground truth                        A                              C
Different classes in ground truth                 B                              D

Page 55:

Rand index: symmetric version

RI = (A + D) / (A + B + C + D)

P = A / (A + B)
R = A / (A + C)

Compare with standard precision and recall.

Page 56:

Rand Index example: 0.68

Number of point pairs:

                                     Same cluster in clustering    Different clusters in clustering
Same class in ground truth                     20 (A)                          24 (C)
Different classes in ground truth              20 (B)                          72 (D)

RI = (20 + 72) / (20 + 20 + 24 + 72) = 92 / 136 ≈ 0.68
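A small sketch computing the Rand index and the pair-based precision/recall from these counts (A, B, C, D as defined in the table two slides back):

```python
def rand_index(a: int, b: int, c: int, d: int) -> float:
    return (a + d) / (a + b + c + d)

def pair_precision(a: int, b: int) -> float:
    return a / (a + b)

def pair_recall(a: int, c: int) -> float:
    return a / (a + c)

# A=20 (same class, same cluster), B=20, C=24, D=72 from the table above.
print(rand_index(20, 20, 24, 72))                  # ≈ 0.68
print(pair_precision(20, 20), pair_recall(20, 24)) # 0.5, ≈ 0.45
```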

Page 57:

Final word and resources

In clustering, clusters are inferred from the data without human input (unsupervised learning)

However, in practice, it’s a bit less clear: there are many ways of influencing the outcome of clustering: number of clusters, similarity measure, representation of documents, . . .

Page 58:

SKIP WHAT FOLLOWS

Page 59:

Term vs. document space

So far, we clustered docs based on their similarities in term space.
For some applications, e.g., topic analysis for inducing navigation structures, we can "dualize":
use docs as axes;
represent (some) terms as vectors;
proximity based on co-occurrence of terms in docs;
now clustering terms, not docs.

Page 60:

Term vs. document space

Cosine computation:
Constant for docs in term space.
Grows linearly with corpus size for terms in doc space.

Cluster labeling:
Clusters have clean descriptions in terms of noun phrase co-occurrence.

Application of term clusters.

Page 61:

Multi-lingual docs

E.g., Canadian government docs: every doc in English and its equivalent in French.
Must cluster by concepts rather than by language.

Simplest approach: pad docs in one language with dictionary equivalents in the other; thus each doc has a representation in both languages.
Axes are terms in both languages.

Page 62:

Feature selection

Which terms to use as axes for the vector space?
IDF is a form of feature selection.
It can exaggerate noise, e.g., mis-spellings.
Better to use the highest-weight mid-frequency words – the most discriminating terms.

Pseudo-linguistic heuristics, e.g.:
drop stop-words;
stemming/lemmatization;
use only nouns/noun phrases.

Good clustering should "figure out" some of these.

Page 63:

Evaluation of clustering

Perhaps the most substantive issue in data mining in general: how do you measure goodness?
Most measures focus on computational efficiency: time and space.
For application of clustering to search: measure retrieval effectiveness.

Page 64:

Approaches to evaluating

Anecdotal.
Ground "truth" comparison:
Cluster retrieval.
Purely quantitative measures:
Probability of generating the clusters found.
Average distance between cluster members.
Microeconomic / utility.

Page 65:

Anecdotal evaluation

Probably the commonest (and surely the easiest): "I wrote this clustering algorithm and look what it found!"
No benchmarks, no comparison possible.
Any clustering algorithm will pick up the easy stuff, like partitioning by language.
Generally, unclear scientific value.

Page 66:

Ground “truth” comparison

Take a union of docs from a taxonomy and cluster: Yahoo!, ODP, newspaper sections, …
Compare clustering results to a baseline: e.g., 80% of the clusters found map "cleanly" to taxonomy nodes.

But is it the "right" answer?
There can be several equally right answers.
For the docs given, the static prior taxonomy may be incomplete or wrong in places.
The clustering algorithm may have gotten right things that are not in the static taxonomy.

"Subjective"

Page 67:

Microeconomic viewpoint

Anything - including clustering - is only as good as the economic utility it provides

For clustering: net economic gain produced by an approach (vs. another approach)

Strive for a concrete optimization problem.
Examples:
recommendation systems;
clock time for interactive search (expensive).

Page 68:

Evaluation example: Cluster retrieval

Ad-hoc retrieval:
Cluster docs in the returned set.
Identify the best cluster and only retrieve docs from it.
How do various clustering methods affect the quality of what's retrieved?

Concrete measure of quality: precision as measured by user judgements for these queries.
Done with TREC queries.

Page 69:

Evaluation

Compare two IR algorithms:
1. send query, present ranked results;
2. send query, cluster results, present clusters.

Experiment was simulated (no users).
Results were clustered into 5 clusters.
Clusters were ranked according to the percentage of relevant documents.
Documents within clusters were ranked according to similarity to the query.
