
CEN474 Data Mining, Öğr. Gör. Esra Dinçer

1

CLUSTER ANALYSIS

Resources: Introduction to Data Mining, Tan, Steinbach, Kumar; Data Mining: Concepts and Techniques, Han, Kamber


What is Cluster Analysis?

• Finding groups of objects such that the objects in a group will be similar (or related) to one another and different from (or unrelated to) the objects in other groups

– Inter-cluster distances are maximized; intra-cluster distances are minimized


Cluster Analysis

Cluster analysis is a way to examine similarities and dissimilarities of objects. Data often fall naturally into groups, or clusters, where the characteristics of objects in the same cluster are similar and the characteristics of objects in different clusters are dissimilar.


Applications of Cluster Analysis

• Understanding
– Group related documents for browsing, group genes and proteins that have similar functionality, or group stocks with similar price fluctuations

• Summarization
– Reduce the size of large data sets

   Discovered Clusters                                      Industry Group

1  Applied-Matl-DOWN, Bay-Network-Down, 3-COM-DOWN,
   Cabletron-Sys-DOWN, CISCO-DOWN, HP-DOWN,
   DSC-Comm-DOWN, INTEL-DOWN, LSI-Logic-DOWN,
   Micron-Tech-DOWN, Texas-Inst-Down, Tellabs-Inc-Down,
   Natl-Semiconduct-DOWN, Oracl-DOWN, SGI-DOWN,
   Sun-DOWN                                                 Technology1-DOWN

2  Apple-Comp-DOWN, Autodesk-DOWN, DEC-DOWN,
   ADV-Micro-Device-DOWN, Andrew-Corp-DOWN,
   Computer-Assoc-DOWN, Circuit-City-DOWN,
   Compaq-DOWN, EMC-Corp-DOWN, Gen-Inst-DOWN,
   Motorola-DOWN, Microsoft-DOWN, Scientific-Atl-DOWN       Technology2-DOWN

3  Fannie-Mae-DOWN, Fed-Home-Loan-DOWN,
   MBNA-Corp-DOWN, Morgan-Stanley-DOWN                      Financial-DOWN

4  Baker-Hughes-UP, Dresser-Inds-UP, Halliburton-HLD-UP,
   Louisiana-Land-UP, Phillips-Petro-UP, Unocal-UP,
   Schlumberger-UP                                          Oil-UP

Clustering precipitation in Australia


Clustering: Multidisciplinary Efforts

• Image Processing and Pattern Recognition

• Spatial Data Analysis
– Create thematic maps in GIS by clustering feature spaces
– Detect spatial clusters, or use clustering in other spatial mining tasks

• Economic Science (especially market research)

• WWW
– Document classification

– Cluster Weblog data to discover groups of similar access patterns


Clustering Applications: Some Examples

• Marketing: Help marketers discover distinct groups in their customer bases, and then use this knowledge to develop targeted marketing programs

• Land use: Identification of areas of similar land use in an earth observation database

• Insurance: Identifying groups of motor insurance policy holders with a high average claim cost

• City planning: Identifying groups of houses according to their house type, value, and geographical location

• Earthquake studies: Observed earthquake epicenters should be clustered along continent faults


Quality: What Is Good Clustering?

• A good clustering method will produce high-quality clusters
– high intra-class similarity: cohesive within clusters
– low inter-class similarity: distinctive between clusters

• The quality of a clustering result depends on both the similarity measure used by the method and its implementation

• The quality of a clustering method is also measured by its ability to discover some or all of the hidden patterns


Measure the Quality of Clustering

• Dissimilarity/Similarity metric
– Similarity is expressed in terms of a distance function, typically a metric: d(i, j)
– The definitions of distance functions are usually rather different for interval-scaled, boolean, categorical, ordinal, ratio, and vector variables
– Weights should be associated with different variables based on applications and data semantics

• Quality of clustering:
– There is usually a separate "quality" function that measures the "goodness" of a cluster.
– It is hard to define "similar enough" or "good enough"
• The answer is typically highly subjective


Distance Measures for Different Kinds of Data

• Numerical (interval)-based:
– Minkowski Distance
– Special cases: Euclidean (L2 norm), Manhattan (L1 norm)

• Binary variables:
– symmetric vs. asymmetric (Jaccard coeff.)

• Nominal variables: # of mismatches
• Ordinal variables: treated like interval-based
• Ratio-scaled variables: apply log-transformation first
• Vectors: cosine measure
• Mixed variables: weighted combinations
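These measures are easy to state concretely. The deck's worked examples use MATLAB, but purely as an illustration here is a small Python sketch of three of them; the function names are mine, not from the lecture:

```python
import math

def minkowski(x, y, p):
    """Minkowski distance; p=1 gives Manhattan (L1), p=2 gives Euclidean (L2)."""
    return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1.0 / p)

def cosine_similarity(x, y):
    """Cosine measure for vector data: dot product over the product of norms."""
    dot = sum(a * b for a, b in zip(x, y))
    nx = math.sqrt(sum(a * a for a in x))
    ny = math.sqrt(sum(b * b for b in y))
    return dot / (nx * ny)

def jaccard(x, y):
    """Jaccard coefficient for asymmetric binary variables:
    matches on 1s over all positions that are not a 0-0 match."""
    m11 = sum(1 for a, b in zip(x, y) if a == 1 and b == 1)
    m_diff = sum(1 for a, b in zip(x, y) if a != b)
    return m11 / (m11 + m_diff)
```

Note that the Jaccard version deliberately ignores 0-0 matches, which is what makes it the asymmetric choice; a symmetric binary measure would count them as agreements.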


Major Clustering Approaches I

• Partitioning approach:
– Construct various partitions and then evaluate them by some criterion, e.g., minimizing the sum of squared errors
– Typical methods: k-means, k-medoids, CLARANS

• Hierarchical approach:
– Create a hierarchical decomposition of the set of data (or objects) using some criterion
– Typical methods: DIANA, AGNES, BIRCH, ROCK, CHAMELEON

• Density-based approach:
– Based on connectivity and density functions
– Typical methods: DBSCAN, OPTICS, DenClue

• Grid-based approach:
– Based on a multiple-level granularity structure
– Typical methods: STING, WaveCluster, CLIQUE


Major Clustering Approaches II

• Model-based:
– A model is hypothesized for each of the clusters, and the method tries to find the best fit of the data to the given model
– Typical methods: EM, SOM, COBWEB

• Frequent pattern-based:
– Based on the analysis of frequent patterns
– Typical methods: p-Cluster

• User-guided or constraint-based:
– Clustering by considering user-specified or application-specific constraints
– Typical methods: COD (obstacles), constrained clustering

• Link-based clustering:
– Objects are often linked together in various ways
– Massive links can be used to cluster objects: SimRank, LinkClus


What is not Cluster Analysis?

• Supervised classification
– Has class label information

• Simple segmentation
– Dividing students into different registration groups alphabetically, by last name

• Results of a query
– Groupings are a result of an external specification


Notion of a Cluster can be Ambiguous

How many clusters? [Figure: the same set of points interpreted as two, four, or six clusters]


Types of Clusterings

• A clustering is a set of clusters

• Important distinction between hierarchical and partitional sets of clusters

• Partitional Clustering
– A division of data objects into non-overlapping subsets (clusters) such that each data object is in exactly one subset

• Hierarchical clustering
– A set of nested clusters organized as a hierarchical tree


Partitional Clustering

Original Points A Partitional Clustering


Hierarchical Clustering

[Figure: nested groupings of points p1–p4, shown in two ways]

Traditional Hierarchical Clustering

Non-traditional Hierarchical Clustering Non-traditional Dendrogram

Traditional Dendrogram


Types of Clusters

• Well-separated clusters

• Center-based clusters

• Contiguous clusters

• Density-based clusters

• Property or Conceptual

• Described by an Objective Function


Types of Clusters: Well-Separated

• Well-Separated Clusters:
– A cluster is a set of points such that any point in a cluster is closer (or more similar) to every other point in the cluster than to any point not in the cluster.

3 well-separated clusters


Types of Clusters: Center-Based

• Center-based
– A cluster is a set of objects such that an object in a cluster is closer (more similar) to the "center" of its own cluster than to the center of any other cluster
– The center of a cluster is often a centroid, the average of all the points in the cluster, or a medoid, the most "representative" point of a cluster

4 center-based clusters


Types of Clusters: Contiguity-Based

• Contiguous Cluster (Nearest neighbor or Transitive)
– A cluster is a set of points such that a point in a cluster is closer (or more similar) to one or more other points in the cluster than to any point not in the cluster.

8 contiguous clusters


Types of Clusters: Density-Based

• Density-based
– A cluster is a dense region of points, separated from other regions of high density by regions of low density.
– Used when the clusters are irregular or intertwined, and when noise and outliers are present.

6 density-based clusters


Types of Clusters: Conceptual Clusters

• Shared Property or Conceptual Clusters
– Finds clusters that share some common property or represent a particular concept.

2 Overlapping Circles


Types of Clusters: Objective Function

• Clusters Defined by an Objective Function
– Finds clusters that minimize or maximize an objective function.
– Enumerate all possible ways of dividing the points into clusters and evaluate the 'goodness' of each potential set of clusters by using the given objective function. (NP-hard)
– Can have global or local objectives.
• Hierarchical clustering algorithms typically have local objectives
• Partitional algorithms typically have global objectives
– A variation of the global objective function approach is to fit the data to a parameterized model.
• Parameters for the model are determined from the data.
• Mixture models assume that the data is a 'mixture' of a number of statistical distributions.


Types of Clusters: Objective Function

• Map the clustering problem to a different domain and solve a related problem in that domain
– The proximity matrix defines a weighted graph, where the nodes are the points being clustered and the weighted edges represent the proximities between points
– Clustering is equivalent to breaking the graph into connected components, one for each cluster.
– Want to minimize the edge weight between clusters and maximize the edge weight within clusters


Characteristics of the Input Data Are Important

• Type of proximity or density measure
– This is a derived measure, but central to clustering

• Sparseness
– Dictates type of similarity
– Adds to efficiency

• Attribute type
– Dictates type of similarity

• Type of data
– Dictates type of similarity
– Other characteristics, e.g., autocorrelation

• Dimensionality
• Noise and outliers
• Type of distribution


Clustering Algorithms

• K-means and its variants

• Hierarchical clustering

• Density-based clustering


K-means Clustering

• Partitional clustering approach
• Each cluster is associated with a centroid (center point)
• Each point is assigned to the cluster with the closest centroid
• Number of clusters, K, must be specified
• The basic algorithm is very simple
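The basic algorithm is indeed simple enough to write in a few lines. The deck's own examples later use MATLAB's kmeans; as an illustration only, here is a minimal pure-Python sketch (the function names are mine, not the lecture's code):

```python
import random

def dist2(p, q):
    # squared Euclidean distance between two points
    return sum((a - b) ** 2 for a, b in zip(p, q))

def kmeans(points, k, max_iter=100, seed=0):
    """Basic k-means: pick k initial centroids at random, then alternate
    (1) assign every point to its closest centroid and
    (2) recompute each centroid as the mean of its assigned points,
    until the assignment stops changing."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    labels = None
    for _ in range(max_iter):
        new_labels = [min(range(k), key=lambda j: dist2(p, centroids[j]))
                      for p in points]
        if new_labels == labels:   # converged: no point changed cluster
            break
        labels = new_labels
        for j in range(k):
            members = [p for p, l in zip(points, labels) if l == j]
            if members:            # keep the old centroid if a cluster empties
                centroids[j] = tuple(sum(c) / len(members)
                                     for c in zip(*members))
    return labels, centroids
```

The `if members` guard papers over the empty-cluster case; the deck discusses better strategies for that on a later slide.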


The K-Means Clustering Method

Example

[Figure: a worked k-means example spanning several slides; the images are not reproduced here]

K-means Clustering – Details

• Initial centroids are often chosen randomly.
– Clusters produced vary from one run to another.
• The centroid is (typically) the mean of the points in the cluster.
• 'Closeness' is measured by Euclidean distance, cosine similarity, correlation, etc.
• K-means will converge for the common similarity measures mentioned above.
• Most of the convergence happens in the first few iterations.
– Often the stopping condition is changed to 'Until relatively few points change clusters'
• Complexity is O(n * K * I * d)
– n = number of points, K = number of clusters, I = number of iterations, d = number of attributes


Evaluating K-means Clusters

• Most common measure is Sum of Squared Error (SSE)
– For each point, the error is the distance to the nearest cluster
– To get SSE, we square these errors and sum them:

  SSE = Σ_{i=1}^{K} Σ_{x ∈ C_i} dist(m_i, x)²

– x is a data point in cluster C_i and m_i is the representative point for cluster C_i
• It can be shown that m_i corresponds to the center (mean) of the cluster
– Given two clusterings, we can choose the one with the smallest error
– One easy way to reduce SSE is to increase K, the number of clusters
• A good clustering with smaller K can have a lower SSE than a poor clustering with higher K
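The SSE formula can be checked with a direct computation; a minimal Python sketch (names are my own, not from the slides):

```python
def sse(points, labels, centroids):
    """Sum of Squared Error: for each point, take the squared distance
    to the centroid of its assigned cluster, then sum over all points."""
    return sum(
        sum((a - b) ** 2 for a, b in zip(p, centroids[l]))
        for p, l in zip(points, labels)
    )
```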


Importance of Choosing Initial Centroids

[Figure: scatter plots of the same data over Iterations 1–6 of k-means for one choice of initial centroids; axes x from -2 to 2, y from 0 to 3]


Importance of Choosing Initial Centroids

[Figure: Iterations 1–6 for another choice of initial centroids; same axes]


Importance of Choosing Initial Centroids

[Figure: Iterations 1–5 for another choice of initial centroids; same axes]


Importance of Choosing Initial Centroids

[Figure: Iterations 1–5 for yet another choice of initial centroids; same axes]


10 Clusters Example

[Figure: Iterations 1–4 of k-means on the 10-cluster data; axes x from 0 to 20, y from -6 to 8]

Starting with two initial centroids in one cluster of each pair of clusters


10 Clusters Example

[Figure: Iterations 1–4 on the 10-cluster data; same axes]

Starting with two initial centroids in one cluster of each pair of clusters


10 Clusters Example

Starting with some pairs of clusters having three initial centroids, while others have only one.

[Figure: Iterations 1–4 on the 10-cluster data; same axes]


10 Clusters Example

Starting with some pairs of clusters having three initial centroids, while others have only one.

[Figure: Iterations 1–4 on the 10-cluster data; same axes]

CEN474 Data Mining, Öğr. Gör. Esra Dinçer

44

Solutions to Initial Centroids Problem

• Multiple runs
– Helps, but probability is not on your side
• Sample and use hierarchical clustering to determine initial centroids
• Select more than k initial centroids and then select among these initial centroids
– Select the most widely separated
• Postprocessing
• Bisecting K-means
– Not as susceptible to initialization issues
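One common way to realize "select the most widely separated" among more-than-k candidates is a farthest-first sweep; a hypothetical Python sketch (my own illustration, not the lecture's code):

```python
def _d2(p, q):
    # squared Euclidean distance between two points
    return sum((a - b) ** 2 for a, b in zip(p, q))

def farthest_first_centroids(points, k):
    """'Widely separated' initialization: start from the first point,
    then repeatedly add the candidate whose distance to its nearest
    already-chosen centroid is largest."""
    cents = [points[0]]
    while len(cents) < k:
        cents.append(max(points, key=lambda p: min(_d2(p, c) for c in cents)))
    return cents
```

In practice this sweep is run over a sample of the data, since an outlier is by definition far from everything and would otherwise be picked early.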


Handling Empty Clusters

• Basic K-means algorithm can yield empty clusters

• Several strategies
– Choose the point that contributes most to SSE
– Choose a point from the cluster with the highest SSE
– If there are several empty clusters, the above can be repeated several times.


Updating Centers Incrementally

• In the basic K-means algorithm, centroids are updated after all points are assigned to a centroid

• An alternative is to update the centroids after each assignment (incremental approach)
– Each assignment updates zero or two centroids
– More expensive
– Introduces an order dependency
– Never get an empty cluster
– Can use "weights" to change the impact
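The "zero or two centroids" point can be made concrete with a running-mean update; a hypothetical sketch (names are mine, and it assumes cluster sizes are tracked alongside the centroids):

```python
def move_point(p, src, dst, cents, sizes):
    """Incremental centroid maintenance: reassigning point p from cluster
    src to cluster dst touches only those two centroids (zero centroids
    change if src == dst). Each centroid is a running mean, so it can be
    updated without revisiting the cluster's members."""
    if src == dst:
        return  # assignment unchanged: zero centroids updated
    if sizes[src] > 1:  # with more than one member, shrink src's mean
        cents[src] = tuple((c * sizes[src] - x) / (sizes[src] - 1)
                           for c, x in zip(cents[src], p))
    sizes[src] -= 1
    cents[dst] = tuple((c * sizes[dst] + x) / (sizes[dst] + 1)
                       for c, x in zip(cents[dst], p))
    sizes[dst] += 1
```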


Pre-processing and Post-processing

• Pre-processing
– Normalize the data
– Eliminate outliers

• Post-processing
– Eliminate small clusters that may represent outliers
– Split 'loose' clusters, i.e., clusters with relatively high SSE
– Merge clusters that are 'close' and that have relatively low SSE
– Can use these steps during the clustering process
• ISODATA


Limitations of K-means

• K-means has problems when clusters are of differing
– Sizes
– Densities
– Non-globular shapes

• K-means has problems when the data contains outliers.


Limitations of K-means: Differing Sizes

Original Points K-means (3 Clusters)


Limitations of K-means: Differing Density

Original Points K-means (3 Clusters)


Limitations of K-means: Non-globular Shapes

Original Points K-means (2 Clusters)


Overcoming K-means Limitations

Original Points K-means Clusters

One solution is to use many clusters: find parts of the natural clusters, then put them together afterwards.


Overcoming K-means Limitations

Original Points K-means Clusters


Overcoming K-means Limitations

Original Points K-means Clusters


K-means Application Using Matlab


Remember the dispersion of sepal lengths


Code for the dispersion

% sepal lengths of the three flowers
plot(meas(1:50,1), 'b+')
hold on;
plot(meas(51:100,1), 'g*')
hold on;
plot(meas(101:150,1), 'rx')

xlabel('Count');
ylabel('Length');
title('Sepal lengths of the three flowers');


K-means in Matlab

• IDX = KMEANS(X, K) partitions the points in the N-by-P data matrix X into K clusters.
• This partition minimizes the sum, over all clusters, of the within-cluster sums of point-to-cluster-centroid distances. Rows of X correspond to points, columns correspond to variables.
• KMEANS returns an N-by-1 vector IDX containing the cluster indices of each point.
• By default, KMEANS uses squared Euclidean distances.


Displaying Clusters in Matlab

• SILHOUETTE: silhouette plot for clustered data.
• SILHOUETTE(X, CLUST) plots cluster silhouettes for the N-by-P data matrix X, with clusters defined by CLUST.
• S = SILHOUETTE(X, CLUST) returns the silhouette values in the N-by-1 vector S.
• The silhouette value for each point is a measure of how similar that point is to points in its own cluster vs. points in other clusters, and ranges from -1 to +1.
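The definition just given translates directly into code. As an illustration only (not MATLAB's implementation), a Python sketch that assumes every cluster has at least two points; the names are mine:

```python
import math

def silhouette_values(points, labels):
    """For point i: s = (b - a) / max(a, b), where a is the mean distance
    to the other points of i's own cluster (cohesion) and b is the
    smallest mean distance to the points of any other cluster
    (separation). Values range from -1 to +1."""
    clusters = set(labels)
    values = []
    for i, (p, own) in enumerate(zip(points, labels)):
        def mean_dist(c):
            ds = [math.dist(p, q) for j, q in enumerate(points)
                  if labels[j] == c and j != i]
            return sum(ds) / len(ds)
        a = mean_dist(own)
        b = min(mean_dist(c) for c in clusters if c != own)
        values.append((b - a) / max(a, b))
    return values
```

Values near +1 mean the point sits well inside its cluster; values near 0 mean it lies between clusters; negative values suggest it may be assigned to the wrong cluster.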


Clustering the sepal lengths

figure
idx = kmeans(meas(:,1), 2);
silhouette(meas(:,1), idx)

figure
idx = kmeans(meas(:,1), 3);
silhouette(meas(:,1), idx)

figure
idx = kmeans(meas(:,1), 4);
silhouette(meas(:,1), idx)


Example

• File → New → Variable (opens the array editor)

• Write these numbers and partition these points into 2 clusters


Result


Example

• Try to partition these points into 3 clusters.

• Change 35 to 40 and try again


Result