Data Clustering
Data Mining - Unsupervised Machine Learning
Prof. Yannis Velegrakis, Utrecht University
[email protected]
https://velgias.github.io
Disclaimer
The following set of slides originates from a number of different presentations and courses by the following people:
- Yannis Velegrakis (Utrecht University)
- Jeff Ullman (Stanford University)
- Bill Howe (University of Washington)
- Martin Fowler (ThoughtWorks)
- Ekaterini Ioannou (Tilburg University)
Copyright stays with the authors. No distribution is allowed without prior permission from the authors.
Use Cases
- Tourism: Understanding Points of Interest
  - Transportation
  - Shops / Services
  - Security / Safety
- Security: Understanding High-Risk Areas
- Population Understanding: Types of Citizens and their Needs
- Health: High-Risk Areas (the old story of the city of London)
- Software Usage in the Offices:
  - Identify the software packages that are most needed
- Home Services (Gas, Water, Electricity) Consumption
- Social Media: What the citizens want/think
- Telecommunication Data: Identify Families
- Other types of Data:
  - Segments: Transportation Routes, Tourism Paths, Supermarket Shopping Plans
  - Graphs: Communities
  - Streams: City Sensors
High Dimensional Data
- Given a cloud of data points, we want to understand its structure
The Problem of Clustering
- Given a set of points, with a notion of distance between points, group the points into some number of clusters, so that:
  - Members of a cluster are close/similar to each other
  - Members of different clusters are dissimilar
- Usually:
  - Points are in a high-dimensional space
  - Similarity is defined using a distance measure
    - Euclidean, cosine, Jaccard, edit distance, …
Example: Clusters & Outliers
[Figure: scatter plot of points forming several dense groups; labels mark one group as a cluster and one isolated point as an outlier]
Clustering is a hard problem!
Why is it hard?
- Clustering in two dimensions looks easy
- Clustering small amounts of data looks easy
- And in most cases, looks are not deceiving
- Many applications involve not 2, but 10 or 10,000 dimensions
- High-dimensional spaces look different: almost all pairs of points are at about the same distance
Clustering Problem: Galaxies
- A catalog of 2 billion "sky objects" represents objects by their radiation in 7 dimensions (frequency bands)
- Problem: cluster them into similar objects, e.g., galaxies, nearby stars, quasars, moons, belts, clouds, etc.
- Sloan Digital Sky Survey
Clustering Problem: Music CDs
- Intuitively: music divides into categories, and customers prefer a few categories
  - But what are categories, really?
- Represent a CD by the set of customers who bought it
- Similar CDs have similar sets of customers, and vice versa
Clustering Problem: Music CDs
Space of all CDs:
- Think of a space with one dimension for each customer
  - Values in a dimension may be 0 or 1 only
  - A CD is a point in this space (x1, x2, …, xk), where xi = 1 iff the i-th customer bought the CD
- For Amazon, the dimension is in the tens of millions
- Task: find clusters of similar CDs
Clustering Problem: Documents
Finding topics:
- Represent a document by a vector (x1, x2, …, xk), where xi = 1 iff the i-th word (in some fixed order) appears in the document
  - It actually doesn't matter if k is infinite; i.e., we don't limit the set of words
- Documents with similar sets of words may be about the same topic
Cosine, Jaccard, and Euclidean
- As with CDs, we have a choice when we think of documents as sets of words (a small sketch follows below):
  - Sets as vectors: measure similarity by the cosine distance
  - Sets as sets: measure similarity by the Jaccard distance
  - Sets as points: measure similarity by the Euclidean distance
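To make the three views concrete, here is a small sketch (my own illustration, not from the slides; the function names and toy documents are made up) computing all three measures on two tiny documents:

import math

def cosine_distance(x, y):
    """1 - cosine similarity of two equal-length numeric vectors."""
    dot = sum(a * b for a, b in zip(x, y))
    norm = math.sqrt(sum(a * a for a in x)) * math.sqrt(sum(b * b for b in y))
    return 1 - dot / norm

def jaccard_distance(s, t):
    """1 - |intersection| / |union| of two sets."""
    return 1 - len(s & t) / len(s | t)

def euclidean_distance(x, y):
    """Straight-line distance between two equal-length numeric vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

# Two toy "documents" over the vocabulary (cat, dog, fish, bird)
doc1 = [1, 1, 0, 1]           # contains cat, dog, bird
doc2 = [1, 0, 1, 1]           # contains cat, fish, bird
set1 = {"cat", "dog", "bird"}
set2 = {"cat", "fish", "bird"}

print(cosine_distance(doc1, doc2))     # sets as vectors
print(jaccard_distance(set1, set2))    # sets as sets
print(euclidean_distance(doc1, doc2))  # sets as points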
Overview: Methods of Clustering
- Hierarchical:
  - Agglomerative (bottom up):
    - Initially, each point is a cluster
    - Repeatedly combine the two "nearest" clusters into one
  - Divisive (top down):
    - Start with one cluster and recursively split it
- Point assignment:
  - Maintain a set of clusters
  - Points belong to the "nearest" cluster
Hierarchical Clustering
- Key operation: repeatedly combine the two nearest clusters
- Three important questions:
  - (1) How do you represent a cluster of more than one point?
  - (2) How do you determine the "nearness" of clusters?
  - (3) When do you stop combining clusters?
Hierarchical Clustering
- Key operation: repeatedly combine the two nearest clusters
- (1) How to represent a cluster of many points?
  - Key problem: as you merge clusters, how do you represent the "location" of each cluster, to tell which pair of clusters is closest?
  - Euclidean case: each cluster has a centroid = the average of its (data) points
- (2) How to determine the "nearness" of clusters?
  - Measure cluster distances by the distances of their centroids
Example: Hierarchical clustering
[Figure: the data points o at (0,0), (1,2), (2,1), (4,1), (5,0), (5,3) are merged hierarchically; intermediate centroids x appear at (1,1), (1.5,1.5), (4.5,0.5), (4.7,1.3), and the merge order is shown as a dendrogram]
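A minimal sketch of centroid-based agglomerative clustering on the six example points above (my own illustration, assuming plain Euclidean distance and stopping at two clusters):

import math

def centroid(points):
    """Component-wise average of a list of 2-D points."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def agglomerate(points, target_clusters=2):
    """Repeatedly merge the two clusters whose centroids are nearest."""
    clusters = [[p] for p in points]          # start: every point is its own cluster
    while len(clusters) > target_clusters:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = dist(centroid(clusters[i]), centroid(clusters[j]))
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] = clusters[i] + clusters[j]   # merge the closest pair
        del clusters[j]
    return clusters

data = [(0, 0), (1, 2), (2, 1), (4, 1), (5, 0), (5, 3)]
for c in agglomerate(data):
    print(c, "centroid:", centroid(c))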
And in the Non-Euclidean Case?
What about the non-Euclidean case?
- The only "locations" we can talk about are the points themselves
  - i.e., there is no "average" of two points
- Approach 1:
  - (1) How to represent a cluster of many points? Clustroid = the (data) point "closest" to the other points
  - (2) How do you determine the "nearness" of clusters? Treat the clustroid as if it were the centroid when computing inter-cluster distances
“Closest” Point?
- (1) How to represent a cluster of many points? Clustroid = the point "closest" to the other points
- Possible meanings of "closest":
  - Smallest maximum distance to the other points
  - Smallest average distance to the other points
  - Smallest sum of squares of distances to the other points
    - For a distance metric d, the clustroid c of cluster C is: $c = \arg\min_{c \in C} \sum_{x \in C} d(x, c)^2$
- The centroid is the average of all (data) points in the cluster; it is an "artificial" point.
- The clustroid is an existing (data) point that is "closest" to all other points in the cluster.
[Figure: a cluster of 3 data points, with the centroid shown as an artificial point X and the clustroid as one of the data points]
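A small sketch of picking a clustroid under the "smallest sum of squared distances" criterion (my own illustration; the Jaccard distance merely stands in for an arbitrary non-Euclidean metric):

def jaccard_distance(s, t):
    return 1 - len(s & t) / len(s | t)

def clustroid(cluster, d):
    """Return the member of the cluster minimizing the sum of squared
    distances to all other members, for an arbitrary distance function d."""
    return min(cluster, key=lambda c: sum(d(c, x) ** 2 for x in cluster))

docs = [{"a", "b", "c"}, {"a", "b"}, {"a", "c", "d"}, {"b", "c"}]
print(clustroid(docs, jaccard_distance))   # -> {'a', 'b', 'c'}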
Defining “Nearness” of Clusters
- (2) How do you determine the "nearness" of clusters?
  - Approach 2: intercluster distance = the minimum of the distances between any two points, one from each cluster
  - Approach 3: pick a notion of "cohesion" of clusters, e.g., the maximum distance from the clustroid
    - Merge the clusters whose union is most cohesive
Cohesion
- Approach 3.1: use the diameter of the merged cluster = the maximum distance between points in the cluster
- Approach 3.2: use the average distance between points in the cluster
- Approach 3.3: use a density-based approach
  - Take the diameter or the average distance, e.g., and divide by the number of points in the cluster
Implementation
- Naïve implementation of hierarchical clustering:
  - At each step, compute pairwise distances between all pairs of clusters, then merge
  - O(N^3)
- A careful implementation using a priority queue can reduce the time to O(N^2 log N)
  - Still too expensive for really big datasets that do not fit in memory
k-means clustering
k–means Algorithm(s)
- Assumes a Euclidean space/distance
- Start by picking k, the number of clusters
- Initialize clusters by picking one point per cluster
  - Example: pick one point at random, then k-1 other points, each as far away as possible from the previous points (see the sketch below)
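A sketch of this "farthest-first" initialization (my own illustration; the function names are hypothetical):

import math, random

def dist(a, b):
    return math.dist(a, b)   # Euclidean distance (Python 3.8+)

def init_centroids(points, k, seed=0):
    """Pick one point at random, then k-1 more, each as far as possible
    from the points already chosen."""
    random.seed(seed)
    centroids = [random.choice(points)]
    while len(centroids) < k:
        # the point whose nearest chosen centroid is farthest away
        nxt = max(points, key=lambda p: min(dist(p, c) for c in centroids))
        centroids.append(nxt)
    return centroids

data = [(0, 0), (1, 2), (2, 1), (4, 1), (5, 0), (5, 3)]
print(init_centroids(data, k=2))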
Populating Clusters
- 1) For each point, place it in the cluster whose current centroid is nearest
- 2) After all points are assigned, update the locations of the centroids of the k clusters
- 3) Reassign all points to their closest centroid
  - This sometimes moves points between clusters
- Repeat 2 and 3 until convergence (a compact sketch follows below)
  - Convergence: points don't move between clusters and the centroids stabilize
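Putting the assignment and update steps together, a compact k-means sketch (my own illustration, assuming small in-memory data; the starting centroids could come from an initialization such as the one sketched above):

import math

def dist(a, b):
    return math.dist(a, b)

def mean(points):
    return tuple(sum(v) / len(points) for v in zip(*points))

def kmeans(points, centroids, max_iter=100):
    """Repeat assignment (steps 1/3) and centroid update (step 2) until stable."""
    for _ in range(max_iter):
        # assign every point to its nearest centroid
        groups = [[] for _ in centroids]
        for p in points:
            i = min(range(len(centroids)), key=lambda i: dist(p, centroids[i]))
            groups[i].append(p)
        # update centroid locations (keep the old centroid if a group went empty)
        new_centroids = [mean(g) if g else c for g, c in zip(groups, centroids)]
        if new_centroids == centroids:      # convergence: nothing moved
            return new_centroids, groups
        centroids = new_centroids
    return centroids, groups

data = [(0, 0), (1, 2), (2, 1), (4, 1), (5, 0), (5, 3)]
print(kmeans(data, centroids=[(0, 0), (5, 3)]))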
Example: Assigning Clusters
[Figure: data points and initial centroids; clusters after round 1 of assignment]
Example: Assigning Clusters
[Figure: centroids recomputed and points reassigned; clusters after round 2]
Example: Assigning Clusters
[Figure: the final assignment; clusters at the end]
Example: Picking k
J. Leskovec, A. Rajaraman, J. Ullman: Mining of Massive Datasets, http://www.mmds.org
[Figure: k too small; many long distances to the centroid]
Example: Picking k
[Figure: k just right; distances to the centroid are rather short]
Example: Picking k
[Figure: k too large; little improvement in the average distance]
Getting the k right
How to select k?
- Try different values of k, looking at the change in the average distance to the centroid as k increases
- The average falls rapidly until the right k, then changes little (a small sketch follows the figure below)
[Figure: average distance to centroid plotted against k; the curve drops steeply and then flattens, and the "elbow" marks the best value of k]
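A sketch of this "elbow" procedure (my own illustration with a tiny self-contained k-means and made-up data): run k-means for increasing k and watch where the average distance to the nearest centroid stops improving.

import math

def dist(a, b):
    return math.dist(a, b)

def farthest_first(points, k):
    """Deterministic seeding: start at the first point, then repeatedly add
    the point farthest from all centroids chosen so far."""
    cents = [points[0]]
    while len(cents) < k:
        cents.append(max(points, key=lambda p: min(dist(p, c) for c in cents)))
    return cents

def kmeans(points, k, iters=50):
    """Tiny k-means; returns the final centroids."""
    cents = farthest_first(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            groups[min(range(k), key=lambda i: dist(p, cents[i]))].append(p)
        cents = [tuple(sum(v) / len(g) for v in zip(*g)) if g else c
                 for g, c in zip(groups, cents)]
    return cents

def avg_dist(points, cents):
    return sum(min(dist(p, c) for c in cents) for p in points) / len(points)

data = [(0, 0), (1, 2), (2, 1), (10, 10), (11, 12), (12, 11), (20, 1), (21, 2)]
for k in range(1, 6):
    print(k, round(avg_dist(data, kmeans(data, k)), 2))
# expect the average to fall sharply up to k = 3 (the data have 3 groups), then change little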
Picking the initial k points
The BFR Algorithm
Extension of k-means to larger data
BFR Algorithm
- BFR [Bradley-Fayyad-Reina] is a variant of k-means designed to handle very large (disk-resident) data sets
- Assumes that clusters are normally distributed around a centroid in a Euclidean space
  - Standard deviations in different dimensions may vary
    - Clusters are axis-aligned ellipses
- Efficient way to summarize clusters (memory required: O(clusters), not O(data))
BFR Algorithm
- Points are read from disk one main-memory-full at a time
- Most points from previous memory loads are summarized by simple statistics
- To begin, from the initial load we select the initial k centroids by some sensible approach:
  - Take k random points
  - Take a small random sample and cluster it optimally
  - Take a sample; pick a random point, and then k-1 more points, each as far from the previously selected points as possible
Three Classes of Points
Three sets of points that we keep track of:
- Discard set (DS):
  - Points close enough to a centroid to be summarized
- Compression set (CS):
  - Groups of points that are close together but not close to any existing centroid
  - These points are summarized, but not assigned to a cluster
- Retained set (RS):
  - Isolated points waiting to be assigned to a compression set
BFR: “Galaxies” Picture
[Figure: a cluster whose points are in the DS, with its centroid; compressed sets whose points are in the CS; isolated points in the RS]
- Discard set (DS): close enough to a centroid to be summarized
- Compression set (CS): summarized, but not assigned to a cluster
- Retained set (RS): isolated points
Summarizing Sets of Points
For each cluster, the discard set (DS) is summarized by (a small sketch follows the figure below):
- The number of points, N
- The vector SUM, whose i-th component is the sum of the coordinates of the points in the i-th dimension
- The vector SUMSQ, whose i-th component is the sum of the squares of the coordinates in the i-th dimension
[Figure: a cluster whose points are all in the DS, with its centroid]
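A sketch of how such a DS summary could be maintained (my own illustration; only the names N, SUM, SUMSQ come from the slide): the centroid and the per-dimension variance are recoverable from the three statistics alone, without keeping the points.

class DSSummary:
    """Summary of one discard set: N, SUM and SUMSQ per dimension."""
    def __init__(self, dims):
        self.n = 0
        self.sum = [0.0] * dims
        self.sumsq = [0.0] * dims

    def add(self, point):
        self.n += 1
        for i, x in enumerate(point):
            self.sum[i] += x
            self.sumsq[i] += x * x

    def centroid(self):
        return [s / self.n for s in self.sum]

    def variance(self):
        # var_i = SUMSQ_i / N - (SUM_i / N)^2
        return [sq / self.n - (s / self.n) ** 2 for s, sq in zip(self.sum, self.sumsq)]

ds = DSSummary(dims=2)
for p in [(1.0, 2.0), (2.0, 2.5), (1.5, 1.5)]:
    ds.add(p)
print(ds.centroid(), ds.variance())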
A Few Details…
- Q1) How do we decide whether a point is "close enough" to a cluster that we will add the point to that cluster?
- Q2) How do we decide whether two compressed sets (CS) deserve to be combined into one?
  - Compute the variance of the combined subcluster
How Close is Close Enough?
- Q1) We need a way to decide whether to put a new point into a cluster (and discard it)
- BFR suggests two ways:
  - The Mahalanobis distance is less than a threshold
  - High likelihood of the point belonging to the currently nearest centroid
Mahalanobis Distance
- Normalized Euclidean distance from the centroid
- For a point (x1, …, xd) and a centroid (c1, …, cd):
  1. Normalize in each dimension: yi = (xi - ci) / σi
  2. Take the sum of the squares of the yi
  3. Take the square root

$d(x, c) = \sqrt{\sum_{i=1}^{d} \left(\frac{x_i - c_i}{\sigma_i}\right)^2}$

σi … the standard deviation of the points in the cluster in the i-th dimension
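A direct transcription of the three steps above (my own sketch; the per-dimension standard deviations would come from a cluster summary such as the DS statistics, and the threshold of 3 is just an example):

import math

def mahalanobis(x, c, sigma):
    """Normalized Euclidean distance of point x from centroid c,
    where sigma[i] is the standard deviation in dimension i."""
    y = [(xi - ci) / si for xi, ci, si in zip(x, c, sigma)]   # step 1: normalize
    return math.sqrt(sum(yi * yi for yi in y))                # steps 2-3

# accept a point into a cluster if it lies within, say, 3 "standard deviations"
point, centroid, sigma = (2.0, 3.0), (1.0, 2.0), (0.5, 1.0)
d = mahalanobis(point, centroid, sigma)
print(d, "close enough" if d < 3 else "keep in RS/CS")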
The CURE Algorithm
Extension of k-means to clusters of arbitrary shapes
The CURE Algorithm
- Problem with BFR/k-means:
  - Assumes clusters are normally distributed in each dimension
  - And the axes are fixed; ellipses at an angle are not OK
- CURE (Clustering Using REpresentatives):
  - Assumes a Euclidean distance
  - Allows clusters to assume any shape
  - Uses a collection of representative points to represent clusters
Example: Professor’s Salaries
[Figure: scatter plot of salary vs. age, with two kinds of points (e and h) forming elongated, irregularly shaped clusters]
Starting CURE
A two-pass algorithm. Pass 1:
- 0) Pick a random sample of points that fits in main memory
- 1) Initial clusters:
  - Cluster these points hierarchically; group the nearest points/clusters
- 2) Pick representative points:
  - For each cluster, pick a sample of points, as dispersed as possible
  - From the sample, pick the representatives by moving them (say) 20% toward the centroid of the cluster
Example: Initial Clusters
[Figure: the salary vs. age sample grouped into initial clusters by hierarchical clustering]
Example: Pick Dispersed Points
[Figure: for each cluster, pick (say) 4 remote points as representatives]
Example: Pick Dispersed Points
[Figure: move the representative points (say) 20% toward the centroid of their cluster]
Finishing CURE
Pass 2:
- Now rescan the whole dataset and visit each point p in the data set
- Place it in the "closest cluster" (a compact sketch of both passes follows below)
  - Normal definition of "closest": find the closest representative to p and assign p to that representative's cluster
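A compact sketch of the two passes (my own illustration; the "4 representatives" and "move 20% toward the centroid" follow the suggestions in the slides, while the helper names and the toy sample clusters are made up):

import math

def dist(a, b):
    return math.dist(a, b)

def centroid(points):
    return tuple(sum(v) / len(points) for v in zip(*points))

def pick_representatives(cluster, n_rep=4, shrink=0.2):
    """Pass 1, step 2: pick n_rep dispersed points, then move each 20% toward the centroid."""
    c = centroid(cluster)
    reps = [max(cluster, key=lambda p: dist(p, c))]          # start from the most remote point
    while len(reps) < min(n_rep, len(cluster)):
        reps.append(max(cluster, key=lambda p: min(dist(p, r) for r in reps)))
    return [tuple(ri + shrink * (ci - ri) for ri, ci in zip(r, c)) for r in reps]

def assign(point, representatives):
    """Pass 2: put the point in the cluster of its closest representative."""
    return min(representatives, key=lambda cid: min(dist(point, r) for r in representatives[cid]))

# two hand-made sample clusters (pretend they came from hierarchical clustering of a sample)
clusters = {0: [(0, 0), (1, 2), (2, 1)], 1: [(9, 9), (10, 11), (11, 10)]}
reps = {cid: pick_representatives(pts) for cid, pts in clusters.items()}
print(assign((8, 8), reps))   # -> 1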
Summary
- Clustering: given a set of points, with a notion of distance between points, group the points into some number of clusters
- Algorithms:
  - Agglomerative hierarchical clustering:
    - Centroid and clustroid
  - k-means:
    - Initialization, picking k
  - BFR
  - CURE
DBSCAN – Density-Based Spatial Clustering of Applications with Noise
DBSCAN
Density-based clustering locates regions of high density that are separated from one another by regions of low density.
- Density = the number of points within a specified radius (Eps)
- DBSCAN is a density-based algorithm:
  - A point is a core point if it has more than a specified number of points (MinPts) within Eps
    - These are points in the interior of a cluster
  - A border point has fewer than MinPts within Eps, but is in the neighborhood of a core point
DBSCAN
- A noise point is any point that is neither a core point nor a border point
- Any two core points that are close enough, within a distance Eps of one another, are put in the same cluster
- Any border point that is close enough to a core point is put in the same cluster as the core point
- Noise points are discarded
DBSCAN: The Algorithm
- Select a point p
- Retrieve all points density-reachable from p with respect to Eps and MinPts
- If p is a core point, a cluster is formed
- If p is a border point, no points are density-reachable from p, and DBSCAN visits the next point of the database
- Continue the process until all of the points have been processed (a minimal sketch follows below)
- The result is independent of the order in which the points are processed
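A minimal DBSCAN sketch following these steps (my own illustration; Eps and MinPts are the parameters named above, and the toy data are made up):

import math

def dbscan(points, eps, min_pts):
    """Label each point with a cluster id, or -1 for noise."""
    def neighbors(i):
        return [j for j in range(len(points)) if math.dist(points[i], points[j]) <= eps]

    labels = [None] * len(points)
    cluster_id = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:          # not a core point (may later become a border point)
            labels[i] = -1
            continue
        cluster_id += 1                  # i is a core point: start a new cluster and expand it
        labels[i] = cluster_id
        queue = list(nbrs)
        while queue:
            j = queue.pop()
            if labels[j] == -1:          # previously noise -> border point of this cluster
                labels[j] = cluster_id
            if labels[j] is not None:
                continue
            labels[j] = cluster_id
            j_nbrs = neighbors(j)
            if len(j_nbrs) >= min_pts:   # j is also a core point: keep expanding
                queue.extend(j_nbrs)
    return labels

data = [(0, 0), (0, 1), (1, 0), (1, 1), (5, 5), (5, 6), (6, 5), (10, 10)]
print(dbscan(data, eps=1.5, min_pts=3))  # two clusters plus one noise point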
An example
[Figure: density-reachability example with points p, q, and o]
DBSCAN Clusters
DBSCAN Disadvantage: Irregular Density